Analysis of temporal coherence in videos for action recognition


Abstract

This paper proposes an approach to improve the performance of activity recognition methods by analyzing the coherence of the frames in the input videos and then modeling the evolution of the coherent frames, which constitute a subsequence, to learn a representation for the videos. The proposed method consists of three steps: coherence analysis, representation learning, and classification. Using two state-of-the-art datasets (Hollywood2 and HMDB51), we demonstrate that learning the evolution of subsequences instead of individual frames improves the recognition results and makes action classification faster.
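The three-step pipeline described in the abstract can be illustrated with a minimal sketch. The coherence measure, threshold, and mean-pooling representation below are illustrative assumptions, not the paper's exact method; frame features stand in for whatever descriptors the full system extracts.

```python
import numpy as np

def coherence_segments(frames, threshold=0.9):
    """Step 1 (coherence analysis): group consecutive frames into
    coherent subsequences. Coherence between adjacent frames is
    approximated here by cosine similarity of their feature vectors
    (an illustrative choice, not the paper's measure)."""
    segments, current = [], [0]
    for i in range(1, len(frames)):
        a, b = frames[i - 1], frames[i]
        sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)
        if sim >= threshold:
            current.append(i)
        else:
            segments.append(current)
            current = [i]
    segments.append(current)
    return segments

def segment_representations(frames, segments):
    """Step 2 (representation learning, simplified): represent each
    subsequence by mean-pooling its frame features. A classifier
    (step 3) would then be trained on these descriptors."""
    return np.array([frames[idx].mean(axis=0) for idx in segments])

# Toy example: 6 frames of 4-D features; frames 0-2 are mutually
# similar, frames 3-5 are similar, so two subsequences emerge.
frames = np.array([[1, 0, 0, 0], [0.9, 0.1, 0, 0], [1, 0.05, 0, 0],
                   [0, 1, 0, 0], [0, 0.95, 0.1, 0], [0, 1, 0.05, 0]])
segs = coherence_segments(frames, threshold=0.9)
reps = segment_representations(frames, segs)
print(len(segs), reps.shape)  # fewer descriptors than frames
```

Because each subsequence yields a single descriptor, the classifier in step 3 sees far fewer inputs than a per-frame model, which is consistent with the abstract's claim of faster action classification.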

References

- Large-scale video classification with convolutional neural networks
- HMDB: A large video database for human motion recognition
- Learning realistic human actions from movies


Citation (APA)

Saleh, A., Abdel-Nasser, M., Akram, F., Garcia, M. A., & Puig, D. (2016). Analysis of temporal coherence in videos for action recognition. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9730, pp. 325–332). Springer Verlag. https://doi.org/10.1007/978-3-319-41501-7_37
