This paper proposes an approach to improve the performance of activity recognition methods by analyzing the coherence of the frames in the input videos and then modeling the evolution of the coherent frames, which constitute a sub-sequence, to learn a representation for the videos. The proposed method consists of three steps: coherence analysis, representation learning, and classification. Using two state-of-the-art datasets (Hollywood2 and HMDB51), we demonstrate that learning the evolution of sub-sequences in lieu of frames improves the recognition results and makes action classification faster.
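The abstract does not detail how coherence is measured or how sub-sequences are formed. As an illustration only, the sketch below shows one plausible reading of the coherence-analysis step, under assumptions not taken from the paper: per-frame feature vectors are given, temporal coherence is scored with cosine similarity between consecutive frames, and each resulting sub-sequence is summarized by mean pooling as a stand-in representation.

```python
import numpy as np

def coherent_subsequences(frames, threshold=0.9):
    """Split a video (frames x features) into coherent sub-sequences.

    Assumption (not from the paper): a new sub-sequence starts whenever
    the cosine similarity between consecutive frame features drops
    below `threshold`.
    """
    segments, start = [], 0
    for i in range(1, len(frames)):
        a, b = frames[i - 1], frames[i]
        sim = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        if sim < threshold:
            segments.append(frames[start:i])
            start = i
    segments.append(frames[start:])
    return segments

def subsequence_representation(segments):
    """One descriptor per coherent sub-sequence (here: mean pooling),
    so downstream modeling sees far fewer units than raw frames."""
    return np.stack([seg.mean(axis=0) for seg in segments])

# Toy video: 6 two-dimensional frame features with a sharp
# content change after the third frame.
video = np.array([[1.0, 0.0], [0.99, 0.1], [0.98, 0.15],
                  [0.0, 1.0], [0.1, 0.99], [0.15, 0.98]])
segs = coherent_subsequences(video, threshold=0.9)
reps = subsequence_representation(segs)
# The video splits into 2 coherent sub-sequences, each summarized
# by a single descriptor, instead of 6 per-frame descriptors.
```

In this toy example the modeling stage would operate on 2 sub-sequence descriptors rather than 6 frames, which is the claimed source of the speed-up in classification.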
Saleh, A., Abdel-Nasser, M., Akram, F., Garcia, M. A., & Puig, D. (2016). Analysis of temporal coherence in videos for action recognition. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9730, pp. 325–332). Springer Verlag. https://doi.org/10.1007/978-3-319-41501-7_37