Dynamic visual time context descriptors for automatic human expression classification

Abstract

In this paper, we propose two fast dynamic descriptors, Vertical-Time-Backward (VTB) and Vertical-Time-Forward (VTF), defined in the spatio-temporal domain to capture cues of essential facial movements. These dynamic descriptors are used in a two-step system that recognizes human facial expressions in image sequences. In the first step, the system classifies static images; in the second, it labels the whole sequence. By combining the visual-time context features with the popular Local Binary Patterns (LBP), the system can efficiently recognize the expression in a single image, which is especially helpful for highly ambiguous frames. In the second step, the class of the whole sequence is predicted from the weighted probabilities of all of its frames. Experiments on 348 sequences from 95 subjects in the Cohn-Kanade database gave results as high as 97.6% for seven-class recognition on frames and 95.7% for six-class recognition on sequences. © Springer-Verlag Berlin Heidelberg 2014.
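The paper does not publish an implementation; as a rough illustration of the two-step idea described above, the Python sketch below pairs a toy vertical-time difference descriptor (a stand-in for the VTB/VTF descriptors, whose exact definitions are given in the paper) with a weighted per-frame probability vote for labeling a sequence. The function names, the histogram-based feature, and the linear frame weighting are all illustrative assumptions, not the authors' method.

```python
import numpy as np

# Illustrative sketch only: the descriptor and the weighting below are
# reconstructions from the abstract, not the paper's actual algorithm.

def vertical_time_descriptor(prev_frame, curr_frame, n_bins=16):
    """Toy vertical-time descriptor: histogram of vertical gradients of
    the temporal difference between two consecutive grayscale frames
    (a stand-in for the VTB/VTF descriptors)."""
    diff = curr_frame.astype(np.float32) - prev_frame.astype(np.float32)
    vertical = np.diff(diff, axis=0)          # change along the vertical axis
    hist, _ = np.histogram(vertical, bins=n_bins, range=(-255, 255))
    return hist / max(hist.sum(), 1)          # L1-normalized feature vector

def classify_sequence(frame_probs, weights=None):
    """Predict the sequence label from per-frame class probabilities by a
    weighted average, as in the abstract's second step."""
    frame_probs = np.asarray(frame_probs)     # shape: (n_frames, n_classes)
    if weights is None:
        # Assumption: later frames (closer to the expression apex) get
        # linearly larger weights; the paper's exact scheme may differ.
        weights = np.linspace(0.1, 1.0, len(frame_probs))
    weights = np.asarray(weights, dtype=float)
    weighted = (frame_probs * weights[:, None]).sum(axis=0)
    return int(np.argmax(weighted))
```

Weighting later frames more heavily reflects the common observation that expressions in Cohn-Kanade sequences intensify toward the apex frame, which is why a weighted vote can outperform a plain average of per-frame probabilities.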

Cite

APA

Ji, Y., Gong, S., & Liu, C. (2014). Dynamic visual time context descriptors for automatic human expression classification. Advances in Intelligent Systems and Computing, 215, 307–316. https://doi.org/10.1007/978-3-642-37835-5_28
