An Approach for Improving Automatic Mouth Emotion Recognition

Abstract

The study proposes and tests a technique for automated emotion recognition based on mouth detection via Convolutional Neural Networks (CNN). It is intended to support people whose health conditions impair communication (e.g. muscle wasting, stroke, autism, or simply pain) by recognizing emotions and generating real-time feedback, or data feeding supporting systems. The software first determines whether a face is present in the acquired image, then locates the mouth and extracts the corresponding features. Both tasks are carried out using Haar feature-based cascade classifiers, which guarantee fast execution and promising performance. Whereas our previous works focused on visual micro-expressions for personalized training on a single user, this strategy aims to train the system also on generalized face data sets.
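As a rough illustration of the detection stage described in the abstract (not the authors' implementation), the following Python sketch uses OpenCV's Haar cascade classifiers to check for a face and then search for the mouth region; the choice of cascade files (the stock frontal-face cascade, and the smile cascade as a mouth proxy), the lower-half-of-face heuristic, and the detection parameters are all illustrative assumptions.

```python
# Sketch of the face -> mouth localization stage using OpenCV Haar cascades.
# Cascade choices and crop heuristics are assumptions, not the paper's code.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
mouth_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

def extract_mouth_region(image_path):
    """Return a cropped grayscale mouth patch, or None if no face/mouth is found."""
    img = cv2.imread(image_path)
    if img is None:
        return None
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Step 1: check whether a face is present in the acquired image.
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None

    # Step 2: look for the mouth in the lower half of the first detected face.
    x, y, w, h = faces[0]
    lower_face = gray[y + h // 2 : y + h, x : x + w]
    mouths = mouth_cascade.detectMultiScale(lower_face, scaleFactor=1.3, minNeighbors=15)
    if len(mouths) == 0:
        return None

    mx, my, mw, mh = mouths[0]
    # The resulting patch would then be resized and fed to the CNN emotion classifier.
    return lower_face[my : my + mh, mx : mx + mw]
```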

Citation (APA)

Biondi, G., Franzoni, V., Gervasi, O., & Perri, D. (2019). An Approach for Improving Automatic Mouth Emotion Recognition. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11619 LNCS, pp. 649–664). Springer Verlag. https://doi.org/10.1007/978-3-030-24289-3_48
