Multimodal Emotion Recognition Using Deep Neural Networks

Abstract

Emotion changes over time, making emotion recognition a temporally dependent process. In this paper, a Bimodal-LSTM model is introduced to take temporal information into account for emotion recognition with multimodal signals. We also extend the implementation of denoising autoencoders and adopt a Bimodal Deep Denoising AutoEncoder model. Both models are evaluated on a public dataset, SEED, using EEG features and eye movement features as inputs. Our experimental results indicate that the Bimodal-LSTM model outperforms other state-of-the-art methods with a mean accuracy of 93.97%. The Bimodal-LSTM model is also examined on the DEAP dataset with EEG and peripheral physiological signals, and it achieves state-of-the-art results with a mean accuracy of 83.53%.
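The abstract does not spell out the Bimodal-LSTM architecture, so the following is only a minimal sketch of one plausible reading: one LSTM per modality, with the final hidden states concatenated (late fusion) and fed to a classifier. It assumes PyTorch and illustrative feature sizes (310-dimensional EEG differential entropy features and 33-dimensional eye movement features, as commonly used with SEED, and three emotion classes); the authors' actual fusion strategy and hyperparameters may differ.

import torch
import torch.nn as nn

class BimodalLSTM(nn.Module):
    # One LSTM per modality captures that signal's temporal dynamics;
    # the final hidden states are concatenated and classified (late fusion).
    def __init__(self, eeg_dim=310, eye_dim=33, hidden_dim=64, num_classes=3):
        super().__init__()
        self.eeg_lstm = nn.LSTM(eeg_dim, hidden_dim, batch_first=True)
        self.eye_lstm = nn.LSTM(eye_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, eeg_seq, eye_seq):
        # eeg_seq: (batch, time, eeg_dim); eye_seq: (batch, time, eye_dim)
        _, (eeg_h, _) = self.eeg_lstm(eeg_seq)
        _, (eye_h, _) = self.eye_lstm(eye_seq)
        fused = torch.cat([eeg_h[-1], eye_h[-1]], dim=-1)
        return self.classifier(fused)

# Toy usage: random tensors stand in for real SEED feature sequences.
model = BimodalLSTM()
eeg = torch.randn(8, 20, 310)  # batch of 8 trials, 20 time steps
eye = torch.randn(8, 20, 33)
logits = model(eeg, eye)       # shape (8, 3): e.g. positive/neutral/negative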

Citation (APA)

Tang, H., Liu, W., Zheng, W. L., & Lu, B. L. (2017). Multimodal Emotion Recognition Using Deep Neural Networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10637 LNCS, pp. 811–819). Springer Verlag. https://doi.org/10.1007/978-3-319-70093-9_86
