Speech emotion recognition using multiple classifiers

Abstract

Automatically identifying the emotional state of a speaker has received much attention. In this paper, we focus on speech emotion recognition and develop an audio-based classification framework for identifying five emotions in our audio database, whose segments are drawn from Chinese TV plays. First, acoustic features are extracted from the audio segments using wavelet analysis; feature selection is then performed with information gain and sequential forward selection to discard irrelevant information and reduce dimensionality. Our classification framework is built on three base classifiers: SVM, AdaBoost, and random forest. Since a single classifier has limited recognition capability, decision fusion methods are applied to aggregate the classifiers' predicted labels. Experiments on our database show that the proposed fusion methods perform better than the individual classifiers.
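
The abstract names the pipeline stages but not an implementation. Below is a minimal sketch of that pipeline in Python with scikit-learn, under stated assumptions: the wavelet-based acoustic features and emotion labels are replaced by random placeholders, mutual information stands in for the paper's information-gain criterion, and majority voting stands in for the unspecified decision-fusion rules. The names X, y, and the choice k=20 are illustrative, not from the paper.

```python
# Minimal sketch: feature selection + three base classifiers + label fusion.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))     # placeholder for wavelet acoustic features
y = rng.integers(0, 5, size=200)   # placeholder for the five emotion labels

# Filter-style selection; mutual information is a proxy for information gain.
# Sequential forward selection could be layered on top with
# sklearn.feature_selection.SequentialFeatureSelector.
X_sel = SelectKBest(mutual_info_classif, k=20).fit_transform(X, y)

# The three base classifiers named in the abstract.
classifiers = [SVC(), AdaBoostClassifier(), RandomForestClassifier()]
preds = np.array([clf.fit(X_sel, y).predict(X_sel) for clf in classifiers])

# Decision fusion by majority vote over the three predicted labels; the
# abstract does not specify the exact fusion rules, so this is one common
# choice (ties resolve to the lowest label index).
fused = np.apply_along_axis(
    lambda votes: np.bincount(votes, minlength=5).argmax(), 0, preds
)
print(fused[:10])
```

In practice the classifiers would be fit on a training split and fused on a held-out test split; fitting and predicting on the same data here only keeps the sketch short.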

Citation (APA)

Wang, K., Chu, Z., Wang, K., Yu, T., & Liu, L. (2017). Speech emotion recognition using multiple classifiers. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10612 LNCS, pp. 84–93). Springer Verlag. https://doi.org/10.1007/978-3-319-69781-9_9
