VALID: A new practical audio-visual database, and comparative results


Abstract

The performance of deployed audio, face, and multi-modal person recognition systems in non-controlled scenarios is typically lower than that of systems developed in highly controlled environments. With the aim of facilitating the development of robust audio, face, and multi-modal person recognition systems, the new, large, and realistic multi-modal (audio-visual) VALID database was acquired in a noisy "real-world" office scenario with no control over illumination or acoustic noise. In this paper we describe the acquisition and content of the VALID database, which consists of five recording sessions of 106 subjects over a period of one month. Speaker identification experiments using visual speech features extracted from the mouth region are reported. Performance on the uncontrolled VALID database is compared with that on the controlled XM2VTS database: the best VALID and XM2VTS accuracies are 63.21% and 97.17%, respectively. This highlights the degrading effect of an uncontrolled illumination environment and the importance of this database for deploying real-world applications. The VALID database is available to the academic community through http://ee.ucd.ie/validdb/. © Springer-Verlag Berlin Heidelberg 2005.
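The abstract does not specify which visual speech features were used. A common choice in the visual-speech literature is a low-order set of 2-D DCT coefficients computed from a grayscale, size-normalised mouth region of interest. The sketch below illustrates that general idea in Python with OpenCV; the ROI coordinates, the 32x32 normalisation, and the 15-coefficient cutoff are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' exact pipeline): DCT-based visual
# speech features from a mouth region of interest, a common front end
# for visual speaker identification.
import numpy as np
import cv2

def mouth_dct_features(frame_bgr, roi, n_coeffs=15):
    """Return the first n_coeffs zig-zag-ordered 2-D DCT coefficients
    of a grayscale, size-normalised mouth ROI.

    roi is a hypothetical (x, y, w, h) mouth bounding box, e.g. from a
    face/mouth detector; n_coeffs=15 is an illustrative choice.
    """
    x, y, w, h = roi
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Normalise the ROI to a fixed even size (cv2.dct needs even dims).
    mouth = cv2.resize(gray[y:y + h, x:x + w], (32, 32)).astype(np.float32)
    coeffs = cv2.dct(mouth)  # 2-D DCT of the normalised mouth patch
    # Zig-zag scan so low-frequency (most informative) coefficients come first.
    idx = sorted(((i, j) for i in range(32) for j in range(32)),
                 key=lambda p: (p[0] + p[1], p[0]))
    return np.array([coeffs[i, j] for i, j in idx[:n_coeffs]])
```

A per-frame feature vector like this would then be fed to a sequence classifier (the paper's choice of classifier is not stated in the abstract) to produce the reported identification accuracies.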

Citation (APA)

Fox, N. A., O’Mullane, B. A., & Reilly, R. B. (2005). VALID: A new practical audio-visual database, and comparative results. In Lecture Notes in Computer Science (Vol. 3546, pp. 777–786). Springer Verlag. https://doi.org/10.1007/11527923_81
