Annotating news video with locations

Abstract

The location of video scenes is an important semantic descriptor, especially for broadcast news video. In this paper, we propose a learning-based approach to annotate shots of news video with locations extracted from the video transcript, based on features from multiple video modalities, including the syntactic structure of transcript sentences, speaker identity, temporal video structure, and so on. Machine learning algorithms are adopted to combine multi-modal features to solve two sub-problems: (1) whether the location of a video shot is mentioned in the transcript, and if so, (2) among the many locations in the transcript, which one(s) are correct for this shot. Experiments on the TRECVID dataset demonstrate that our approach achieves approximately 85% accuracy in correctly labeling the location of any shot in news video. © Springer-Verlag Berlin Heidelberg 2006.
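To make the two-stage formulation concrete, below is a minimal sketch of how such a pipeline might be wired together. The feature names, the choice of logistic-regression classifiers, and the helper function are assumptions for illustration only; the abstract states just that machine-learning algorithms combine multi-modal features for the two sub-problems.

# Hypothetical sketch of the two-stage shot-location labeling described
# in the abstract. All feature definitions and the classifier choice are
# assumptions; the paper does not specify them here.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Stage 1: is this shot's location mentioned in the transcript at all?
# Each row describes one shot with multi-modal features (e.g. number of
# nearby location mentions, speaker identity flag, position in the story).
X_shots = np.array([
    [3, 1, 0.2],   # many nearby mentions, anchor speaking, early in story
    [0, 0, 0.9],   # no mentions, field reporter, late in story
    [2, 1, 0.5],
    [0, 0, 0.1],
])
y_mentioned = np.array([1, 0, 1, 0])  # 1 = location appears in transcript
stage1 = LogisticRegression().fit(X_shots, y_mentioned)

# Stage 2: among candidate locations in the transcript, score each
# (shot, candidate) pair; the top-scoring candidate(s) label the shot.
# Features might encode the mention's syntactic role, its distance from
# the shot, and whether the speaker is on location (all hypothetical).
X_pairs = np.array([
    [1, 0.1, 1],   # sentence subject, close to shot, reporter on scene
    [0, 0.8, 0],   # oblique mention, far from shot, studio anchor
    [1, 0.3, 0],
    [0, 0.9, 1],
])
y_correct = np.array([1, 0, 1, 0])  # 1 = candidate is the shot's location
stage2 = LogisticRegression().fit(X_pairs, y_correct)

def label_shot(shot_features, candidate_features, candidates):
    """Return the predicted location for a shot, or None when the
    transcript is judged not to mention the shot's location."""
    if stage1.predict([shot_features])[0] == 0:
        return None
    scores = stage2.predict_proba(candidate_features)[:, 1]
    return candidates[int(np.argmax(scores))]

print(label_shot([2, 1, 0.4],
                 [[1, 0.2, 1], [0, 0.7, 0]],
                 ["Baghdad", "Washington"]))

In this reading, stage 1 acts as a gate so that shots whose location never appears in the transcript are not force-assigned a spurious label, while stage 2 ranks the transcript's candidate locations per shot.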

Cite

CITATION STYLE

APA

Yang, J., & Hauptmarm, A. G. (2006). Annotating news video with locations. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4071 LNCS, pp. 153–162). Springer Verlag. https://doi.org/10.1007/11788034_16
