Creating efficient codebooks for visual recognition

634 citations · 360 Mendeley readers

Abstract

Visual codebook based quantization of robust appearance descriptors extracted from local image patches is an effective means of capturing image statistics for texture analysis and scene classification. Codebooks are usually constructed by using a method such as k-means to cluster the descriptor vectors of patches sampled either densely ('textons') or sparsely ('bags of features' based on keypoints or salience measures) from a set of training images. This works well for texture analysis in homogeneous images, but the images that arise in natural object recognition tasks have far less uniform statistics. We show that for dense sampling, k-means over-adapts to this, clustering centres almost exclusively around the densest few regions in descriptor space and thus failing to code other informative regions. This gives suboptimal codes that are no better than using randomly selected centres. We describe a scalable acceptance-radius based clusterer that generates better codebooks and study its performance on several image classification tasks. We also show that dense representations outperform equivalent keypoint based ones on these tasks and that SVM or Mutual Information based feature selection starting from a dense codebook further improves the performance. © 2005 IEEE.
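The abstract's key point is that fixed-k clustering concentrates centres in the densest regions of descriptor space, while an acceptance-radius scheme spreads them out: once a centre claims all descriptors within its radius, later centres are forced into other, less dense but still informative regions. Below is a minimal illustrative sketch of one such greedy acceptance-radius clusterer; it is a stand-in for the idea, not the authors' exact (mean-shift-based) procedure, and the function name and parameters are hypothetical.

```python
import numpy as np

def radius_codebook(X, radius, n_centers):
    """Greedy acceptance-radius clustering sketch (illustrative only).

    Repeatedly picks the point with the most neighbours within
    `radius` as the next codebook centre, then removes every point
    inside that radius, so subsequent centres must cover other
    regions of descriptor space instead of piling up in the densest one.
    """
    remaining = np.asarray(X, dtype=float).copy()
    centers = []
    for _ in range(n_centers):
        if len(remaining) == 0:
            break
        # pairwise squared distances among the still-uncovered points
        d2 = ((remaining[:, None, :] - remaining[None, :, :]) ** 2).sum(-1)
        # neighbour count within the acceptance radius for each candidate
        counts = (d2 <= radius ** 2).sum(axis=1)
        best = counts.argmax()
        centers.append(remaining[best])
        # discard everything the new centre already covers
        remaining = remaining[d2[best] > radius ** 2]
    return np.array(centers)
```

With two clusters of very different sizes, k-means with k=2 can place both centres inside the large cluster, whereas the radius-based pass above covers the large cluster with one centre and then moves on to the small one.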

Citation (APA)

Jurie, F., & Triggs, B. (2005). Creating efficient codebooks for visual recognition. In Proceedings of the IEEE International Conference on Computer Vision (Vol. I, pp. 604–610). https://doi.org/10.1109/ICCV.2005.66
