Robust multimodal dictionary learning

Abstract

We propose a robust multimodal dictionary learning method for multimodal images. Joint dictionary learning for both modalities may be impaired by a lack of correspondence between the image modalities in the training data, for example due to areas of low quality in one of the modalities. Dictionaries learned from such non-corresponding data introduce uncertainty into the image representation. In this paper, we propose a probabilistic model that accounts for image areas that correspond poorly between the image modalities. We cast dictionary learning in the presence of such problematic image patches as a likelihood maximization problem and solve it with a variant of the EM algorithm. Our algorithm alternates between identifying poorly corresponding patches and refining the dictionary. We tested our method on synthetic and real data and show improvements in image prediction quality and in alignment accuracy when the method is used for multimodal image registration. © 2013 Springer-Verlag.
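
The alternation described above lends itself to a compact illustration. Below is a minimal sketch (not the authors' implementation) of an EM-style loop for robust joint dictionary learning: corresponding patches from the two modalities are concatenated into joint vectors, encoded against a shared dictionary, the patches with the largest reconstruction residuals are flagged as poorly corresponding, and the dictionary is refit on the remaining patches. It uses scikit-learn's generic DictionaryLearning and sparse_encode; the hard residual threshold (outlier_frac) stands in for the paper's probabilistic weighting, and all parameter names are illustrative assumptions.

# EM-style robust joint dictionary learning, sketched under the
# assumptions stated above (not the method published in the paper).
import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode

def robust_joint_dictionary(X_mod1, X_mod2, n_atoms=64, sparsity=1.0,
                            outlier_frac=0.1, n_rounds=5, seed=0):
    """X_mod1, X_mod2: (n_patches, patch_dim) arrays of paired patches."""
    # Joint representation: concatenate each pair of corresponding patches.
    X = np.hstack([X_mod1, X_mod2])
    keep = np.ones(len(X), dtype=bool)   # initially, trust every patch pair
    D = None
    for _ in range(n_rounds):
        # "M-step" (roughly): refit the joint dictionary on trusted patches.
        learner = DictionaryLearning(n_components=n_atoms, alpha=sparsity,
                                     max_iter=200, random_state=seed)
        learner.fit(X[keep])
        D = learner.components_
        # "E-step" (roughly): score all patches by reconstruction residual;
        # the worst outlier_frac are treated as poorly corresponding.
        codes = sparse_encode(X, D, alpha=sparsity)
        residual = np.linalg.norm(X - codes @ D, axis=1)
        threshold = np.quantile(residual, 1.0 - outlier_frac)
        keep = residual <= threshold
    return D, keep

A soft variant would replace the keep mask with per-patch posterior weights, which is closer in spirit to the likelihood maximization the abstract describes; the hard threshold is used here only to keep the sketch short.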

Citation (APA)

Cao, T., Jojic, V., Modla, S., Powell, D., Czymmek, K., & Niethammer, M. (2013). Robust multimodal dictionary learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8149 LNCS, pp. 259–266). https://doi.org/10.1007/978-3-642-40811-3_33
