Using Multiple Instance Learning to Build Multimodal Representations

Abstract

Image-text multimodal representation learning aligns data across modalities and enables important medical applications, e.g., image classification, visual grounding, and cross-modal retrieval. In this work, we establish a connection between multimodal representation learning and multiple instance learning. Based on this connection, we propose a generic framework for constructing permutation-invariant score functions with many existing multimodal representation learning approaches as special cases. Furthermore, we use the framework to derive a novel contrastive learning approach and demonstrate that our method achieves state-of-the-art results in several downstream tasks.
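To make the idea of a permutation-invariant score function concrete, here is a minimal sketch (not the paper's exact method) of one common multiple-instance-learning-style score: treating image patch embeddings and text token embeddings as bags of instances, taking the max similarity over patches for each token, and averaging over tokens. Both pooling steps are order-independent, so reordering patches or tokens leaves the score unchanged. All names and the specific pooling choice here are illustrative assumptions.

```python
import numpy as np

def mil_score(image_patches: np.ndarray, text_tokens: np.ndarray) -> float:
    """Permutation-invariant image-text score (illustrative sketch).

    image_patches: (num_patches, dim) patch embeddings
    text_tokens:   (num_tokens, dim) token embeddings
    """
    # L2-normalize so the dot product is cosine similarity
    a = image_patches / np.linalg.norm(image_patches, axis=1, keepdims=True)
    b = text_tokens / np.linalg.norm(text_tokens, axis=1, keepdims=True)
    sims = a @ b.T  # (num_patches, num_tokens) pairwise similarities
    # Max over patches per token, then mean over tokens: both reductions
    # are invariant to the ordering of patches and tokens.
    return float(sims.max(axis=0).mean())
```

Because the score is symmetric in this sense, it can serve as the compatibility term inside a contrastive objective, which is the kind of construction the framework generalizes.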

Citation (APA)

Wang, P., Wells, W. M., Berkowitz, S., Horng, S., & Golland, P. (2023). Using Multiple Instance Learning to Build Multimodal Representations. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13939 LNCS, pp. 457–470). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-34048-2_35
