Image-text multimodal representation learning aligns data across modalities and enables important medical applications, e.g., image classification, visual grounding, and cross-modal retrieval. In this work, we establish a connection between multimodal representation learning and multiple instance learning. Based on this connection, we propose a generic framework for constructing permutation-invariant score functions with many existing multimodal representation learning approaches as special cases. Furthermore, we use the framework to derive a novel contrastive learning approach and demonstrate that our method achieves state-of-the-art results in several downstream tasks.
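The abstract does not specify the score function's form, but the multiple-instance-learning view treats each image as a bag of patch embeddings and each report as a bag of token embeddings. As a purely illustrative sketch (not the paper's actual method), a minimal permutation-invariant score can be built by pooling each bag before comparing; all names and shapes below are assumptions:

```python
import numpy as np

def bag_score(image_patches: np.ndarray, text_tokens: np.ndarray) -> float:
    """Illustrative permutation-invariant score: cosine similarity of
    mean-pooled instance embeddings (a stand-in for the paper's framework)."""
    img = image_patches.mean(axis=0)  # pool over patch instances
    txt = text_tokens.mean(axis=0)    # pool over token instances
    return float(img @ txt / (np.linalg.norm(img) * np.linalg.norm(txt)))

rng = np.random.default_rng(0)
patches = rng.normal(size=(16, 128))  # hypothetical: 16 patch embeddings
tokens = rng.normal(size=(24, 128))   # hypothetical: 24 token embeddings

s1 = bag_score(patches, tokens)
s2 = bag_score(rng.permutation(patches), rng.permutation(tokens))
assert np.isclose(s1, s2)  # shuffling instances leaves the score unchanged
```

Because the pooling operator is symmetric in its inputs, the score is invariant to the order of patches and tokens, which is the defining property the framework requires of its score functions.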
Wang, P., Wells, W. M., Berkowitz, S., Horng, S., & Golland, P. (2023). Using Multiple Instance Learning to Build Multimodal Representations. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13939 LNCS, pp. 457–470). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-34048-2_35