Neural Image Representations for Multi-image Fusion and Layer Separation


Abstract

We propose a framework for aligning and fusing multiple images into a single view using neural image representations (NIRs), also known as implicit or coordinate-based neural representations. Our framework targets burst images that exhibit camera ego motion and potential changes in the scene. We describe different strategies for alignment depending on the nature of the scene motion—namely, perspective planar (i.e., homography), optical flow with minimal scene change, and optical flow with notable occlusion and disocclusion. With the neural image representation, our framework effectively combines multiple inputs into a single canonical view without the need for selecting one of the images as a reference frame. We demonstrate how to use this multi-frame fusion framework for various layer separation tasks. The code and results are available at https://shnnam.github.io/research/nir.
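The abstract's central object is a neural image representation: a coordinate-based network that maps pixel coordinates to color values, so multiple frames can be fused in one canonical view. The following is a minimal illustrative sketch of such a coordinate-based network, not the paper's actual architecture; the layer sizes, Fourier-feature encoding parameters, and class name `TinyNIR` are all assumptions for illustration.

```python
import numpy as np

# Minimal sketch of a coordinate-based neural image representation (NIR):
# a small MLP maps pixel coordinates (x, y) to an RGB value.
# All names and sizes here are illustrative, not the paper's architecture.

def positional_encoding(coords, num_freqs=4):
    """Map (N, 2) coordinates to sin/cos features (Fourier-feature style)."""
    feats = [coords]
    for k in range(num_freqs):
        feats.append(np.sin((2 ** k) * np.pi * coords))
        feats.append(np.cos((2 ** k) * np.pi * coords))
    return np.concatenate(feats, axis=-1)

class TinyNIR:
    """Untrained toy MLP: encoded coords -> hidden layer -> RGB in (0, 1)."""
    def __init__(self, in_dim, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.standard_normal((in_dim, hidden)) * 0.1
        self.b1 = np.zeros(hidden)
        self.w2 = rng.standard_normal((hidden, 3)) * 0.1
        self.b2 = np.zeros(3)

    def __call__(self, coords):
        h = np.tanh(positional_encoding(coords) @ self.w1 + self.b1)
        return 1.0 / (1.0 + np.exp(-(h @ self.w2 + self.b2)))  # sigmoid -> RGB

# Query the representation on a 4x4 grid of normalized coordinates.
xs, ys = np.meshgrid(np.linspace(0, 1, 4), np.linspace(0, 1, 4))
coords = np.stack([xs.ravel(), ys.ravel()], axis=-1)       # (16, 2)
rgb = TinyNIR(in_dim=2 + 2 * 2 * 4)(coords)                # (16, 3)
```

In the paper's setting, a network of this kind would be fit to all burst frames jointly (with a per-frame alignment model such as a homography or flow field), so the canonical image emerges from optimization rather than from choosing one frame as reference.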

Citation (APA)

Nam, S., Brubaker, M. A., & Brown, M. S. (2022). Neural Image Representations for Multi-image Fusion and Layer Separation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13667 LNCS, pp. 216–232). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-20071-7_13
