Contextual-Based Image Inpainting: Infer, Match, and Translate

168 citations · 238 Mendeley readers

This article is free to access.

Abstract

We study the task of image inpainting, which is to fill in the missing regions of an incomplete image with plausible content. To this end, we propose a learning-based approach that generates visually coherent completions for high-resolution images with missing components. To overcome the difficulty of directly learning the distribution of high-dimensional image data, we divide the task into two separate steps, inference and translation, and model each step with a deep neural network. We also use simple heuristics to guide the propagation of local textures from the boundary to the hole. We show that, with these techniques, inpainting reduces to learning two image-feature translation functions in much smaller spaces, which are therefore easier to train. We evaluate our method on several public datasets and show that it generates results of better visual quality than previous state-of-the-art methods.
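The abstract only names the three stages, so the following is a minimal, hypothetical PyTorch sketch of that pipeline: infer a coarse completion, match and propagate boundary textures in a learned feature space, then translate the features back to pixels. The module names (InferenceNet, FeatureEncoder, TranslationNet), all layer shapes, and the cosine-similarity nearest-neighbor matching used as the "simple heuristic" are illustrative assumptions, not the authors' released architecture.

```python
# Hypothetical sketch of the "infer, match, translate" pipeline.
# All names, shapes, and the matching heuristic below are assumptions
# made for illustration; they are not the paper's actual networks.
import torch
import torch.nn as nn
import torch.nn.functional as F


class InferenceNet(nn.Module):
    """Step 1 (assumed form): infer a coarse completion of the masked image."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 4, 2, 1), nn.ReLU(),           # RGB + mask channel in
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),  # coarse RGB out
        )

    def forward(self, image, mask):
        # The mask channel tells the network which pixels are missing.
        return self.net(torch.cat([image * (1 - mask), mask], dim=1))


class FeatureEncoder(nn.Module):
    """Maps the coarse completion into the feature space where matching happens."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)


class TranslationNet(nn.Module):
    """Step 3 (assumed form): translate the matched feature map back to pixels."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, feat):
        return self.net(feat)


def match_features(feat, mask):
    """Step 2 (assumed heuristic): replace each hole feature with its most
    similar known feature, a simple stand-in for texture propagation."""
    b, c, h, w = feat.shape
    hole = F.interpolate(mask, size=(h, w)).view(b, 1, h * w)  # 1 = missing
    flat = feat.view(b, c, h * w)
    normed = F.normalize(flat, dim=1)
    sim = torch.bmm(normed.transpose(1, 2), normed)            # cosine similarities
    sim = sim.masked_fill(hole.bool(), -1e4)                   # never copy from the hole
    idx = sim.argmax(dim=2)                                    # nearest known location
    matched = torch.gather(flat, 2, idx.unsqueeze(1).expand(-1, c, -1))
    return torch.where(hole.bool(), matched, flat).view(b, c, h, w)


def inpaint(image, mask, infer_net, encoder, translate_net):
    coarse = infer_net(image, mask)                # infer plausible coarse content
    feat = match_features(encoder(coarse), mask)   # propagate boundary textures
    out = translate_net(feat)                      # translate features to pixels
    return image * (1 - mask) + out * mask         # keep the known pixels unchanged


# Toy usage on a 64x64 image with a square hole.
img = torch.rand(1, 3, 64, 64)
mask = torch.zeros(1, 1, 64, 64)
mask[:, :, 24:40, 24:40] = 1.0
result = inpaint(img, mask, InferenceNet(), FeatureEncoder(), TranslationNet())
print(result.shape)  # torch.Size([1, 3, 64, 64])
```

Note how the final composite only writes generated pixels into the hole, so the known region is passed through untouched; the matching step operates on downsampled feature maps, which is what keeps the learned translation functions in a much smaller space than the raw high-resolution image.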

Citation (APA)

Song, Y., Yang, C., Lin, Z., Liu, X., Huang, Q., Li, H., & Kuo, C. C. J. (2018). Contextual-Based Image Inpainting: Infer, Match, and Translate. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11206 LNCS, pp. 3–18). Springer Verlag. https://doi.org/10.1007/978-3-030-01216-8_1
