NeoNav: Improving the generalization of visual navigation via generating next expected observations


Abstract

We propose improving the cross-target and cross-scene generalization of visual navigation through learning an agent that is guided by conceiving the next observations it expects to see. This is achieved by learning a variational Bayesian model, called NeoNav, which generates the next expected observations (NEO) conditioned on the current observations of the agent and the target view. Our generative model is learned by optimizing a variational objective encompassing two key designs. First, the latent distribution is conditioned on current observations and the target view, leading to model-based, target-driven navigation. Second, the latent space is modeled with a Mixture of Gaussians conditioned on the current observation and the next best action. Our use of a mixture-of-posteriors prior effectively alleviates the issue of an over-regularized latent space, thus significantly boosting the model generalization for new targets and in novel scenes. Moreover, the NEO generation models the forward dynamics of agent-environment interaction, which improves the quality of approximate inference and hence benefits data efficiency. We have conducted extensive evaluations on both real-world and synthetic benchmarks, and show that our model consistently outperforms state-of-the-art models in terms of success rate, data efficiency, and generalization.
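The abstract's key design is replacing a standard Gaussian prior with a mixture-of-Gaussians prior, so the KL term of the variational objective no longer pulls every posterior toward a single mode. The following is a minimal NumPy sketch, not the paper's implementation: it estimates KL(q ∥ p) by Monte Carlo for a diagonal-Gaussian posterior q and an equally weighted Gaussian-mixture prior p (all function names and the equal-weighting choice are illustrative assumptions).

```python
import numpy as np

def gaussian_logpdf(x, mu, sigma):
    """Log density of a diagonal Gaussian, summed over dimensions."""
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                  - (x - mu)**2 / (2 * sigma**2), axis=-1)

def mc_kl_to_mixture(mu_q, sigma_q, mix_mus, mix_sigmas,
                     n_samples=4000, seed=0):
    """Monte Carlo estimate of KL(q || p), where q is a diagonal Gaussian
    posterior and p is an equally weighted mixture of Gaussians
    (a mixture-of-posteriors style prior)."""
    rng = np.random.default_rng(seed)
    # Reparameterized samples z ~ q
    z = mu_q + sigma_q * rng.standard_normal((n_samples, mu_q.shape[-1]))
    log_q = gaussian_logpdf(z, mu_q, sigma_q)
    # log p(z) = logsumexp over component densities minus log K
    comp = np.stack([gaussian_logpdf(z, m, s)
                     for m, s in zip(mix_mus, mix_sigmas)])
    log_p = np.logaddexp.reduce(comp, axis=0) - np.log(len(mix_mus))
    return float(np.mean(log_q - log_p))
```

Intuitively, if one mixture component already sits near q (as happens when the prior is built from posteriors of similar state-action pairs), the KL penalty is small and q is not forced toward an uninformative mode; only posteriors far from every component are penalized.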

Citation (APA)

Wu, Q., Manocha, D., Wang, J., & Xu, K. (2020). NeoNav: Improving the generalization of visual navigation via generating next expected observations. In AAAI 2020 - 34th AAAI Conference on Artificial Intelligence (pp. 10001–10008). AAAI press. https://doi.org/10.1609/aaai.v34i06.6556
