Feed-forward learning: Fast reinforcement learning of controllers


Abstract

Reinforcement learning (RL) approaches are very often rendered useless by the statistics of the required sampling process. This paper shows how very fast RL becomes possible by abandoning state feedback during training episodes. The resulting method, feed-forward learning (FF learning), employs a return estimator for pairs consisting of a state and a feed-forward policy's parameter vector. FF learning is particularly suitable for learning controllers, e.g. for robotics applications, and yields learning rates unprecedented in the RL context. This paper introduces the method formally and proves a lower bound on its performance. Practical results are provided from applying FF learning to several scenarios based on the collision-avoidance behavior of a mobile robot.
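To make the abstract's core idea concrete, the following is a minimal sketch of the FF-learning loop as described: a return estimator is learned over (state, policy-parameter) pairs, each training episode executes a feed-forward parameter vector open loop (no state feedback), and action selection picks the parameter vector with the best estimated return. The k-nearest-neighbour estimator, the env.rollout_open_loop interface, and the random candidate sampling are illustrative assumptions, not the paper's actual design.

import numpy as np

class FFLearner:
    """Illustrative sketch of feed-forward (FF) learning, assuming a
    k-NN return estimator and a random-search policy-parameter space."""

    def __init__(self, param_dim, n_candidates=64, rng=None):
        self.param_dim = param_dim
        self.n_candidates = n_candidates
        self.rng = rng or np.random.default_rng(0)
        self.memory = []  # stores (initial state, theta, observed return)

    def estimate_return(self, state, theta, k=5):
        # Crude k-nearest-neighbour estimate of the return for starting
        # in `state` and running the feed-forward policy `theta`.
        if not self.memory:
            return 0.0
        dists = [np.linalg.norm(np.concatenate([state - s, theta - t]))
                 for s, t, _ in self.memory]
        nearest = np.argsort(dists)[:k]
        return float(np.mean([self.memory[i][2] for i in nearest]))

    def train_episode(self, env, state):
        # Sample a parameter vector and execute it open loop: the
        # controller never reads the state during the episode, which is
        # what distinguishes FF learning from feedback-based RL training.
        theta = self.rng.standard_normal(self.param_dim)
        ret = env.rollout_open_loop(state, theta)  # assumed interface
        self.memory.append((np.asarray(state), theta, ret))

    def act(self, state):
        # Choose the candidate parameter vector with the best estimated
        # return for the current state.
        candidates = self.rng.standard_normal(
            (self.n_candidates, self.param_dim))
        scores = [self.estimate_return(np.asarray(state), th)
                  for th in candidates]
        return candidates[int(np.argmax(scores))]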

APA

Musial, M., & Lemke, F. (2007). Feed-forward learning: Fast reinforcement learning of controllers. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4528 LNCS, pp. 277–286). Springer Verlag. https://doi.org/10.1007/978-3-540-73055-2_30
