Evolutionary Action Selection for Gradient-Based Policy Learning


Abstract

Evolutionary Algorithms (EAs) and Deep Reinforcement Learning (DRL) have recently been integrated to take advantage of both methods for better exploration and exploitation. The evolutionary part of these hybrid methods maintains a population of policy networks. However, existing methods focus on optimizing the parameters of the policy network, a space that is usually high-dimensional and difficult for an EA to search. In this paper, we shift the target of evolution from the high-dimensional parameter space to the low-dimensional action space. We propose Evolutionary Action Selection-Twin Delayed Deep Deterministic Policy Gradient (EAS-TD3), a novel hybrid method of EA and DRL. EAS focuses on optimizing the actions chosen by the policy network, using an evolutionary algorithm to obtain high-quality actions that promote policy learning. Experiments on several challenging continuous control tasks show that EAS-TD3 achieves superior performance over other state-of-the-art methods.
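The core idea of evolving actions rather than parameters can be illustrated with a minimal sketch. The snippet below is a hypothetical illustration, not the authors' implementation: it assumes a learned Q-function is available as the fitness measure and applies a simple elitist evolution strategy (Gaussian mutation plus selection) to refine the action proposed by the policy.

```python
import numpy as np

def evolve_action(policy_action, q_fn, pop_size=20, sigma=0.1,
                  generations=5, low=-1.0, high=1.0, seed=None):
    """Illustrative sketch: refine a policy's action via evolution.

    Assumptions (not from the paper): fitness is the Q-value q_fn(a),
    mutation is isotropic Gaussian noise, and selection is elitist
    (the best candidate, including the current elite, survives).
    """
    rng = np.random.default_rng(seed)
    elite = np.asarray(policy_action, dtype=float)
    for _ in range(generations):
        # Mutate the current elite action with Gaussian noise,
        # clipped to the valid action range.
        pop = elite + rng.normal(0.0, sigma, size=(pop_size, elite.size))
        pop = np.clip(pop, low, high)
        candidates = np.vstack([elite[None, :], pop])
        # Fitness of each candidate action under the critic.
        fitness = np.array([q_fn(a) for a in candidates])
        elite = candidates[np.argmax(fitness)]
    return elite
```

Because the elite is always re-evaluated alongside its mutants, the Q-value of the returned action can never be worse than that of the original policy action; the evolved high-quality action can then be stored and used to guide policy updates, as the abstract describes.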

Citation (APA)

Ma, Y., Liu, T., Wei, B., Liu, Y., Xu, K., & Li, W. (2023). Evolutionary Action Selection for Gradient-Based Policy Learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13625 LNCS, pp. 579–590). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-30111-7_49
