Authors
Xiaoqiang Wu, Qingling Zhu, Qiuzhen Lin, Weineng Chen, Jianqiang Li
Publication date
2024/5/6
Book
Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems
Pages
1947-1955
Description
Evolutionary reinforcement learning algorithms (ERLs), which combine evolutionary algorithms (EAs) with reinforcement learning (RL), have demonstrated significant success in enhancing RL performance. However, most ERLs rely heavily on Gaussian mutation operators to generate new individuals. When the standard deviation is too large or too small, this approach produces poor or highly similar offspring, respectively. Such outcomes are detrimental to the learning process of the RL agent, as these individuals generate too many poor or redundant experiences. To alleviate these issues, this paper proposes an Adaptive Evolutionary Reinforcement Learning (AERL) method that adaptively adjusts both the standard deviation and the evaluation process. By tracking the performance of new individuals, AERL maintains the mutation strength within a suitable range without the need for additional …
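The abstract describes adjusting the Gaussian mutation standard deviation by tracking how well new individuals perform. Below is a minimal sketch of that idea, assuming a 1/5-success-rule-style update, since the exact adaptation rule is cut off in the truncated abstract; the names `gaussian_mutate`, `adapt_sigma`, `sigma_min`, `sigma_max`, and `target` are illustrative, not from the paper.

```python
import numpy as np

# Hypothetical sketch in the spirit of AERL: the mutation standard deviation
# (sigma) is adapted by tracking how often mutated offspring outperform their
# parent. A 1/5-success-rule-style update is assumed here for illustration;
# the paper's actual rule is not shown in the truncated abstract.

def gaussian_mutate(params, sigma, rng):
    """Return a mutated copy of a flat parameter vector."""
    return params + rng.normal(0.0, sigma, size=params.shape)

def adapt_sigma(sigma, success_rate, target=0.2, factor=1.2,
                sigma_min=1e-3, sigma_max=0.5):
    """Widen sigma when offspring often improve (similar offspring);
    shrink it when they rarely improve (poor offspring). Clamping keeps
    the mutation strength within a suitable range."""
    sigma = sigma * factor if success_rate > target else sigma / factor
    return float(np.clip(sigma, sigma_min, sigma_max))

# Usage: one generation of mutate -> evaluate -> adapt.
rng = np.random.default_rng(0)
parent = rng.normal(size=8)          # stand-in for policy parameters
fitness = lambda p: -np.sum(p ** 2)  # stand-in for an RL return estimate

sigma, successes, pop_size = 0.1, 0, 10
for _ in range(pop_size):
    child = gaussian_mutate(parent, sigma, rng)
    if fitness(child) > fitness(parent):
        successes += 1
sigma = adapt_sigma(sigma, successes / pop_size)
print(f"adapted sigma: {sigma:.4f}")
```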