Authors
Giuseppe Paolo, Alban Laflaquiere, Alexandre Coninx, Stephane Doncieux
Publication date
2020/5/31
Conference
2020 IEEE International Conference on Robotics and Automation (ICRA)
Pages
2379-2385
Publisher
IEEE
Description
Performing Reinforcement Learning in sparse reward settings, with very little prior knowledge, is a challenging problem since there is no signal to properly guide the learning process. In such situations, a good search strategy is fundamental. At the same time, not having to adapt the algorithm to every single problem is very desirable. Here we introduce TAXONS, a Task Agnostic eXploration of Outcome spaces through Novelty and Surprise algorithm. Based on a population-based divergent-search approach, it learns a set of diverse policies directly from high-dimensional observations, without any task-specific information. TAXONS builds a repertoire of policies while training an autoencoder on the high-dimensional observation of the final state of the system to build a low-dimensional outcome space. The learned outcome space, combined with the reconstruction error, is used to drive the search for new policies …
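The description outlines the core selection mechanism: each policy's final observation is scored either by novelty in the learned outcome space or by surprise, i.e. the autoencoder's reconstruction error. Below is a minimal sketch of what such a scoring step could look like, assuming a simple PyTorch autoencoder and an illustrative k-nearest-neighbour novelty measure; the network size, k, and the alternation flag are placeholders, not the paper's exact settings.

    # Hedged sketch of TAXONS-style novelty + surprise scoring.
    # Architecture, k, and the alternation rule are illustrative assumptions.
    import numpy as np
    import torch
    import torch.nn as nn

    class AutoEncoder(nn.Module):
        """Maps a flattened final-state observation to a low-dimensional outcome descriptor."""
        def __init__(self, obs_dim: int, latent_dim: int = 8):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                         nn.Linear(128, latent_dim))
            self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                         nn.Linear(128, obs_dim))

        def forward(self, obs: torch.Tensor):
            z = self.encoder(obs)
            return z, self.decoder(z)

    def novelty(z: np.ndarray, archive: np.ndarray, k: int = 5) -> float:
        """Mean distance to the k nearest descriptors already stored in the archive."""
        if len(archive) == 0:
            return float("inf")
        dists = np.linalg.norm(archive - z, axis=1)
        return float(np.sort(dists)[:k].mean())

    def surprise(obs: torch.Tensor, ae: AutoEncoder) -> float:
        """Reconstruction error of the final observation under the current autoencoder."""
        with torch.no_grad():
            _, recon = ae(obs)
            return float(((recon - obs) ** 2).mean())

    # Usage: score one policy's final observation, alternating criteria between generations.
    ae = AutoEncoder(obs_dim=64 * 64)            # e.g. a flattened 64x64 observation (assumption)
    archive = np.empty((0, 8))                   # descriptors of policies kept so far
    obs = torch.rand(64 * 64)                    # placeholder final observation
    use_novelty = True                           # toggled per generation in this sketch
    with torch.no_grad():
        z, _ = ae(obs)
    score = novelty(z.numpy(), archive) if use_novelty else surprise(obs, ae)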
Total citations
[Per-year citation chart, 2020–2024]
Scholar articles
G Paolo, A Laflaquiere, A Coninx, S Doncieux - 2020 IEEE International Conference on Robotics and …, 2020