Authors
Tom Schaul, Daniel Horgan, Karol Gregor, David Silver
Publication date
2015
Conference
International Conference on Machine Learning (ICML-15)
Pages
1312-1320
Description
Value functions are a core component of reinforcement learning. The main idea is to construct a single function approximator V(s; θ) that estimates the long-term reward from any state s, using parameters θ. In this paper we introduce universal value function approximators (UVFAs) V(s, g; θ) that generalise not just over states s but also over goals g. We develop an efficient technique for supervised learning of UVFAs, by factoring observed values into separate embedding vectors for state and goal, and then learning a mapping from s and g to these factored embedding vectors. We show how this technique may be incorporated into a reinforcement learning algorithm that updates the UVFA solely from observed rewards. Finally, we demonstrate that a UVFA can successfully generalise to previously unseen goals.
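A minimal sketch of the factored architecture described above: V(s, g; θ) is approximated as the inner product of a state embedding φ(s) and a goal embedding ψ(g), trained by supervised regression toward observed value targets. This is not the authors' code; the use of PyTorch, the network sizes, and the MSE loss are illustrative assumptions.

```python
import torch
import torch.nn as nn

class UVFA(nn.Module):
    """Two-stream UVFA: V(s, g) ~ phi(s) . psi(g) (sketch, not the paper's implementation)."""
    def __init__(self, state_dim, goal_dim, embed_dim=32):
        super().__init__()
        # phi: maps a state to an embedding vector
        self.phi = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                 nn.Linear(64, embed_dim))
        # psi: maps a goal to an embedding vector of the same size
        self.psi = nn.Sequential(nn.Linear(goal_dim, 64), nn.ReLU(),
                                 nn.Linear(64, embed_dim))

    def forward(self, s, g):
        # Value estimate is the dot product of the two embeddings
        return (self.phi(s) * self.psi(g)).sum(dim=-1)

# Supervised regression toward (placeholder) observed value targets
model = UVFA(state_dim=8, goal_dim=8)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
s = torch.randn(128, 8)       # batch of states (random placeholders)
g = torch.randn(128, 8)       # batch of goals (random placeholders)
v_target = torch.randn(128)   # placeholder value targets
loss = nn.functional.mse_loss(model(s, g), v_target)
opt.zero_grad()
loss.backward()
opt.step()
```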
Total citations
[Citation counts by year, 2016–2024]
Scholar articles
T Schaul, D Horgan, K Gregor, D Silver - International conference on machine learning, 2015
T Schaul, D Horgan, K Gregor - Universal value function approximators. In …