Authors
Conor F Hayes, Mathieu Reymond, Diederik M Roijers, Enda Howley, Patrick Mannion
Publication date
2021/5/3
Conference
Adaptive and Learning Agents Workshop (at AAMAS 2021)
Description
In many risk-aware and multi-objective reinforcement learning settings, the utility of the user is derived from a single execution of a policy. In these settings, making decisions based on average future returns is not suitable. For example, in a medical setting a patient may have only one opportunity to treat their illness. When making a decision, the expected return alone, known in reinforcement learning as the value, cannot account for the potential range of adverse or positive outcomes a decision may have. Our key insight is that the distribution over expected future returns should be used differently, to represent the critical information that the agent requires at decision time. In this paper, we propose Distributional Monte Carlo Tree Search, an algorithm that learns a posterior distribution over the utility of the different possible returns attainable from individual policy executions, resulting in good policies for both risk-aware and multi-objective settings. Moreover, our algorithm outperforms the state of the art in multi-objective reinforcement learning for the expected utility of the returns.
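For readers unfamiliar with the distinction the abstract draws, the criterion targeted here is commonly written in the multi-objective RL literature as the expected utility of a single-execution return (the expected scalarised returns, ESR, criterion), in contrast with the utility of the expected return (the SER criterion). The notation below is a standard formulation of that contrast, not quoted from the paper:

\max_\pi \; \mathbb{E}\!\left[\, u\!\left(\textstyle\sum_{t=0}^{\infty} \gamma^t \vec{r}_t\right) \,\middle|\, \pi \right] \quad \text{(ESR: utility of each execution's return)}

\max_\pi \; u\!\left( \mathbb{E}\!\left[\textstyle\sum_{t=0}^{\infty} \gamma^t \vec{r}_t \,\middle|\, \pi \right] \right) \quad \text{(SER: utility of the average return)}

Because the utility function u may be non-linear, these two objectives generally select different policies, which is why a distribution over returns, rather than only their mean, is needed at decision time.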
Total citations
2021: 5, 2022: 5, 2023: 6, 2024: 1
Scholar articles
CF Hayes, M Reymond, DM Roijers, E Howley, P Mannion - arXiv preprint arXiv:2102.00966, 2021