Authors
Frans A Oliehoek, Matthijs TJ Spaan, Shimon Whiteson, Nikos Vlassis
Publication date
2008/5/12
Conference
Proceedings of the 7th International Conference on Autonomous Agents and Multiagent Systems - Volume 1
Pages
517-524
Publisher
International Foundation for Autonomous Agents and Multiagent Systems
Description
Decentralized partially observable Markov decision processes (Dec-POMDPs) constitute an expressive framework for multiagent planning under uncertainty, but solving them is provably intractable. We demonstrate how their scalability can be improved by exploiting locality of interaction between agents in a factored representation. Factored Dec-POMDP representations have been proposed before, but only for Dec-POMDPs whose transition and observation models are fully independent. Such strong assumptions simplify the planning problem, but result in models with limited applicability. By contrast, we consider general factored Dec-POMDPs for which we analyze the model dependencies over space (locality of interaction) and time (horizon of the problem). We also present a formulation of decomposable value functions. Together, our results allow us to exploit the problem structure as well as heuristics in a single framework that is based on collaborative graphical Bayesian games (CGBGs). A preliminary experiment shows a speedup of two orders of magnitude.
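The factorization the description refers to can be illustrated with a decomposable value function, i.e. a joint value written as a sum of local payoff components, each involving only a small subset of agents (locality of interaction). The sketch below is a minimal illustration of that idea only; the component structure, the local payoff function, and the brute-force maximization are illustrative assumptions, not the paper's CGBG solution method.

```python
from itertools import product

# Illustrative sketch (not the paper's code): a decomposable value function
# Q(theta, a) = sum_e Q_e(theta_e, a_e), where each component e involves only
# a small subset of agents ("locality of interaction").

agents = [0, 1, 2]
actions = ["left", "right"]          # per-agent action set (hypothetical)
components = [(0, 1), (1, 2)]        # agent subsets, edges of an interaction graph

def local_payoff(edge, local_types, local_actions):
    """Stand-in for a local payoff table Q_e(theta_e, a_e)."""
    # Reward agreement on the same action, weighted by whether local types match.
    agree = 1.0 if len(set(local_actions)) == 1 else 0.0
    match = 1.0 if len(set(local_types)) == 1 else 0.5
    return agree * match

def joint_value(types, joint_action):
    """Q(theta, a) as a sum over local components."""
    return sum(
        local_payoff(e,
                     tuple(types[i] for i in e),
                     tuple(joint_action[i] for i in e))
        for e in components
    )

# Brute-force maximization over joint actions for a fixed joint type; the paper
# instead exploits the graph structure so that this enumeration is avoided.
types = ("t1", "t1", "t2")
best = max(product(actions, repeat=len(agents)),
           key=lambda a: joint_value(types, a))
print(best, joint_value(types, best))
```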
Total citations
[Per-year citation chart, 2008–2024]
Scholar articles
FA Oliehoek, MTJ Spaan, N Vlassis, S Whiteson - Int. Joint Conf. on Autonomous Agents and Multi-Agent …, 2008