D-CIS Publication Database


Type of publication: Inproceedings
Entered by:JOSM
Title Dec-POMDPs with delayed communication
Bibtex cite ID
Booktitle Proceedings of the Workshop on Multi-agent Sequential Decision Making in Uncertain Domains (MSDM), at the 6th International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS 2007)
Year published 2007
Month May
Location 14-18 May 2007, Honolulu, Hawaii
Keywords Dec-POMDPs, delayed communication
Abstract In this work we consider the problem of multiagent planning under sensing and acting uncertainty with a one-time-step delay in communication. We adopt decentralized partially observable Markov decision processes (Dec-POMDPs) as our planning framework. When instantaneous and noise-free communication is available, agents can instantly share local observations. This effectively reduces the decentralized planning problem to a centralized one, with a significant decrease in planning complexity. However, instantaneous communication is a strong assumption, as it requires the agents to synchronize at every time step. Therefore, we explore planning in Dec-POMDP settings in which communication is delayed by one time step. We show that such situations can be modeled by Bayesian games in which the types of the agents are defined by their last private observation. We apply Bayesian games to define a value function QBG on the joint belief space, and we show that it is the optimal payoff function for our Dec-POMDP setting with one-time-step delayed communication. The QBG-value function is piecewise linear and convex over the joint belief space, which we use to define QBG-value iteration. Finally, we adapt Perseus, an approximate POMDP solver, to compute QBG-value functions, and we use it to perform some proof-of-concept experiments.
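The core construction in the abstract is a one-shot Bayesian game whose types are the agents' last private observations: each agent commits to a decision rule mapping its own observation to its own action, and the planner maximizes the expected joint payoff over all pairs of such rules. The following minimal sketch illustrates that maximization for a toy two-agent game; the observation sets, type distribution, and payoff function are all hypothetical placeholders, not the paper's model.

```python
import itertools

# Toy one-shot Bayesian game (illustrative sketch, not the paper's model):
# each agent's "type" is its last private observation, and each agent picks
# a decision rule beta_i mapping its own observation to its own action.
obs1 = ["o1a", "o1b"]   # agent 1's possible last observations (types)
obs2 = ["o2a", "o2b"]   # agent 2's possible last observations (types)
acts = [0, 1]           # actions available to each agent

# Hypothetical joint distribution P(o1, o2) over last observations.
p = {("o1a", "o2a"): 0.4, ("o1a", "o2b"): 0.1,
     ("o1b", "o2a"): 0.1, ("o1b", "o2b"): 0.4}

def payoff(o1, o2, a1, a2):
    """Toy joint payoff: each agent earns +1 for matching its type
    ('a'-types want action 0, 'b'-types want action 1)."""
    want1 = 0 if o1.endswith("a") else 1
    want2 = 0 if o2.endswith("a") else 1
    return (a1 == want1) + (a2 == want2)

best_val, best_rules = float("-inf"), None
# Enumerate all decentralized decision-rule pairs (beta1, beta2):
# beta_i assigns one action to each of agent i's types.
for r1 in itertools.product(acts, repeat=len(obs1)):
    for r2 in itertools.product(acts, repeat=len(obs2)):
        beta1 = dict(zip(obs1, r1))
        beta2 = dict(zip(obs2, r2))
        val = sum(prob * payoff(o1, o2, beta1[o1], beta2[o2])
                  for (o1, o2), prob in p.items())
        if val > best_val:
            best_val, best_rules = val, (beta1, beta2)

print(best_val)  # expected payoff of the best decentralized rule pair
```

In the paper's setting, this Bayesian-game maximization appears inside a value-iteration backup over joint beliefs (the QBG backup); the sketch isolates only the inner max over decentralized decision rules.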
Authors
Oliehoek, Frans
Spaan, Matthijs T. J.
Vlassis, Nikos
Total mark: 5