Learning to coordinate using commitment sequences in cooperative multi-agent systems

S. Kapetanakis, D. Kudenko, M. J. A. Strens

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We report on an investigation of the learning of coordination in cooperative multi-agent systems. Specifically, we study solutions that are applicable to independent agents, i.e. agents that do not observe one another's actions. In previous research [5] we presented a reinforcement learning approach that converges to the optimal joint action even in scenarios with high miscoordination costs; however, that approach fails in fully stochastic environments. In this paper, we present a novel approach based on reward estimation with a shared action-selection protocol. The new technique is applicable in fully stochastic environments where mutual observation of actions is not possible. We demonstrate empirically that our approach makes the agents almost always converge to the optimal joint action, even in difficult stochastic scenarios with high miscoordination penalties.
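The paper itself specifies the protocol; as a rough, non-authoritative illustration of the core idea, the Python sketch below pairs two independent learners in a fully stochastic single-stage coordination game. Both agents follow a shared schedule of growing commitment blocks: each agent replays its chosen action for the whole block and averages the rewards it observes, so noisy payoffs are estimated rather than trusted from single samples. All payoff values, the schedule's growth factor, and every identifier here are hypothetical choices for illustration, not taken from the paper.

```python
import random

# Illustrative fully stochastic climbing-style game: every joint action pays
# one of two equally likely rewards, so a single sample is a poor guide. All
# values are hypothetical, chosen only to exhibit miscoordination penalties.
ACTIONS = ["a", "b", "c"]
REWARDS = {
    ("a", "a"): (10, 12), ("a", "b"): (-40, -20), ("a", "c"): (-5, 5),
    ("b", "a"): (-40, -20), ("b", "b"): (14, 0), ("b", "c"): (4, 8),
    ("c", "a"): (-5, 5), ("c", "b"): (-2, 2), ("c", "c"): (3, 7),
}

def sample_reward(joint_action):
    """Both agents receive the same stochastic reward: the game is cooperative."""
    return random.choice(REWARDS[joint_action])

class IndependentAgent:
    """Sees only its own action and the shared reward, never the partner's action."""
    def __init__(self, epsilon=0.1):
        self.estimate = {a: 0.0 for a in ACTIONS}  # running mean reward per action
        self.count = {a: 0 for a in ACTIONS}
        self.epsilon = epsilon

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)           # occasional exploration
        return max(ACTIONS, key=self.estimate.get)  # greedy otherwise

    def update(self, action, block_mean, n_samples):
        # Fold the block's mean reward into the running estimate for the action.
        self.count[action] += n_samples
        step = n_samples / self.count[action]
        self.estimate[action] += step * (block_mean - self.estimate[action])

def commitment_lengths(n_blocks, growth=1.5):
    """Shared commitment schedule. Later blocks are longer, so the agents
    average more samples of a stationary joint action, progressively washing
    out the payoff noise."""
    length = 1.0
    for _ in range(n_blocks):
        yield max(1, round(length))
        length *= growth

agents = (IndependentAgent(), IndependentAgent())
for block_len in commitment_lengths(30):
    joint = tuple(agent.choose() for agent in agents)  # commit for the whole block
    block_mean = sum(sample_reward(joint) for _ in range(block_len)) / block_len
    for agent, action in zip(agents, joint):
        agent.update(action, block_mean, block_len)

for i, agent in enumerate(agents):
    print(f"agent {i} estimates: {agent.estimate}")
```

Because both agents hold their actions fixed within a block, the committed joint action is stationary there, and the block mean is an unbiased estimate of that joint action's expected reward. This is what lets independent learners cope with fully stochastic payoffs without ever observing each other's actions.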

Original language: English
Title of host publication: Adaptive Agents and Multi-Agent Systems II
Editors: D. Kudenko, D. Kazakov, E. Alonso
Place of publication: Berlin
Publisher: Springer
Pages: 106-118
Number of pages: 13
ISBN (Print): 3-540-25260-6
Publication status: Published - 2005
Event: 4th Symposium on Adaptive Agents and Multi-Agent Systems, Leeds
Duration: 29 Mar 2004 - 30 Mar 2004

Conference

Conference: 4th Symposium on Adaptive Agents and Multi-Agent Systems
City: Leeds
Period: 29/03/04 - 30/03/04