Baselines for joint-action reinforcement learning of coordination in cooperative multi-agent systems

M Carpenter, D Kudenko

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

A common assumption for the study of reinforcement learning of coordination is that agents can observe each other's actions (so-called joint-action learning). We present in this paper a number of simple joint-action learning algorithms and show that they perform very well when compared against more complex approaches such as OAL [1], while still maintaining convergence guarantees. Based on the empirical results, we argue that these simple algorithms should be used as baselines for any future research on joint-action learning of coordination.
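To illustrate the kind of simple joint-action learner the abstract refers to, the sketch below shows a generic joint-action Q-learner in a small cooperative matrix game. This is not the paper's exact algorithm: the payoff matrix, hyperparameters (`alpha`, `epsilon`), and the fictitious-play-style opponent model are illustrative assumptions. Each agent keeps Q-values over joint actions, estimates the other agent's action distribution from observed counts, and acts greedily on the resulting expected values.

```python
import random

# Hypothetical 2x2 cooperative coordination game: both agents
# receive the same payoff; matched actions pay 10, mismatched pay 0.
PAYOFF = [[10, 0],
          [0, 10]]

class JointActionLearner:
    """Minimal joint-action Q-learner: Q-values over joint actions
    plus empirical counts of the other agent's actions (assumed
    observable, as in joint-action learning)."""

    def __init__(self, n_actions, alpha=0.1, epsilon=0.1):
        self.n = n_actions
        self.alpha = alpha      # learning rate (illustrative value)
        self.epsilon = epsilon  # exploration rate (illustrative value)
        # q[my_action][their_action]
        self.q = [[0.0] * n_actions for _ in range(n_actions)]
        # Counts of the other agent's actions, with an add-one prior.
        self.counts = [1] * n_actions

    def expected_value(self, a):
        # Expected payoff of own action a under the empirical model
        # of the other agent's action distribution.
        total = sum(self.counts)
        return sum(self.q[a][b] * self.counts[b] / total
                   for b in range(self.n))

    def act(self):
        if random.random() < self.epsilon:
            return random.randrange(self.n)
        return max(range(self.n), key=self.expected_value)

    def update(self, my_a, their_a, reward):
        self.counts[their_a] += 1
        self.q[my_a][their_a] += self.alpha * (reward - self.q[my_a][their_a])

random.seed(0)
a1, a2 = JointActionLearner(2), JointActionLearner(2)
for _ in range(2000):
    x, y = a1.act(), a2.act()
    r = PAYOFF[x][y]
    a1.update(x, y, r)   # agent 1 observes agent 2's action
    a2.update(y, x, r)   # and vice versa

# Greedy play after learning: the agents should have locked onto
# one of the two optimal coordinated joint actions.
a1.epsilon = a2.epsilon = 0.0
print(a1.act(), a2.act(), PAYOFF[a1.act()][a2.act()])
```

Because mismatched joint actions are not equilibria of this game, the opponent model steers both greedy learners toward the same matched pair, which is why even this simple baseline coordinates reliably here.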

Original language: English
Title of host publication: ADAPTIVE AGENTS AND MULTI-AGENT SYSTEMS II
Editors: D Kudenko, D Kazakov, E Alonso
Place of Publication: BERLIN
Publisher: Springer
Pages: 55-72
Number of pages: 18
ISBN (Print): 3-540-25260-6
Publication status: Published - 2005
Event: 4th Symposium on Adaptive Agents and Multi-Agent Systems - Leeds
Duration: 29 Mar 2004 – 30 Mar 2004

Conference

Conference: 4th Symposium on Adaptive Agents and Multi-Agent Systems
City: Leeds
Period: 29/03/04 – 30/03/04
