Abstract
A common assumption in the study of reinforcement learning of coordination is that agents can observe each other's actions (so-called joint-action learning). In this paper we present a number of simple joint-action learning algorithms and show that they perform very well compared against more complex approaches such as OAL [1], while still maintaining convergence guarantees. Based on these empirical results, we argue that such simple algorithms should be used as baselines for any future research on joint-action learning of coordination.
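The paper's own algorithms are not reproduced in this record. As an illustration of the joint-action-learning setting the abstract describes — each agent observes the other's action and learns values over *joint* actions — the following is a minimal sketch in the style of a classic joint-action learner (Q-values over joint actions plus an empirical frequency model of the other agent). All names and parameter choices here are illustrative assumptions, not the paper's method.

```python
import random


class JointActionLearner:
    """Minimal joint-action learner for a 2-player repeated matrix game.

    Illustrative sketch only: it keeps Q-values over joint actions and an
    empirical frequency model of the other agent's play, and best-responds
    to that model. This is not the specific algorithm from the paper.
    """

    def __init__(self, n_actions, alpha=0.1, epsilon=0.1, seed=None):
        self.n = n_actions
        self.alpha = alpha      # learning rate
        self.epsilon = epsilon  # exploration rate
        # q[mine][theirs]: value estimate for each joint action
        self.q = [[0.0] * n_actions for _ in range(n_actions)]
        self.counts = [0] * n_actions  # observed opponent actions
        self.rng = random.Random(seed)

    def opponent_probs(self):
        total = sum(self.counts)
        if total == 0:
            # uniform prior before any observation
            return [1.0 / self.n] * self.n
        return [c / total for c in self.counts]

    def expected_value(self, my_action):
        # Expected payoff of my_action under the empirical opponent model
        probs = self.opponent_probs()
        return sum(p * self.q[my_action][b] for b, p in enumerate(probs))

    def choose_action(self):
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(self.n)
        return max(range(self.n), key=self.expected_value)

    def update(self, my_action, their_action, reward):
        # Joint-action learning: the other agent's action is observable,
        # so both the joint-action value and the opponent model are updated.
        self.counts[their_action] += 1
        self.q[my_action][their_action] += self.alpha * (
            reward - self.q[my_action][their_action]
        )


# Self-play on a simple 2x2 coordination game: reward 1 if actions match.
def coordination_reward(a, b):
    return 1.0 if a == b else 0.0


a1 = JointActionLearner(2, seed=0)
a2 = JointActionLearner(2, seed=1)
for _ in range(500):
    x, y = a1.choose_action(), a2.choose_action()
    r = coordination_reward(x, y)
    a1.update(x, y, r)
    a2.update(y, x, r)
# After repeated play the agents typically settle on a common action.
```

The point of the example is the extra observability assumption: because `their_action` is visible, the learner can condition its value estimates on the joint action, which is exactly what an independent learner cannot do.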
Original language | English |
---|---|
Title of host publication | Adaptive Agents and Multi-Agent Systems II |
Editors | D. Kudenko, D. Kazakov, E. Alonso |
Place of publication | Berlin |
Publisher | Springer |
Pages | 55-72 |
Number of pages | 18 |
ISBN (Print) | 3-540-25260-6 |
Publication status | Published - 2005 |
Event | 4th Symposium on Adaptive Agents and Multi-Agent Systems, Leeds, 29 Mar 2004 → 30 Mar 2004 |
Conference
Conference | 4th Symposium on Adaptive Agents and Multi-Agent Systems |
---|---|
City | Leeds |
Period | 29/03/04 → 30/03/04 |