Abstract
Most approaches to learning coordination in multi-agent systems (MAS) to date require all agents to use the same learning algorithm with similar (or even identical) parameter settings. With today's open networks and high degree of inter-connectivity, such an assumption becomes increasingly unrealistic. Developers have less and less control over the agents that join a system and the learning algorithms those agents employ. This makes effective coordination and good learning performance extremely difficult to achieve, especially in the absence of standards for learning agents. In this paper we investigate the problem of learning to coordinate with heterogeneous agents. We show that an agent employing the FMQ algorithm, a recently developed multi-agent learning method, can converge towards the optimal joint action when teamed up with one or more simple Q-learners. Specifically, we show such convergence in scenarios where simple Q-learners alone are unable to converge towards an optimum. Our results show that system designers may improve learning and coordination performance by adding a "smart" agent to the MAS.
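For illustration, below is a minimal, self-contained sketch of the kind of setting the abstract describes: one FMQ-style learner paired with a plain independent Q-learner in a single-stage cooperative game. The payoff matrix (a climbing-game-style table), the weighting constant `c`, the temperature schedule, and the assumed heuristic form EV(a) = Q(a) + c · freq_max(a) · r_max(a) are illustrative assumptions, not the paper's exact experimental configuration.

```python
# Sketch (not the authors' code): an FMQ-style learner teamed with a plain
# Q-learner in a single-stage cooperative game. Constants and the payoff
# matrix are illustrative assumptions.
import math
import random

# Climbing-game-style payoff matrix: rows = agent 1's action, cols = agent 2's.
# The optimal joint action is (0, 0) with reward 11.
PAYOFF = [
    [11, -30, 0],
    [-30, 7, 6],
    [0, 0, 5],
]

class QLearner:
    """Independent learner: Q(a) is moved toward the received reward."""
    def __init__(self, n_actions, alpha=0.1):
        self.q = [0.0] * n_actions
        self.alpha = alpha

    def value(self, a):
        return self.q[a]

    def select(self, temperature):
        # Boltzmann exploration over the agent's action values
        # (max is subtracted before exponentiating for numerical stability).
        vals = [self.value(a) / temperature for a in range(len(self.q))]
        m = max(vals)
        prefs = [math.exp(v - m) for v in vals]
        r = random.random() * sum(prefs)
        acc = 0.0
        for a, p in enumerate(prefs):
            acc += p
            if r <= acc:
                return a
        return len(self.q) - 1

    def update(self, a, reward):
        self.q[a] += self.alpha * (reward - self.q[a])

class FMQLearner(QLearner):
    """FMQ-style learner: biases each action's value by how often that action
    has yielded its maximum observed reward (assumed form
    EV(a) = Q(a) + c * freq_max(a) * r_max(a))."""
    def __init__(self, n_actions, alpha=0.1, c=10.0):
        super().__init__(n_actions, alpha)
        self.c = c
        self.r_max = [float("-inf")] * n_actions   # best reward seen per action
        self.max_count = [0] * n_actions           # times that best reward occurred
        self.count = [0] * n_actions               # times the action was played

    def value(self, a):
        if self.count[a] == 0:
            return self.q[a]
        freq_max = self.max_count[a] / self.count[a]
        return self.q[a] + self.c * freq_max * self.r_max[a]

    def update(self, a, reward):
        super().update(a, reward)
        self.count[a] += 1
        if reward > self.r_max[a]:
            self.r_max[a] = reward
            self.max_count[a] = 1
        elif reward == self.r_max[a]:
            self.max_count[a] += 1

def run(episodes=2000):
    agent1 = FMQLearner(3)   # the "smart" agent
    agent2 = QLearner(3)     # a plain independent Q-learner
    for t in range(episodes):
        temperature = max(0.05, 50.0 * math.exp(-0.005 * t))  # assumed decay schedule
        a1, a2 = agent1.select(temperature), agent2.select(temperature)
        reward = PAYOFF[a1][a2]
        agent1.update(a1, reward)
        agent2.update(a2, reward)
    return agent1, agent2

if __name__ == "__main__":
    a1, a2 = run()
    print("agent 1 greedy action:", max(range(3), key=a1.value))
    print("agent 2 greedy action:", max(range(3), key=a2.value))
```

In this sketch the FMQ agent's optimistic bias towards actions that have previously produced high rewards tends to pull both learners towards the high-payoff joint action, which is the kind of effect the paper investigates; the exact games, schedules and convergence results are those reported in the paper itself.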
| Original language | English |
|---|---|
| Title of host publication | Adaptive Agents and Multi-Agent Systems II |
| Editors | D. Kudenko, D. Kazakov, E. Alonso |
| Place of publication | Berlin |
| Publisher | Springer |
| Pages | 119-131 |
| Number of pages | 13 |
| ISBN (Print) | 3-540-25260-6 |
| Publication status | Published - 2005 |
| Event | 4th Symposium on Adaptive Agents and Multi-Agent Systems, Leeds, 29 Mar 2004 → 30 Mar 2004 |