Reinforcement learning of coordination in heterogeneous cooperative multi-agent systems

S Kapetanakis, D Kudenko

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Most approaches to the learning of coordination in multi-agent systems (MAS) to date require all agents to use the same learning algorithm with similar (or even identical) parameter settings. In today's open, highly inter-connected networks such an assumption becomes increasingly unrealistic: developers are starting to have less control over the agents that join the system and the learning algorithms they employ. This makes effective coordination and good learning performance extremely difficult to achieve, especially in the absence of learning agent standards. In this paper we investigate the problem of learning to coordinate with heterogeneous agents. We show that an agent employing the FMQ algorithm, a recently developed multi-agent learning method, is able to converge towards the optimal joint action when teamed up with one or more simple Q-learners. Specifically, we show such convergence in scenarios where simple Q-learners alone are unable to converge towards an optimum. Our results show that system designers may improve learning and coordination performance by adding a "smart" agent to the MAS.
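The abstract does not spell out the FMQ heuristic itself. As a rough illustration of the kind of setting the paper studies, the sketch below pairs an FMQ-style learner with a plain Q-learner in a single-stage cooperative matrix game. The payoff matrix, constants, class names and the exact form of the frequency-based bias are illustrative assumptions, not details taken from the paper.

import math
import random

# Shared payoff matrix of a 3x3 single-stage cooperative game (both agents
# receive the same reward). The numbers are purely illustrative.
PAYOFF = [
    [11, -30, 0],
    [-30, 7, 6],
    [0, 0, 5],
]


class QLearner:
    """Plain Q-learner using Boltzmann exploration over its own actions."""

    def __init__(self, n_actions, alpha=0.1, temperature=5.0, decay=0.995):
        self.q = [0.0] * n_actions
        self.alpha = alpha
        self.temperature = temperature
        self.decay = decay

    def values(self):
        # A plain Q-learner explores on the basis of its Q-values alone.
        return self.q

    def select_action(self):
        t = max(self.temperature, 0.1)
        vals = self.values()
        best = max(vals)
        # Numerically stable Boltzmann (softmax) action selection.
        weights = [math.exp((v - best) / t) for v in vals]
        total = sum(weights)
        r = random.random() * total
        acc = 0.0
        for action, w in enumerate(weights):
            acc += w
            if r <= acc:
                return action
        return len(weights) - 1

    def update(self, action, reward):
        self.q[action] += self.alpha * (reward - self.q[action])
        self.temperature *= self.decay


class FMQLearner(QLearner):
    """FMQ-style learner (sketch): biases action selection towards actions
    that frequently produced their maximum observed reward."""

    def __init__(self, n_actions, c=10.0, **kwargs):
        super().__init__(n_actions, **kwargs)
        self.c = c                                  # weight of the bias term (assumed)
        self.max_reward = [-math.inf] * n_actions   # best reward seen per action
        self.max_count = [0] * n_actions            # how often that best reward occurred
        self.count = [0] * n_actions                # how often each action was played

    def values(self):
        biased = []
        for a, q in enumerate(self.q):
            if self.count[a] == 0:
                biased.append(q)
            else:
                freq = self.max_count[a] / self.count[a]
                biased.append(q + self.c * freq * self.max_reward[a])
        return biased

    def update(self, action, reward):
        self.count[action] += 1
        if reward > self.max_reward[action]:
            self.max_reward[action] = reward
            self.max_count[action] = 1
        elif reward == self.max_reward[action]:
            self.max_count[action] += 1
        super().update(action, reward)


# Heterogeneous team: one FMQ-style agent playing alongside one plain Q-learner.
fmq, plain = FMQLearner(3), QLearner(3)
for _ in range(2000):
    a1, a2 = fmq.select_action(), plain.select_action()
    reward = PAYOFF[a1][a2]
    fmq.update(a1, reward)
    plain.update(a2, reward)

print("FMQ agent prefers action", max(range(3), key=lambda a: fmq.q[a]))
print("Q-learner prefers action", max(range(3), key=lambda a: plain.q[a]))

Run repeatedly, a setup along these lines tends to show the mixed FMQ/Q-learner pair settling on a higher-payoff joint action more reliably than two plain Q-learners would, which is the effect the abstract describes.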

Original language: English
Title of host publication: Adaptive Agents and Multi-Agent Systems II
Editors: D Kudenko, D Kazakov, E Alonso
Place of publication: Berlin
Publisher: Springer
Pages: 119-131
Number of pages: 13
ISBN (Print): 3-540-25260-6
Publication status: Published - 2005
Event: 4th Symposium on Adaptive Agents and Multi-Agent Systems, Leeds
Duration: 29 Mar 2004 - 30 Mar 2004

Conference

Conference: 4th Symposium on Adaptive Agents and Multi-Agent Systems
City: Leeds
Period: 29/03/04 - 30/03/04