Abstract
We report on an investigation of reinforcement learning techniques for learning coordination in cooperative multi-agent systems. Specifically, we focus on a novel action selection strategy for Q-learning (Watkins 1989). The new technique is applicable to scenarios where mutual observation of actions is not possible.
To date, reinforcement learning approaches for such independent agents have not guaranteed convergence to the optimal joint action in scenarios with high miscoordination costs. We improve on previous results (Claus & Boutilier 1998) by demonstrating empirically that our extension causes the agents to converge almost always to the optimal joint action, even in these difficult cases.
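The setting the abstract describes can be sketched as two independent Q-learners playing a single-stage cooperative matrix game: each agent sees only its own action and the shared reward, never the other agent's choice. The payoff matrix, learning rate, and epsilon-greedy selection below are illustrative assumptions in the style of Claus & Boutilier (1998) — they are not the paper's actual heuristic, which is the contribution the abstract refers to.

```python
import random

# Hypothetical 3x3 cooperative game with a high miscoordination penalty
# (-30): both agents receive the same reward PAYOFF[x][y]. Values are
# illustrative only.
PAYOFF = [
    [11, -30, 0],
    [-30, 7, 6],
    [0, 0, 5],
]

class IndependentQLearner:
    """A Q-learner that observes only its own action and the shared reward."""

    def __init__(self, n_actions=3, alpha=0.1):
        self.q = [0.0] * n_actions
        self.alpha = alpha

    def act(self, epsilon=0.1):
        # Plain epsilon-greedy selection. The paper proposes a more
        # sophisticated selection strategy; it is not reproduced here.
        if random.random() < epsilon:
            return random.randrange(len(self.q))
        return max(range(len(self.q)), key=lambda a: self.q[a])

    def update(self, action, reward):
        # Standard single-state Q-update toward the observed reward.
        self.q[action] += self.alpha * (reward - self.q[action])

random.seed(0)
a1, a2 = IndependentQLearner(), IndependentQLearner()
for _ in range(5000):
    x, y = a1.act(), a2.act()
    r = PAYOFF[x][y]
    a1.update(x, r)
    a2.update(y, r)
print(a1.act(epsilon=0), a2.act(epsilon=0))
```

With this naive selection strategy the greedy joint action often settles on a safe but suboptimal pair rather than the optimal (0, 0), because the -30 miscoordination penalties drag down the Q-values of the optimal individual actions — this is exactly the failure mode the paper's extension addresses.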
| Original language | English |
| --- | --- |
| Title of host publication | Eighteenth National Conference on Artificial Intelligence (AAAI-02) / Fourteenth Innovative Applications of Artificial Intelligence Conference (IAAI-02), Proceedings |
| Place of Publication | Cambridge |
| Publisher | MIT Press |
| Pages | 326-331 |
| Number of pages | 6 |
| ISBN (Print) | 0-262-51129-0 |
| Publication status | Published - 2002 |
| Event | AAAI-02, Edmonton, Alberta, Canada; duration: 28 Jul 2002 → 1 Aug 2002 |
Conference

| Conference | AAAI-02 |
| --- | --- |
| Country/Territory | Canada |
| City | Edmonton, Alberta |
| Period | 28/07/02 → 1/08/02 |