Abstract
Difference rewards and potential-based reward shaping can both significantly improve the joint policy learnt by multiple reinforcement learning agents acting simultaneously in the same environment. Difference rewards capture an agent's contribution to the system's performance. Potential-based reward shaping has been proven not to alter the Nash equilibria of the system but requires domain-specific knowledge. This paper introduces two novel reward functions that combine these methods to leverage the benefits of both. Using the difference reward's Counterfactual as Potential (CaP) allows potential-based reward shaping to be applied to a wide range of multiagent systems without the need for domain-specific knowledge, whilst still maintaining the theoretical guarantee of consistent Nash equilibria. Alternatively, Difference Rewards incorporating Potential-Based Reward Shaping (DRiP) uses potential-based reward shaping to further shape difference rewards. By exploiting prior knowledge of a problem domain, this paper demonstrates that agents using this approach can converge up to 23.8 times faster than, or to joint policies up to 196% better than, agents using difference rewards alone.
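As a rough illustration of how the two base techniques combine, the sketch below assumes the standard definitions of a difference reward, D_i = G(z) - G(z_-i), and of potential-based shaping, F(s, s') = gamma * Phi(s') - Phi(s). The function and variable names (global_reward, counterfactual, phi, gamma) are illustrative placeholders rather than the paper's implementation, and the sign convention chosen for the CaP potential is an assumption, not taken from the source.

```python
# Hedged sketch of the combined reward signals described in the abstract.
# Standard definitions are assumed; names are placeholders, not the authors' code.

GAMMA = 0.99  # assumed discount factor for the shaping term


def difference_reward(global_reward: float, counterfactual: float) -> float:
    """D_i = G(z) - G(z_-i): the agent's contribution to system performance,
    where the counterfactual is the global reward with agent i removed
    (or replaced by a default action)."""
    return global_reward - counterfactual


def pbrs_term(phi_s: float, phi_s_next: float, gamma: float = GAMMA) -> float:
    """F(s, s') = gamma * Phi(s') - Phi(s): the standard potential-based
    shaping term, which is known not to alter the Nash equilibria."""
    return gamma * phi_s_next - phi_s


def cap_reward(global_reward: float,
               counterfactual_s: float,
               counterfactual_s_next: float,
               gamma: float = GAMMA) -> float:
    """CaP (sketch): use the counterfactual term itself as the potential,
    Phi(s) = G(z_-i), so no domain-specific knowledge is required.
    The sign convention here is an assumption."""
    return global_reward + pbrs_term(counterfactual_s, counterfactual_s_next, gamma)


def drip_reward(global_reward: float,
                counterfactual: float,
                phi_s: float,
                phi_s_next: float,
                gamma: float = GAMMA) -> float:
    """DRiP (sketch): shape the difference reward with a domain-specific
    potential function Phi encoding prior knowledge of the problem."""
    return (difference_reward(global_reward, counterfactual)
            + pbrs_term(phi_s, phi_s_next, gamma))
```

Under these assumed definitions, CaP keeps the equilibrium guarantee because it only adds a potential-based term to the global reward, while DRiP layers a knowledge-based potential on top of the agent-specific difference reward.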
Original language | English |
---|---|
Title of host publication | 13th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2014 |
Publisher | International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS) |
Pages | 165-172 |
Number of pages | 8 |
Volume | 1 |
ISBN (Electronic) | 9781634391313 |
Publication status | Published - 2014 |
Event | 13th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2014 - Paris, France. Duration: 5 May 2014 → 9 May 2014
Conference
Conference | 13th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2014 |
---|---|
Country/Territory | France |
City | Paris |
Period | 5/05/14 → 9/05/14 |
Keywords
- Multiagent reinforcement learning
- Reward shaping