Potential-based difference rewards for multiagent reinforcement learning

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Publication details

Title of host publication: 13th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2014
Date: Published - 2014
Pages: 165-172
Number of pages: 8
Publisher: International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS)
Volume: 1
Original language: English
ISBN (Electronic): 9781634391313

Abstract

Difference rewards and potential-based reward shaping can both significantly improve the joint policy learnt by multiple reinforcement learning agents acting simultaneously in the same environment. Difference rewards capture an agent's contribution to the system's performance. Potential-based reward shaping has been proven not to alter the Nash equilibria of the system, but requires domain-specific knowledge. This paper introduces two novel reward functions that combine these methods to leverage the benefits of both. Using the difference reward's Counterfactual as Potential (CaP) allows the application of potential-based reward shaping to a wide range of multiagent systems without the need for domain-specific knowledge, whilst still maintaining the theoretical guarantee of consistent Nash equilibria. Alternatively, Difference Rewards incorporating Potential-Based Reward Shaping (DRiP) uses potential-based reward shaping to further shape difference rewards. By exploiting prior knowledge of a problem domain, this paper demonstrates that agents using this approach can converge up to 23.8 times faster than, or to joint policies up to 196% better than, agents using difference rewards alone.
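
The sketch below is a minimal illustration of how the quantities named in the abstract fit together: a difference reward D_i = G(z) - G(z_-i), a potential-based shaping term F(s, s') = gamma * Phi(s') - Phi(s), CaP (the counterfactual term used as the potential), and DRiP (the difference reward further shaped by a domain-specific potential). The toy global reward, the default counterfactual action, the function names, and the exact way CaP and DRiP compose these terms here are assumptions for illustration, not the paper's formulation.

    # Minimal sketch with assumed names and toy numbers.
    GAMMA = 0.99  # discount factor assumed for the shaping term

    def G(joint_action):
        """Toy global reward: system performance of a joint action."""
        return sum(joint_action) - 0.1 * sum(a * a for a in joint_action)

    def counterfactual(joint_action, i, default=0.0):
        """Replace agent i's action with a default (counterfactual) action."""
        z = list(joint_action)
        z[i] = default
        return z

    def difference_reward(joint_action, i):
        """D_i = G(z) - G(z_-i): agent i's contribution to system performance."""
        return G(joint_action) - G(counterfactual(joint_action, i))

    def shaping(phi, s, s_next):
        """Potential-based shaping term F(s, s') = gamma * Phi(s') - Phi(s);
        adding F to a reward preserves the Nash equilibria of the game."""
        return GAMMA * phi(s_next) - phi(s)

    def cap_reward(z, z_next, i):
        """Counterfactual as Potential (CaP), as assumed here: the counterfactual
        term G(z_-i) plays the role of Phi, so no domain knowledge is needed."""
        phi = lambda joint: G(counterfactual(joint, i))
        return G(z_next) + shaping(phi, z, z_next)

    def drip_reward(z, z_next, i, phi):
        """DRiP, as assumed here: the difference reward further shaped by a
        domain-specific potential function phi."""
        return difference_reward(z_next, i) + shaping(phi, z, z_next)

    # Example transition for agent 0 in a three-agent system.
    z, z_next = [1.0, 2.0, 3.0], [2.0, 2.0, 3.0]
    domain_phi = lambda joint: sum(joint)  # hypothetical domain-knowledge potential
    print(difference_reward(z_next, 0), cap_reward(z, z_next, 0),
          drip_reward(z, z_next, 0, domain_phi))
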

Research areas

  • Multiagent reinforcement learning
  • Reward shaping
