Multigrid Reinforcement Learning with Reward Shaping

Research output: Contribution to conference › Paper

Publication details

Date published: 2008
Original language: Undefined/Unknown

Abstract

Potential-based reward shaping has been shown to be a powerful method for improving the convergence rate of reinforcement learning agents. It is a flexible technique for incorporating background knowledge into temporal-difference learning in a principled way. However, the question remains of how to compute the potential function that is used to shape the reward given to the learning agent. In this paper we propose a way to solve this problem in reinforcement learning with state space discretisation. In particular, we show that the potential function can be learned online, in parallel with the actual reinforcement learning process. If the Q-function is learned for states determined by a given grid, a V-function for states at a lower resolution can be learned in parallel and used to approximate the potential for the ground-level learning. The novel algorithm is presented and experimentally evaluated.
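The abstract describes a two-resolution scheme: a ground-level Q-function learned on a fine grid with a shaped reward, and a coarse-grid V-function learned in parallel that supplies the shaping potential. The following is a minimal Python sketch of that general structure, not the paper's implementation; the environment (a 1-D corridor on [0, 1) with the goal at the right end), the parameter names (FINE_BINS, COARSE_BINS, BETA), and all constants are illustrative assumptions.

```python
import random
from collections import defaultdict

# Minimal sketch only. The toy task, parameter names, and constants below are
# illustrative assumptions, not taken from the paper.

GAMMA = 0.99
ALPHA = 0.1         # learning rate for the ground-level Q-function
BETA = 0.2          # learning rate for the coarse-level V-function (the potential)
EPSILON = 0.1       # epsilon-greedy exploration
FINE_BINS = 50      # resolution of the grid used by the ground learner
COARSE_BINS = 5     # lower-resolution grid whose V-function supplies the potential
ACTIONS = (-1, +1)  # move one fine cell left / right

Q = defaultdict(float)  # Q[(fine_state, action)], learned with the shaped reward
V = defaultdict(float)  # V[coarse_state], learned in parallel with the plain reward


def fine_state(x):
    return min(int(x * FINE_BINS), FINE_BINS - 1)


def coarse_state(x):
    return min(int(x * COARSE_BINS), COARSE_BINS - 1)


def potential(x):
    # The potential of a ground state is the value of the coarse cell containing it.
    return V[coarse_state(x)]


def choose_action(x):
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    s = fine_state(x)
    return max(ACTIONS, key=lambda a: Q[(s, a)])


def step(x, a):
    # Deterministic toy dynamics: move one fine cell; reaching x >= 1 ends the episode.
    x_next = x + a / FINE_BINS
    if x_next >= 1.0:
        return 1.0, 1.0, True       # (next state, reward, done)
    return max(x_next, 0.0), 0.0, False


def episode(max_steps=500):
    x = random.random()             # random start keeps the toy task from stalling
    for _ in range(max_steps):
        a = choose_action(x)
        x_next, r, done = step(x, a)

        # Potential-based shaping: F = gamma * Phi(s') - Phi(s), with Phi = 0 at terminal states.
        phi, phi_next = potential(x), (0.0 if done else potential(x_next))
        shaped_r = r + GAMMA * phi_next - phi

        # Ground-level Q-learning update on the fine grid, using the shaped reward.
        s, s_next = fine_state(x), fine_state(x_next)
        target = shaped_r + (0.0 if done else GAMMA * max(Q[(s_next, b)] for b in ACTIONS))
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])

        # In parallel, a TD(0) update of the coarse V-function with the unshaped reward;
        # this is the value function that defines the potential above.
        c, c_next = coarse_state(x), coarse_state(x_next)
        v_target = r + (0.0 if done else GAMMA * V[c_next])
        V[c] += BETA * (v_target - V[c])

        if done:
            break
        x = x_next


if __name__ == "__main__":
    for _ in range(300):
        episode()
    print("learned coarse potentials:", {c: round(v, 3) for c, v in sorted(V.items())})
```

How the coarse V-function is updated, how the two grid resolutions are chosen, and how the potential interacts with the ground learner in detail are specifics of the paper's algorithm; the sketch only illustrates the overall multigrid shaping structure.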
