An Empirical Analysis of the Impact of Prioritised Sweeping on the DynaQ's Performance

Marek Grzes, Daniel Kudenko

Research output: Contribution to conference › Paper › peer-review

Abstract

Reinforcement learning tackles the problem of how to act optimally given observations of the current world state. Agents that learn from reinforcement execute actions in an environment and receive feedback (reward) that can be used to guide the learning process. The distinguishing feature of reinforcement learning is that the model of the environment (i.e., the effects of actions and the reward function) is not known in advance. Model-based approaches represent a class of reinforcement learning algorithms that learn a model of the environment's dynamics. This model can be used by the learning agent to simulate interactions with the environment. DynaQ and its extension with prioritised sweeping are the most popular examples of model-based approaches. This paper shows that, contrary to common belief, DynaQ with prioritised sweeping may perform worse than pure DynaQ in domains where the agent can easily be misled by a sub-optimal solution.
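
For illustration only (this sketch is not taken from the paper), the following Python code shows tabular Dyna-Q on a toy deterministic chain world. The environment, state and action counts, hyper-parameters, and the uniform random sampling of model entries during planning are all assumptions made for the example.

    # Minimal tabular Dyna-Q sketch on an assumed toy chain world.
    # All constants and the environment are illustrative, not from the paper.
    import random
    from collections import defaultdict

    N_STATES, N_ACTIONS = 10, 2          # chain world: actions 0 = left, 1 = right
    GAMMA, ALPHA, EPSILON = 0.95, 0.1, 0.1
    PLANNING_STEPS = 5                   # simulated updates per real step

    Q = defaultdict(float)               # Q[(s, a)] -> action-value estimate
    model = {}                           # model[(s, a)] -> (reward, next state)

    def step(s, a):
        """Toy environment: reward 1 for reaching the right end, else 0."""
        s2 = max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))
        return (1.0 if s2 == N_STATES - 1 else 0.0), s2

    def epsilon_greedy(s):
        """Epsilon-greedy action selection with random tie-breaking."""
        if random.random() < EPSILON:
            return random.randrange(N_ACTIONS)
        vals = [Q[(s, a)] for a in range(N_ACTIONS)]
        best = max(vals)
        return random.choice([a for a, v in enumerate(vals) if v == best])

    def q_update(s, a, r, s2):
        """One-step Q-learning backup."""
        best_next = max(Q[(s2, b)] for b in range(N_ACTIONS))
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])

    for episode in range(200):
        s = 0
        while s != N_STATES - 1:
            a = epsilon_greedy(s)
            r, s2 = step(s, a)           # real experience
            q_update(s, a, r, s2)        # direct RL update
            model[(s, a)] = (r, s2)      # learn the (deterministic) model
            for _ in range(PLANNING_STEPS):
                # planning: replay simulated experience from the learned model,
                # sampling previously seen (state, action) pairs uniformly
                ps, pa = random.choice(list(model))
                pr, ps2 = model[(ps, pa)]
                q_update(ps, pa, pr, ps2)
            s = s2

    print("Greedy actions:", [max(range(N_ACTIONS), key=lambda a: Q[(s, a)]) for s in range(N_STATES)])

Prioritised sweeping, the variant compared in the paper, would replace the uniform random sampling in the planning loop with a priority queue keyed on the magnitude of the temporal-difference error, so that backups are concentrated on the (state, action) pairs whose values are expected to change most.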
Original language: Undefined/Unknown
Pages: 1041-1051
DOIs
Publication status: Published - 2008