Improving Optimistic Exploration in Model-Free Reinforcement Learning

Marek Grzes, Daniel Kudenko

Research output: Contribution to conference › Paper › peer-review

Abstract

The key problem in reinforcement learning is the exploration-exploitation trade-off. Optimistic initialisation of the value function is a popular exploration strategy in RL. The drawback of this approach is that the algorithm may still have relatively low performance after many episodes of learning. In this paper, two extensions to standard optimistic exploration are proposed. The first is based on a different initialisation of the value function of goal states. The second, which builds on the first, explicitly separates the propagation of low and high values in the state space. The proposed extensions show improvement in empirical comparisons with basic optimistic initialisation. Additionally, they improve anytime performance and help on domains where learning takes place in a sub-space of a large state space, that is, where the standard optimistic approach faces more difficulties.
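For context, the sketch below shows standard optimistic initialisation in tabular Q-learning, the baseline the paper extends: every unseen state-action value starts at an optimistic upper bound, so the greedy policy is drawn towards untried actions. The environment interface, parameter values, and helper names are hypothetical, and the paper's two extensions (separate goal-state initialisation and split propagation of low and high values) are not reproduced here.

```python
import random
from collections import defaultdict

ALPHA = 0.1   # learning rate (assumed value)
GAMMA = 0.95  # discount factor (assumed value)
Q_MAX = 1.0   # optimistic initial value, an assumed upper bound on returns


def make_q_table():
    # Every unseen (state, action) pair defaults to the optimistic value
    # Q_MAX, which is what drives exploration under a greedy policy.
    return defaultdict(lambda: Q_MAX)


def greedy_action(q, state, actions):
    # Greedy action selection with random tie-breaking, so that equally
    # optimistic (i.e. untried) actions are sampled uniformly.
    best = max(q[(state, a)] for a in actions)
    return random.choice([a for a in actions if q[(state, a)] == best])


def q_update(q, state, action, reward, next_state, actions, done):
    # Standard one-step Q-learning backup; repeated updates gradually
    # pull over-optimistic values down towards the true returns.
    if done:
        target = reward
    else:
        target = reward + GAMMA * max(q[(next_state, a)] for a in actions)
    q[(state, action)] += ALPHA * (target - q[(state, action)])
```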
Original language: Undefined/Unknown
Pages: 360-369
Publication status: Published - 2009
