Improving Optimistic Exploration in Model-Free Reinforcement Learning

Research output: Contribution to conference › Paper

Publication details

Date published: 2009
Original language: Undefined/Unknown

Abstract

The key problem in reinforcement learning is the exploration-exploitation tradeoff. Optimistic initialisation of the value function is a popular RL exploration strategy. The problem with this approach is that the algorithm may still achieve relatively low performance after many episodes of learning. In this paper, two extensions to standard optimistic exploration are proposed. The first is based on a different initialisation of the value function of goal states. The second, which builds on the first, explicitly separates the propagation of low and high values in the state space. The proposed extensions show improvement in empirical comparisons with basic optimistic initialisation. Additionally, they improve anytime performance and help on domains where learning takes place on a sub-space of a large state space, that is, where the standard optimistic approach faces more difficulties.
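
For context, the sketch below illustrates only the baseline strategy the paper builds on: tabular Q-learning where exploration is driven by optimistically initialised values rather than random action noise. It does not reproduce the paper's two proposed extensions. The environment interface (`reset()`, `step()`), the function names, and the parameter values are illustrative assumptions, not taken from the paper.

```python
import random
from collections import defaultdict

def make_optimistic_q(optimistic_value=1.0):
    # Every unseen state-action pair starts at a high (optimistic) value,
    # which pushes the greedy policy to try actions it has not visited yet.
    # The value 1.0 is an illustrative choice, not a value from the paper.
    return defaultdict(lambda: optimistic_value)

def q_learning_episode(env, Q, actions, alpha=0.1, gamma=0.99):
    # One episode of purely greedy Q-learning; exploration comes entirely
    # from the optimistic initial values rather than from epsilon-greedy noise.
    # `env` is an assumed environment object: reset() returns a state,
    # step(action) returns (next_state, reward, done).
    state = env.reset()
    done = False
    while not done:
        # Greedy action selection, ties broken at random.
        best = max(Q[(state, a)] for a in actions)
        action = random.choice([a for a in actions if Q[(state, a)] == best])

        next_state, reward, done = env.step(action)

        # Standard Q-learning backup toward the one-step target.
        if done:
            target = reward
        else:
            target = reward + gamma * max(Q[(next_state, a)] for a in actions)
        Q[(state, action)] += alpha * (target - Q[(state, action)])

        state = next_state
    return Q
```

Because every backup pulls the optimistic estimates down toward observed returns only slowly, the greedy policy can remain biased toward over-valued, rarely visited regions for many episodes, which is the weak anytime performance the abstract refers to.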
