Improving Optimistic Exploration in Model-Free Reinforcement Learning

Marek Grzes, Daniel Kudenko

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

The key problem in reinforcement learning is the exploration-exploitation trade-off. Optimistic initialisation of the value function is a popular RL strategy. The problem with this approach is that the algorithm may show relatively low performance even after many episodes of learning. In this paper, two extensions to standard optimistic exploration are proposed. The first is based on a different initialisation of the value function of goal states. The second, which builds on the first, explicitly separates the propagation of low and high values in the state space. The proposed extensions show improvement in empirical comparisons with basic optimistic initialisation. Additionally, they improve anytime performance and help on domains where learning takes place on a sub-space of a large state space, that is, where the standard optimistic approach faces more difficulties.
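The baseline the abstract refers to is optimistic initialisation in a model-free, tabular setting. The sketch below illustrates that baseline only, assuming tabular Q-learning with a known reward upper bound; names such as R_MAX, GAMMA, ACTIONS and GOAL_STATES are illustrative and not taken from the paper, and the paper's two extensions are only hinted at in a comment because their exact formulation is not given on this page.

    # Illustrative sketch of optimistic initialisation in tabular Q-learning.
    # Assumptions (not from the paper): rewards bounded by R_MAX, discount GAMMA,
    # a small discrete action set, and a greedy policy that relies on optimism
    # alone for exploration.
    import random
    from collections import defaultdict

    R_MAX = 1.0                      # assumed reward upper bound
    GAMMA = 0.95
    ALPHA = 0.1
    ACTIONS = ["up", "down", "left", "right"]
    GOAL_STATES = set()              # task-specific terminal states (illustrative)

    # Standard optimistic initialisation: every Q-value starts at an upper bound
    # on achievable return, so untried actions look attractive and get explored.
    OPTIMISTIC_Q0 = R_MAX / (1.0 - GAMMA)

    # One reading of the paper's first extension: seed the values associated with
    # goal states differently from this optimistic bound; the exact scheme is not
    # specified in the abstract, so it is not implemented here.
    def make_q():
        return defaultdict(lambda: OPTIMISTIC_Q0)

    def greedy_action(Q, state):
        # Ties broken at random; no epsilon-greedy noise is needed because
        # optimism drives exploration.
        best = max(Q[(state, a)] for a in ACTIONS)
        return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

    def q_update(Q, s, a, r, s_next, done):
        # Standard one-step Q-learning backup.
        target = r if done else r + GAMMA * max(Q[(s_next, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])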

Original language: English
Title of host publication: Adaptive and Natural Computing Algorithms
Editors: M. Kolehmainen, P. Toivanen, B. Beliczynski
Place of publication: Berlin
Publisher: Springer
Pages: 360-369
Number of pages: 10
Volume: 5495 LNCS
ISBN (Print): 978-3-642-04920-0
Publication status: Published - 2009
Event: 9th International Conference on Adaptive and Natural Computing Algorithms (ICANNGA) - Kuopio
Duration: 23 Apr 2009 - 25 Apr 2009

Conference

Conference: 9th International Conference on Adaptive and Natural Computing Algorithms (ICANNGA)
City: Kuopio
Period: 23/04/09 - 25/04/09
