Abstract
The key problem in reinforcement learning is the exploration-exploitation tradeoff. Optimistic initialisation of the value function is a popular RL exploration strategy. The problem with this approach is that the algorithm may exhibit relatively poor performance even after many episodes of learning. In this paper, two extensions to standard optimistic exploration are proposed. The first is based on a different initialisation of the value function of goal states. The second, which builds on the first, explicitly separates the propagation of low and high values in the state space. The proposed extensions show improvements in empirical comparisons with basic optimistic initialisation. Additionally, they improve anytime performance and help in domains where learning takes place on a sub-space of a large state space, that is, where the standard optimistic approach faces more difficulties.
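For context, a minimal sketch of the baseline strategy the paper extends: tabular Q-learning in which every state-action value starts at an optimistic constant, so greedy action selection is pulled towards unvisited state-action pairs until their values are driven down by experience. The environment interface (a Gymnasium-style discrete environment), the hyperparameters, and the helper name `optimistic_q_learning` are illustrative assumptions, not part of the paper; the paper's two extensions (goal-state initialisation and separate propagation of low and high values) are not shown here.

```python
import numpy as np

def optimistic_q_learning(env, n_episodes=500, alpha=0.1, gamma=0.95, q_init=10.0):
    """Tabular Q-learning with optimistically initialised Q-values.

    All Q-values start at q_init, intended as an upper bound on the achievable
    return, so purely greedy action selection is drawn towards state-action
    pairs that have not yet been tried (illustrative sketch, not the paper's
    extended method).
    """
    Q = np.full((env.observation_space.n, env.action_space.n), q_init)
    for _ in range(n_episodes):
        state, _ = env.reset()
        done = False
        while not done:
            # Greedy selection; optimism alone drives exploration.
            action = int(np.argmax(Q[state]))
            next_state, reward, terminated, truncated, _ = env.step(action)
            done = terminated or truncated
            target = reward + (0.0 if terminated else gamma * np.max(Q[next_state]))
            Q[state, action] += alpha * (target - Q[state, action])
            state = next_state
    return Q
```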
Original language | English |
---|---|
Title of host publication | Adaptive and Natural Computing Algorithms |
Editors | M. Kolehmainen, P. Toivanen, B. Beliczynski |
Place of Publication | Berlin |
Publisher | Springer |
Pages | 360-369 |
Number of pages | 10 |
Volume | 5495 LNCS |
ISBN (Print) | 978-3-642-04920-0 |
Publication status | Published - 2009 |
Event | 9th International Conference on Adaptive and Natural Computing Algorithms (ICANNGA), Kuopio, 23 Apr 2009 → 25 Apr 2009 |