Overcoming incorrect knowledge in plan-based reward shaping

Research output: Contribution to journal › Article

Publication details

Journal: The Knowledge Engineering Review
Date: Published - 11 Feb 2016
Issue number: 1
Volume: 31
Number of pages: 13
Pages (from-to): 31-43
Original language: English

Abstract

Reward shaping has been shown to significantly improve an agent's performance in reinforcement learning. Plan-based reward shaping is a successful approach in which a STRIPS plan is used to guide the agent towards the optimal behaviour. However, if the provided knowledge is wrong, it has been shown that the agent takes longer to learn the optimal policy; in some cases it was previously better to ignore all prior knowledge, even when that knowledge was only partially incorrect. This paper introduces a novel use of knowledge revision to overcome incorrect domain knowledge provided to an agent receiving plan-based reward shaping. Empirical results show that an agent using this method can outperform a previous agent that receives plan-based reward shaping without knowledge revision.
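The idea behind plan-based reward shaping can be illustrated with a minimal sketch: a potential function assigns higher values to abstract states further along a plan, and the shaping term F(s, s') = γΦ(s') − Φ(s) is added to the environment reward. The plan, state names, and scaling constant below are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of plan-based (potential-based) reward shaping.
# plan_steps, OMEGA, and the state names are illustrative assumptions.

GAMMA = 0.99   # discount factor
OMEGA = 10.0   # scaling factor for the shaping potential (assumed)

# A toy STRIPS-style plan: the agent should pass through these
# abstract states in order. A state's potential is proportional to
# how far along the plan it lies; off-plan states get potential 0.
plan_steps = ["at_start", "has_key", "door_open", "at_goal"]

def potential(state):
    """Phi(s): higher for states further along the plan."""
    if state in plan_steps:
        return OMEGA * plan_steps.index(state)
    return 0.0

def shaped_reward(env_reward, state, next_state):
    """r + F(s, s'), where F = gamma * Phi(s') - Phi(s)."""
    return env_reward + GAMMA * potential(next_state) - potential(state)

# Progressing along the plan yields a positive shaping bonus...
print(shaped_reward(0.0, "at_start", "has_key"))   # 0 + 0.99*10 - 0  = 9.9
# ...while regressing along it is penalised.
print(shaped_reward(0.0, "has_key", "at_start"))   # 0 + 0.99*0 - 10 = -10.0
```

Because the shaping term is potential-based, it preserves the optimal policy of the underlying task; the failure mode the paper addresses is that a wrong plan (e.g. a misordered or irrelevant `plan_steps` list) still slows learning, which motivates revising the knowledge rather than discarding it.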
