Learning Task-Parameterized Skills From Few Demonstrations

Jihong Zhu*, Michael Gienger, Jens Kober

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Moving away from repetitive tasks, robots nowadays demand versatile skills that adapt to different situations. Task-parameterized learning improves the generalization of motion policies by encoding relevant contextual information in the task parameters, hence enabling flexible task executions. However, training such a policy often requires collecting multiple demonstrations in different situations, and comprehensively creating these situations is non-trivial, which renders the method less applicable to real-world problems. Training with fewer demonstrations and situations is therefore desirable. This paper presents a novel concept that augments the original training dataset with synthetic data to improve the policy, thus allowing task-parameterized skills to be learned from few demonstrations.
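The letter itself details the augmentation scheme; the sketch below is only a rough, self-contained illustration of the general idea, not the authors' method. It treats a reaching goal as the task parameter, creates synthetic demonstrations by re-targeting one real demonstration to sampled goals via an assumed per-axis affine transform, and fits a simple per-time-step linear policy on the combined real and synthetic data:

    import numpy as np

    rng = np.random.default_rng(0)

    def make_demo(goal, n=50):
        """One demonstration: a smooth reach from the origin to `goal`."""
        t = np.linspace(0.0, 1.0, n)
        s = 10 * t**3 - 15 * t**4 + 6 * t**5      # min-jerk-style 0 -> 1 profile
        return t, np.outer(s, goal)               # positions along the reach

    # The few-demonstrations regime: a single real demo for goal g0.
    g0 = np.array([1.0, 0.5])
    t, demo = make_demo(g0)
    data = [(g0, demo)]

    # Synthetic augmentation (assumption: a per-axis affine re-targeting
    # is valid here): re-express the demo under sampled task parameters.
    for _ in range(20):
        g_new = g0 + rng.normal(scale=0.3, size=2)   # sampled situation
        A = np.diag(g_new / g0)                      # per-axis scaling transform
        data.append((g_new, demo @ A.T))

    # Task-parameterized policy: per time step k, a linear map x_k ≈ g @ B_k.
    G = np.stack([g for g, _ in data])               # (m, 2) task parameters
    P = np.stack([traj for _, traj in data])         # (m, n, 2) trajectories
    B = [np.linalg.lstsq(G, P[:, k, :], rcond=None)[0] for k in range(len(t))]

    # Generalize to an unseen situation (task parameter not demonstrated).
    g_test = np.array([0.8, 0.7])
    pred = np.stack([g_test @ B[k] for k in range(len(t))])
    print("final position:", pred[-1], "target:", g_test)

In this toy setting the re-targeting is valid by construction; generating synthetic situations that actually improve the policy on real tasks is the harder problem the letter addresses.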

Original language: English
Pages (from-to): 4063-4070
Number of pages: 8
Journal: IEEE Robotics and Automation Letters
Volume: 7
Issue number: 2
Early online date: 11 Feb 2022
DOIs
Publication status: Published - 1 Apr 2022

Bibliographical note

Publisher Copyright:
© 2016 IEEE.

Keywords

  • Imitation learning
  • learning from demonstration
  • physically assistive devices
