An incremental approach to learning generalizable robot tasks from human demonstration

Amir M. Ghalamzan E., Chris Paxton, Gregory D. Hager, Luca Bascetta

Research output: Contribution to journal › Conference article › peer-review

37 Scopus citations

Abstract

Dynamic Movement Primitives (DMPs) are a common method for learning a control policy for a task from demonstration. The control policy consists of differential equations that generate a smooth trajectory to a new goal point. However, DMPs have only a limited ability to generalize a demonstration to new environments and to solve problems such as obstacle avoidance. Moreover, standard DMP learning does not cope with the noise inherent in human demonstrations. Here, we propose an approach for robot learning from demonstration that can generalize noisy task demonstrations to a new goal point and to an environment with obstacles. The resulting control policy incorporates several types of learning from demonstration, corresponding to the types of observational learning outlined in developmental psychology.
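As background to the abstract only, the sketch below shows a conventional single-degree-of-freedom discrete DMP (Ijspeert-style transformation and canonical systems with a learned forcing term), which is the standard formulation the paper builds on; it is not the paper's incremental, obstacle-aware extension. All parameter and variable names (alpha_z, beta_z, alpha_x, the basis weights w) are conventional choices assumed here, not taken from the paper.

```python
import numpy as np

class DMP1D:
    """Minimal single-DOF discrete DMP: fit to one demonstration, roll out to a new goal."""

    def __init__(self, n_basis=20, alpha_z=25.0, beta_z=6.25, alpha_x=1.0):
        self.alpha_z, self.beta_z, self.alpha_x = alpha_z, beta_z, alpha_x
        # Basis-function centers spread along the canonical phase x in (0, 1].
        self.c = np.exp(-alpha_x * np.linspace(0.0, 1.0, n_basis))
        self.h = 1.0 / np.diff(self.c, append=self.c[-1] * 0.5) ** 2
        self.w = np.zeros(n_basis)

    def fit(self, y_demo, dt):
        """Fit forcing-term weights to a demonstrated trajectory via locally weighted regression."""
        T = len(y_demo)
        tau = T * dt
        y0, g = y_demo[0], y_demo[-1]
        yd = np.gradient(y_demo, dt)
        ydd = np.gradient(yd, dt)
        x = np.exp(-self.alpha_x * np.arange(T) * dt / tau)   # canonical phase over time
        # Invert the transformation system to get the forcing term the demo implies.
        f_target = tau**2 * ydd - self.alpha_z * (self.beta_z * (g - y_demo) - tau * yd)
        psi = np.exp(-self.h * (x[:, None] - self.c) ** 2)    # (T, n_basis) basis activations
        s = x * (g - y0)
        for i in range(len(self.w)):                          # per-basis weighted least squares
            num = np.sum(s * psi[:, i] * f_target)
            den = np.sum(s * psi[:, i] * s) + 1e-10
            self.w[i] = num / den
        self.y0, self.g, self.tau = y0, g, tau

    def rollout(self, g=None, dt=0.01):
        """Integrate the DMP toward a (possibly new) goal g and return the generated trajectory."""
        g = self.g if g is None else g
        y, z, x = self.y0, 0.0, 1.0
        traj = []
        for _ in range(int(self.tau / dt)):
            psi = np.exp(-self.h * (x - self.c) ** 2)
            f = (psi @ self.w) / (psi.sum() + 1e-10) * x * (g - self.y0)
            zd = (self.alpha_z * (self.beta_z * (g - y) - z) + f) / self.tau
            yd = z / self.tau
            z += zd * dt
            y += yd * dt
            x += (-self.alpha_x * x / self.tau) * dt           # canonical system decays the phase
            traj.append(y)
        return np.array(traj)
```

Changing `g` at rollout time illustrates the goal generalization the abstract mentions; handling obstacles and noisy demonstrations is where the paper's incremental approach departs from this baseline.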

Original language: English (US)
Article number: 7139985
Pages (from-to): 5616-5621
Number of pages: 6
Journal: Proceedings - IEEE International Conference on Robotics and Automation
Volume: 2015-June
Issue number: June
DOIs
State: Published - Jun 29 2015
Event: 2015 IEEE International Conference on Robotics and Automation, ICRA 2015 - Seattle, United States
Duration: May 26 2015 - May 30 2015

ASJC Scopus subject areas

  • Software
  • Control and Systems Engineering
  • Artificial Intelligence
  • Electrical and Electronic Engineering
