Visual robot task planning

Chris Paxton, Yotam Barnoy, Kapil Katyal, Raman Arora, Gregory D. Hager

Research output: Contribution to journal › Article › peer-review

Abstract

Prospection, the act of predicting the consequences of many possible futures, is intrinsic to human planning and action, and may even be at the root of consciousness. Surprisingly, this idea has been explored comparatively little in robotics. In this work, we propose a neural network architecture and associated planning algorithm that (1) learns a representation of the world useful for generating prospective futures after the application of high-level actions, (2) uses this generative model to simulate the result of sequences of high-level actions in a variety of environments, and (3) uses this same representation to evaluate these actions and perform tree search to find a sequence of high-level actions in a new environment. Models are trained via imitation learning on a variety of domains, including navigation, pick-and-place, and a surgical robotics task. Our approach allows us to visualize intermediate motion goals and learn to plan complex activity from visual information.
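
To make the abstract's planning loop concrete, the sketch below illustrates the general pattern it describes: encode an observation into a learned representation, simulate candidate sequences of high-level actions with a generative transition model, score the resulting states, and search for the best action sequence. This is a minimal illustration only; `encode`, `predict`, `value`, and `plan` are hypothetical stand-ins (here stubbed with toy functions), not the paper's actual networks or API.

```python
import heapq
import numpy as np

# Hypothetical stand-ins for the learned components described in the
# abstract: an encoder, a generative transition model over high-level
# actions, and a value function for evaluating prospective futures.

def encode(obs):
    # Toy "representation": the observation itself as a latent vector.
    return np.asarray(obs, dtype=float)

def predict(latent, action):
    # Toy generative transition: shift the latent state per action.
    return latent + np.asarray(action, dtype=float)

def value(latent, goal):
    # Higher is better: negative distance to a goal latent.
    return -float(np.linalg.norm(latent - goal))

def plan(obs, actions, goal, horizon=3):
    """Best-first tree search over sequences of high-level actions,
    scoring each simulated (prospective) state with the value function."""
    root = encode(obs)
    # Max-heap via negated scores; the integer counter breaks ties so
    # heapq never compares latent arrays directly.
    frontier = [(-value(root, goal), 0, root, [])]
    best_plan, best_score, counter = [], -np.inf, 0
    while frontier:
        neg_score, _, latent, seq = heapq.heappop(frontier)
        if -neg_score > best_score:
            best_score, best_plan = -neg_score, seq
        if len(seq) >= horizon:
            continue  # do not expand beyond the planning horizon
        for action in actions:
            nxt = predict(latent, action)
            counter += 1
            heapq.heappush(
                frontier,
                (-value(nxt, goal), counter, nxt, seq + [action]),
            )
    return best_plan

if __name__ == "__main__":
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # abstract high-level actions
    print(plan(obs=(0, 0), actions=moves, goal=np.array([2.0, 1.0])))
```

In the paper's setting the stubbed functions would be neural networks trained by imitation learning, and the predicted latents are what allow the intermediate motion goals to be visualized; the search skeleton above is one common way to organize such a model-based planner.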

Original language: English (US)
Journal: Unknown Journal
State: Published - Mar 30 2018

ASJC Scopus subject areas

  • General
