Combining neural networks and tree search for task and motion planning in challenging environments

Chris Paxton, Vasumathi Raman, Gregory Hager, Marin Kobilarov

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Task and motion planning subject to Linear Temporal Logic (LTL) specifications in complex, dynamic environments requires efficient exploration of many possible future worlds. Model-free reinforcement learning has proven successful in a number of challenging tasks, but shows poor performance on tasks that require long-term planning. In this work, we integrate Monte Carlo Tree Search with hierarchical neural net policies trained on expressive LTL specifications. We use reinforcement learning to find deep neural networks representing both low-level control policies and task-level 'option policies' that achieve high-level goals. Our combined architecture generates safe and responsive motion plans that respect the LTL constraints. We demonstrate our approach in a simulated autonomous driving setting, where a vehicle must drive down a road in traffic, avoid collisions, and navigate an intersection, all while obeying rules of the road.
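The abstract outlines the core idea: a tree search over high-level options, each backed by a learned policy, with LTL constraints pruning unsafe branches. The sketch below is a minimal, illustrative Python rendering of that idea, not the authors' implementation: the option names, the toy one-dimensional driving state, the `step_option` stub standing in for a learned option policy, and the hard penalty standing in for an LTL monitor are all assumptions made for illustration.

```python
import math
import random

# Hypothetical high-level options; in the paper each would be a learned
# neural-net policy, here they are simple stubs over a toy 1-D driving state.
OPTIONS = ("follow_lane", "change_lane", "stop_at_intersection")

INTERSECTION_POS = 5.0


def step_option(state, option):
    """Stand-in for rolling out one option policy in simulation.

    state = (position, stopped); returns (next_state, reward, violated),
    where `violated` mimics an LTL monitor flagging e.g. "never cross the
    intersection without having stopped first".
    """
    pos, stopped = state
    if option == "stop_at_intersection":
        stopped = True
        reward = -0.1                              # small cost for waiting
    else:
        pos += 1.0
        reward = 1.0                               # progress down the road
    violated = pos >= INTERSECTION_POS and not stopped
    return (pos, stopped), reward, violated


class Node:
    def __init__(self, state=None):
        self.state = state
        self.children = {}                         # option name -> Node
        self.visits = 0
        self.value = 0.0


def select_option(node, c=1.4):
    """Plain UCT; the paper would also weight this score by the learned
    task-level policy's prior over options."""
    return max(
        node.children.items(),
        key=lambda kv: kv[1].value / (kv[1].visits + 1e-9)
        + c * math.sqrt(math.log(node.visits + 1) / (kv[1].visits + 1e-9)),
    )


def mcts(root_state, iterations=500, depth=6):
    root = Node(root_state)
    for _ in range(iterations):
        node, total = root, 0.0
        path = [root]
        for _ in range(depth):
            untried = [o for o in OPTIONS if o not in node.children]
            if untried:                            # expansion
                option = random.choice(untried)
                child = node.children.setdefault(option, Node())
            else:                                  # selection
                option, child = select_option(node)
            child.state, reward, violated = step_option(node.state, option)
            total += reward
            node = child
            path.append(node)
            if violated:                           # LTL violation: prune hard
                total = -10.0
                break
        for n in path:                             # backpropagation
            n.visits += 1
            n.value += total
    # Return the most-visited first option as the next high-level action.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]


if __name__ == "__main__":
    print("chosen option:", mcts(root_state=(0.0, False)))
```

In the paper itself, each option corresponds to a deep network trained with reinforcement learning against the LTL specification, and a learned task-level policy biases which options the tree search expands; the uniform random expansion above is only a placeholder for that prior.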

Original language: English (US)
Title of host publication: IROS 2017 - IEEE/RSJ International Conference on Intelligent Robots and Systems
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 6059-6066
Number of pages: 8
Volume: 2017-September
ISBN (Electronic): 9781538626825
DOI: 10.1109/IROS.2017.8206505
State: Published - Dec 13, 2017
Event: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2017 - Vancouver, Canada
Duration: Sep 24, 2017 - Sep 28, 2017


Fingerprint

  • Temporal logic
  • Motion planning
  • Reinforcement learning
  • Neural networks
  • Specifications
  • Level control
  • Planning

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Software
  • Computer Vision and Pattern Recognition
  • Computer Science Applications

Cite this

Paxton, C., Raman, V., Hager, G., & Kobilarov, M. (2017). Combining neural networks and tree search for task and motion planning in challenging environments. In IROS 2017 - IEEE/RSJ International Conference on Intelligent Robots and Systems (Vol. 2017-September, pp. 6059-6066). [8206505] Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/IROS.2017.8206505
