TY - JOUR
T1 - 'Good Robot!'
T2 - Efficient Reinforcement Learning for Multi-Step Visual Tasks with Sim to Real Transfer
AU - Hundt, Andrew
AU - Killeen, Benjamin
AU - Greene, Nicholas
AU - Wu, Hongtao
AU - Kwon, Heeyeon
AU - Paxton, Chris
AU - Hager, Gregory D.
N1 - Funding Information:
Manuscript received February 24, 2020; accepted July 20, 2020. Date of publication August 11, 2020; date of current version August 27, 2020. This letter was recommended for publication by Associate Editor J. Kober and Editor T. Asfour upon evaluation of the reviewers' comments. This work was supported by the NSF NRI Awards nos. 1637949 and 1763705, and in part by Office of Naval Research Award N00014-17-1-2124. (Corresponding author: Andrew Hundt.) Andrew Hundt, Benjamin Killeen, Nicholas Greene, Hongtao Wu, Heeyeon Kwon, and Gregory D. Hager are with The Johns Hopkins University, Baltimore, MD 21218 USA (e-mail: ahundt@jhu.edu; killeen@jhu.edu; ngreen29@jhu.edu; hwu67@jhu.edu; hkwon28@jhu.edu; hager@cs.jhu.edu).
Publisher Copyright:
© 2020 IEEE.
PY - 2020/10
Y1 - 2020/10
N2 - Current Reinforcement Learning (RL) algorithms struggle with long-horizon tasks where time can be wasted exploring dead ends and task progress may be easily reversed. We develop the SPOT framework, which explores within action safety zones, learns about unsafe regions without exploring them, and prioritizes experiences that reverse earlier progress to learn with remarkable efficiency. The SPOT framework successfully completes simulated trials of a variety of tasks, improving a baseline trial success rate from 13% to 100% when stacking 4 cubes, from 13% to 99% when creating rows of 4 cubes, and from 84% to 95% when clearing toys arranged in adversarial patterns. Efficiency with respect to actions per trial typically improves by 30% or more, while training takes just 1-20 k actions, depending on the task. Furthermore, we demonstrate direct sim to real transfer. We are able to create real stacks in 100% of trials with 61% efficiency and real rows in 100% of trials with 59% efficiency by directly loading the simulation-trained model on the real robot with no additional real-world fine-tuning. To our knowledge, this is the first instance of reinforcement learning with successful sim to real transfer applied to long term multi-step tasks such as block-stacking and row-making with consideration of progress reversal. Code is available at https://github.com/jhu-lcsr/good_robot.
KW - Computer vision for other robotic applications
KW - deep learning in grasping and manipulation
KW - reinforcement learning
UR - http://www.scopus.com/inward/record.url?scp=85089452039&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85089452039&partnerID=8YFLogxK
U2 - 10.1109/LRA.2020.3015448
DO - 10.1109/LRA.2020.3015448
M3 - Article
AN - SCOPUS:85089452039
SN - 2377-3766
VL - 5
SP - 6724
EP - 6731
JO - IEEE Robotics and Automation Letters
JF - IEEE Robotics and Automation Letters
IS - 4
M1 - 9165109
ER -