Dependable Neural Networks for Safety Critical Tasks

Molly O’Brien, William Goble, Greg Hager, Julia Bukowski

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Neural Networks are being integrated into safety critical systems, e.g., perception systems for autonomous vehicles, which require trained networks to perform safely in novel scenarios. It is challenging to verify neural networks because their decisions are not explainable, they cannot be exhaustively tested, and finite test samples cannot capture the variation across all operating conditions. Existing work seeks to train models robust to new scenarios via domain adaptation, style transfer, or few-shot learning. But these techniques fail to predict how a trained model will perform when the operating conditions differ from the testing conditions. We propose a metric, Machine Learning (ML) Dependability, that measures the network’s probability of success in specified operating conditions which need not be the testing conditions. In addition, we propose the metrics Task Undependability and Harmful Undependability to distinguish network failures by their consequences. We evaluate the performance of a Neural Network agent trained using Reinforcement Learning in a simulated robot manipulation task. Our results demonstrate that we can accurately predict the ML Dependability, Task Undependability, and Harmful Undependability for operating conditions that are significantly different from the testing conditions. Finally, we design a Safety Function, using harmful failures identified during testing, that reduces harmful failures, in one example, by a factor of 700 while maintaining a high probability of success.
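The abstract frames the three metrics as probabilities of trial outcomes: ML Dependability as the probability of success under specified operating conditions, with Task Undependability and Harmful Undependability splitting the failures by consequence. As a minimal sketch, assuming the metrics are estimated as empirical frequencies over labeled trials and assuming a three-way outcome taxonomy (success, benign task failure, harmful failure) that the paper's exact definitions may refine:

```python
# Hypothetical outcome labels; the paper's precise definitions may differ.
SUCCESS = "success"
TASK_FAILURE = "task_failure"        # failed the task without causing harm
HARMFUL_FAILURE = "harmful_failure"  # failed the task and caused harm

def dependability_metrics(outcomes):
    """Estimate ML Dependability, Task Undependability, and Harmful
    Undependability as empirical frequencies over a list of trial outcomes."""
    n = len(outcomes)
    ml_dependability = outcomes.count(SUCCESS) / n
    task_undependability = outcomes.count(TASK_FAILURE) / n
    harmful_undependability = outcomes.count(HARMFUL_FAILURE) / n
    return ml_dependability, task_undependability, harmful_undependability

# Example: 100 trials with 97 successes, 2 benign failures, 1 harmful failure.
trials = [SUCCESS] * 97 + [TASK_FAILURE] * 2 + [HARMFUL_FAILURE]
print(dependability_metrics(trials))  # -> (0.97, 0.02, 0.01)
```

By construction the three estimates sum to 1, matching the abstract's framing of the two undependability metrics as a partition of the network's failures.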

Original language: English (US)
Title of host publication: Engineering Dependable and Secure Machine Learning Systems - Third International Workshop, EDSMLS 2020, Revised Selected Papers
Editors: Onn Shehory, Eitan Farchi, Guy Barash
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 126-140
Number of pages: 15
ISBN (Print): 9783030621438
DOIs
State: Published - 2020
Event: 3rd International Workshop on Engineering Dependable and Secure Machine Learning Systems, EDSMLS 2020 - New York City, United States
Duration: Feb 7 2020 - Feb 7 2020

Publication series

Name: Communications in Computer and Information Science
Volume: 1272
ISSN (Print): 1865-0929
ISSN (Electronic): 1865-0937

Conference

Conference: 3rd International Workshop on Engineering Dependable and Secure Machine Learning Systems, EDSMLS 2020
Country/Territory: United States
City: New York City
Period: 2/7/20 - 2/7/20

Keywords

  • Machine learning testing and quality
  • Neural network dependability
  • Neural network safety
  • Reinforcement Learning

ASJC Scopus subject areas

  • General Computer Science
  • General Mathematics
