Automated surgical activity recognition with one labeled sequence

Robert DiPietro, Gregory D. Hager

Research output: Contribution to journal › Article › peer-review

Abstract

Prior work has demonstrated the feasibility of automated activity recognition in robot-assisted surgery from motion data. However, these efforts have assumed the availability of a large number of densely-annotated sequences, which must be provided manually by experts. This process is tedious, expensive, and error-prone. In this paper, we present the first analysis under the assumption of scarce annotations, where as few as one annotated sequence is available for training. We demonstrate the feasibility of automated recognition in this challenging setting, and we show that learning representations in an unsupervised fashion, before the recognition phase, leads to significant gains in performance. In addition, our paper poses a new challenge to the community: how much further can we push performance in this important yet relatively unexplored regime?
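The abstract describes a two-phase setup: representations are first learned from unlabeled motion sequences, and a recognizer is then trained using only a single annotated sequence. The sketch below illustrates that general idea under assumed choices (a GRU encoder, a next-frame prediction objective for the unsupervised phase, per-frame classification, and placeholder dimensions and data); it is not the authors' implementation.

```python
# Illustrative sketch of unsupervised representation learning followed by
# recognition from one labeled sequence. All dimensions, objectives, and data
# below are hypothetical placeholders, not the paper's actual method.
import torch
import torch.nn as nn

FEATURE_DIM = 16   # assumed kinematic feature dimension (e.g., robot motion signals)
HIDDEN_DIM = 64
NUM_CLASSES = 10   # assumed number of activity labels

class Encoder(nn.Module):
    """GRU encoder mapping a motion sequence to per-frame representations."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(FEATURE_DIM, HIDDEN_DIM, batch_first=True)

    def forward(self, x):          # x: (batch, time, FEATURE_DIM)
        h, _ = self.rnn(x)         # h: (batch, time, HIDDEN_DIM)
        return h

encoder = Encoder()
decoder = nn.Linear(HIDDEN_DIM, FEATURE_DIM)  # predicts the next kinematic frame

# Phase 1: unsupervised representation learning on unlabeled sequences.
# Next-frame prediction is used here as one common self-supervised objective.
unlabeled = [torch.randn(1, 200, FEATURE_DIM) for _ in range(8)]  # placeholder data
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
for seq in unlabeled:
    opt.zero_grad()
    h = encoder(seq[:, :-1])                                 # encode all but last frame
    loss = nn.functional.mse_loss(decoder(h), seq[:, 1:])    # predict the next frame
    loss.backward()
    opt.step()

# Phase 2: supervised recognition from a single labeled sequence,
# reusing the learned representations (encoder kept fixed in this sketch).
classifier = nn.Linear(HIDDEN_DIM, NUM_CLASSES)
labeled_seq = torch.randn(1, 200, FEATURE_DIM)               # the one annotated sequence
labels = torch.randint(0, NUM_CLASSES, (1, 200))             # per-frame activity labels
opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)
for _ in range(50):
    opt.zero_grad()
    with torch.no_grad():
        h = encoder(labeled_seq)
    logits = classifier(h)                                    # (1, 200, NUM_CLASSES)
    loss = nn.functional.cross_entropy(logits.view(-1, NUM_CLASSES), labels.view(-1))
    loss.backward()
    opt.step()
```

The key contrast the abstract draws is between training the recognizer directly on the single labeled sequence and first pretraining the encoder on unlabeled data; in the sketch above, that difference amounts to whether Phase 1 is run before Phase 2.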

Original language: English (US)
Journal: Unknown Journal
State: Published - Jul 20 2019

Keywords

  • Gesture Recognition
  • Maneuver Recognition
  • Semi-Supervised Learning
  • Surgical Activity Recognition

ASJC Scopus subject areas

  • General
