Automated Surgical Activity Recognition with One Labeled Sequence

Robert DiPietro, Gregory D. Hager

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Prior work has demonstrated the feasibility of automated activity recognition in robot-assisted surgery from motion data. However, these efforts have assumed the availability of a large number of densely-annotated sequences, which must be provided manually by experts. This process is tedious, expensive, and error-prone. In this paper, we present the first analysis under the assumption of scarce annotations, where as few as one annotated sequence is available for training. We demonstrate the feasibility of automated recognition in this challenging setting, and we show that learning representations in an unsupervised fashion, before the recognition phase, leads to significant gains in performance. In addition, our paper poses a new challenge to the community: how much further can we push performance in this important yet relatively unexplored regime?
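
The abstract describes a two-stage recipe: learn motion representations from unlabeled sequences first, then train a recognizer from a single annotated sequence. The sketch below illustrates that general idea in PyTorch on synthetic data; the GRU encoder trained by next-frame prediction, the linear per-frame classifier, and all dimensions and hyperparameters are illustrative assumptions, not the authors' exact method.

# Minimal sketch of the two-stage idea from the abstract (assumed architecture,
# synthetic data): (1) unsupervised representation learning on unlabeled motion
# sequences via next-frame prediction, (2) per-frame recognition trained on one
# labeled sequence using the frozen representation.
import torch
import torch.nn as nn

torch.manual_seed(0)
FEATURES, HIDDEN, CLASSES = 16, 64, 5   # kinematic channels, latent size, activity classes (illustrative)

# --- Stage 1: unsupervised representation learning on unlabeled motion data ---
encoder = nn.GRU(FEATURES, HIDDEN, batch_first=True)
decoder = nn.Linear(HIDDEN, FEATURES)   # predicts the next motion frame from the hidden state
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

unlabeled = torch.randn(32, 200, FEATURES)   # stand-in for many unlabeled sequences
for _ in range(5):                           # a few illustrative epochs
    hidden_states, _ = encoder(unlabeled[:, :-1])
    loss = nn.functional.mse_loss(decoder(hidden_states), unlabeled[:, 1:])
    opt.zero_grad()
    loss.backward()
    opt.step()

# --- Stage 2: supervised recognition from a single labeled sequence ---
classifier = nn.Linear(HIDDEN, CLASSES)      # per-frame activity classifier on top of frozen features
clf_opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)

labeled_seq = torch.randn(1, 200, FEATURES)  # the one annotated sequence
labels = torch.randint(0, CLASSES, (1, 200)) # dense per-frame activity labels
for _ in range(20):
    with torch.no_grad():                    # reuse the pretrained representation without updating it
        feats, _ = encoder(labeled_seq)
    logits = classifier(feats)
    loss = nn.functional.cross_entropy(logits.reshape(-1, CLASSES), labels.reshape(-1))
    clf_opt.zero_grad()
    loss.backward()
    clf_opt.step()

with torch.no_grad():
    final_logits = classifier(encoder(labeled_seq)[0])
print("per-frame accuracy on the training sequence:",
      (final_logits.argmax(-1) == labels).float().mean().item())

In practice the evaluation would of course use held-out annotated sequences; the point of the sketch is only the ordering of the two stages, with the unlabeled data doing most of the representational work.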

Original language: English (US)
Title of host publication: Medical Image Computing and Computer Assisted Intervention – MICCAI 2019 - 22nd International Conference, Proceedings
Editors: Dinggang Shen, Pew-Thian Yap, Tianming Liu, Terry M. Peters, Ali Khan, Lawrence H. Staib, Caroline Essert, Sean Zhou
Publisher: Springer
Pages: 458-466
Number of pages: 9
ISBN (Print): 9783030322533
DOI: 10.1007/978-3-030-32254-0_51
State: Published - Jan 1 2019
Event: 22nd International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2019 - Shenzhen, China
Duration: Oct 13 2019 – Oct 17 2019

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 11768 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 22nd International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2019
Country: China
City: Shenzhen
Period: 10/13/19 – 10/17/19

Keywords

  • Gesture recognition
  • Maneuver recognition
  • Semi-supervised learning
  • Surgical activity recognition

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science(all)

Cite this

DiPietro, R., & Hager, G. D. (2019). Automated Surgical Activity Recognition with One Labeled Sequence. In D. Shen, P-T. Yap, T. Liu, T. M. Peters, A. Khan, L. H. Staib, C. Essert, ... S. Zhou (Eds.), Medical Image Computing and Computer Assisted Intervention – MICCAI 2019 - 22nd International Conference, Proceedings (pp. 458-466). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 11768 LNCS). Springer. https://doi.org/10.1007/978-3-030-32254-0_51
