Surgical gesture segmentation and recognition.

Lingling Tao, Luca Zappella, Gregory Hager, René Vidal

Research output: Chapter in Book/Report/Conference proceeding › Chapter

Abstract

Automatic surgical gesture segmentation and recognition can provide useful feedback for surgical training in robotic surgery. Most prior work in this field relies on the robot's kinematic data. Although recent work [1,2] shows that the robot's video data can be equally effective for surgical gesture recognition, the segmentation of the video into gestures is assumed to be known. In this paper, we propose a framework for joint segmentation and recognition of surgical gestures from kinematic and video data. Unlike prior work that relies on either frame-level kinematic cues, or segment-level kinematic or video cues, our approach exploits both cues by using a combined Markov/semi-Markov conditional random field (MsM-CRF) model. Our experiments show that the proposed model improves over a Markov or semi-Markov CRF when using video data alone, gives results that are comparable to state-of-the-art methods on kinematic data alone, and improves over state-of-the-art methods when combining kinematic and video data.
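The combined Markov/semi-Markov decoding described in the abstract can be illustrated with a minimal segmental Viterbi sketch. This is not the authors' implementation — the function name, toy scores, and per-segment penalty below are all illustrative assumptions. The idea it demonstrates: frame-level cues contribute per-frame label scores (the Markov part), segment-level cues score each candidate segment as a whole (the semi-Markov part), and dynamic programming jointly chooses segment boundaries and gesture labels.

```python
import numpy as np

def msm_viterbi(frame_scores, seg_score, trans, max_len):
    """Segmental (semi-Markov) Viterbi decoding over a gesture sequence.

    frame_scores: (T, K) array; frame-level score of label k at frame t
                  (the Markov part of the combined objective).
    seg_score:    callable (start, end, k) -> float; score of assigning
                  label k to the whole segment [start, end)
                  (the semi-Markov part).
    trans:        (K, K) array; trans[a, b] scores gesture b following a.
    max_len:      maximum allowed segment length.

    Returns a list of (start, end, label) segments maximizing the total score.
    """
    T, K = frame_scores.shape
    # Cumulative frame scores allow O(1) per-label segment sums.
    cum = np.vstack([np.zeros((1, K)), np.cumsum(frame_scores, axis=0)])

    best = np.full((T + 1, K), -np.inf)  # best[t, k]: best score of a
    back = {}                            # labeling of [0, t) ending in label k
    best[0, :] = 0.0

    for t in range(1, T + 1):
        for L in range(1, min(max_len, t) + 1):
            s = t - L
            frame_sum = cum[t] - cum[s]          # (K,) per-label sums over [s, t)
            for k in range(K):
                unary = frame_sum[k] + seg_score(s, t, k)
                if s == 0:
                    cand, prev = unary, None      # first segment: no transition
                else:
                    j = int(np.argmax(best[s] + trans[:, k]))
                    cand, prev = best[s, j] + trans[j, k] + unary, j
                if cand > best[t, k]:
                    best[t, k] = cand
                    back[(t, k)] = (s, prev)

    # Backtrack from the best final label.
    k = int(np.argmax(best[T]))
    t, segs = T, []
    while t > 0:
        s, prev = back[(t, k)]
        segs.append((s, t, k))
        t, k = s, (prev if prev is not None else k)
    return segs[::-1]

# Toy example: 10 frames, 2 gesture labels; frames 0-4 favor label 0,
# frames 5-9 favor label 1; a small per-segment penalty discourages
# over-segmentation, so the decoder recovers one boundary at frame 5.
fs = np.zeros((10, 2))
fs[:5, 0] = 1.0
fs[5:, 1] = 1.0
print(msm_viterbi(fs, lambda s, t, k: -0.5, np.zeros((2, 2)), 10))
```

In the paper's setting the frame-level term would come from kinematic cues and the segment-level term from kinematic or video cues; here both are stand-in toy scores.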

Original language: English (US)
Title of host publication: Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention
Pages: 339-346
Number of pages: 8
Volume: 16
Edition: Pt 3
State: Published - 2013

Cite this

Tao, L., Zappella, L., Hager, G., & Vidal, R. (2013). Surgical gesture segmentation and recognition. In Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention (Pt 3 ed., Vol. 16, pp. 339-346).

PubMed ID: 24505779
Scopus record: http://www.scopus.com/inward/record.url?scp=84894606700&partnerID=8YFLogxK