Crowdsourcing annotation of surgical instruments in videos of cataract surgery

Tae Soo Kim, Anand Malpani, Austin Reiter, Gregory D. Hager, Shameema Sikder, S. Swaroop Vedula

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Automating objective assessment of surgical technical skill is necessary to support training and professional certification at scale, even in settings with limited access to an expert surgeon. Likewise, automated surgical activity recognition can improve operating room workflow efficiency, teaching, and self-review, and can aid clinical decision support systems. However, current supervised learning methods for these tasks rely on large training datasets, and crowdsourcing has become a standard way to curate such datasets at scale. Yet the use of crowdsourcing for surgical data annotation, and its effectiveness, has been studied in only a few settings. In this study, we evaluated the reliability and validity of crowdsourced annotations of surgical instruments (instrument names and pixel locations of key points on the instruments). For 200 images sampled from videos of two cataract surgery procedures, we collected 9 independent annotations per image. We observed an inter-rater agreement of 0.63 (Fleiss' kappa) and an accuracy of 0.88 for instrument identification compared against an expert annotation, and a mean pixel error of 5.77 pixels for annotation of instrument tip key points. Our study shows that crowdsourcing is a reliable and accurate alternative to expert annotation for identifying instruments and instrument tip key points in videos of cataract surgery.
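
The abstract reports two quantitative results: Fleiss' kappa for inter-rater agreement on instrument identity and a mean pixel error for the annotated instrument tip key points. The paper's own pipeline is not reproduced here; the sketch below is a hypothetical Python/NumPy illustration (all array names and numbers are made up, not the study's data) of the standard way these two metrics are computed from a count matrix of crowd labels and from paired crowd/expert key-point coordinates.

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for an (items x categories) matrix of rating counts.

    counts[i, j] = number of raters who assigned image i to instrument class j.
    Assumes every item was rated by the same number of raters
    (9 crowd workers per image in the study).
    """
    counts = np.asarray(counts, dtype=float)
    n_items, _ = counts.shape
    n_raters = counts.sum(axis=1)[0]
    # Per-item observed agreement.
    P_i = (np.square(counts).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    P_bar = P_i.mean()
    # Chance agreement from the marginal class proportions.
    p_j = counts.sum(axis=0) / (n_items * n_raters)
    P_e = np.square(p_j).sum()
    return (P_bar - P_e) / (1.0 - P_e)

def mean_pixel_error(crowd_tips, expert_tips):
    """Mean Euclidean distance in pixels between crowd and expert tip key points.

    Both inputs are (n, 2) arrays of (x, y) pixel coordinates.
    """
    diff = np.asarray(crowd_tips, dtype=float) - np.asarray(expert_tips, dtype=float)
    return np.linalg.norm(diff, axis=1).mean()

# Toy usage: 3 images, 2 instrument classes, 9 raters per image.
label_counts = [[9, 0], [7, 2], [1, 8]]
print(fleiss_kappa(label_counts))                                    # agreement on instrument identity
print(mean_pixel_error([[10, 12], [30, 31]], [[12, 15], [28, 30]]))  # key-point error in pixels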

Original language: English (US)
Title of host publication: Intravascular Imaging and Computer Assisted Stenting and Large-Scale Annotation of Biomedical Data and Expert Label Synthesis - 7th Joint International Workshop, CVII-STENT 2018 and Third International Workshop, LABELS 2018, Held in Conjunction with MICCAI 2018
Editors: Su-Lin Lee, Emanuele Trucco, Lena Maier-Hein, Stefano Moriconi, Shadi Albarqouni, Pierre Jannin, Simone Balocco, Guillaume Zahnd, Diana Mateus, Zeike Taylor, Stefanie Demirci, Danail Stoyanov, Raphael Sznitman, Anne Martel, Veronika Cheplygina, Eric Granger, Luc Duong
Publisher: Springer Verlag
Pages: 121-130
Number of pages: 10
ISBN (Print): 9783030013639
DOIs
State: Published - 2018
Event: 7th Joint International Workshop on Computing and Visualization for Intravascular Imaging and Computer Assisted Stenting, CVII-STENT 2018, and the 3rd International Workshop on Large-Scale Annotation of Biomedical Data and Expert Label Synthesis, LABELS 2018, held in conjunction with the 21st International Conference on Medical Imaging and Computer-Assisted Intervention, MICCAI 2018 - Granada, Spain
Duration: Sep 16 2018 - Sep 16 2018

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 11043 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Other

Other: 7th Joint International Workshop on Computing and Visualization for Intravascular Imaging and Computer Assisted Stenting, CVII-STENT 2018, and the 3rd International Workshop on Large-Scale Annotation of Biomedical Data and Expert Label Synthesis, LABELS 2018, held in conjunction with the 21st International Conference on Medical Imaging and Computer-Assisted Intervention, MICCAI 2018
Country/Territory: Spain
City: Granada
Period: 9/16/18 - 9/16/18

ASJC Scopus subject areas

  • Theoretical Computer Science
  • General Computer Science
