TY - GEN
T1 - Crowdsourcing annotation of surgical instruments in videos of cataract surgery
AU - Kim, Tae Soo
AU - Malpani, Anand
AU - Reiter, Austin
AU - Hager, Gregory D.
AU - Sikder, Shameema
AU - Swaroop Vedula, S.
N1 - Funding Information:
Funding. Wilmer Eye Institute Pooled Professor’s Fund and a grant to the Wilmer Eye Institute from Research to Prevent Blindness.
Publisher Copyright:
© Springer Nature Switzerland AG 2018.
PY - 2018
Y1 - 2018
N2 - Automating objective assessment of surgical technical skill is necessary to support training and professional certification at scale, even in settings with limited access to an expert surgeon. Likewise, automated surgical activity recognition can improve operating room workflow efficiency, teaching, and self-review, and can aid clinical decision support systems. However, current supervised learning methods for these tasks rely on large training datasets. Crowdsourcing has become a standard approach to curating such large training datasets in a scalable manner. The use of crowdsourcing in surgical data annotation and its effectiveness have been studied in only a few settings. In this study, we evaluated the reliability and validity of crowdsourced annotations of surgical instruments (instrument names and pixel locations of instrument key points). For 200 images sampled from videos of two cataract surgery procedures, we collected 9 independent annotations per image. We observed an inter-rater agreement of 0.63 (Fleiss’ kappa) and an accuracy of 0.88 for instrument identification compared against an expert annotation. We obtained a mean pixel error of 5.77 pixels for annotation of instrument tip key points. Our study shows that crowdsourcing is a reliable and accurate alternative to expert annotation for identifying instruments and instrument tip key points in videos of cataract surgery.
AB - Automating objective assessment of surgical technical skill is necessary to support training and professional certification at scale, even in settings with limited access to an expert surgeon. Likewise, automated surgical activity recognition can improve operating room workflow efficiency, teaching, and self-review, and can aid clinical decision support systems. However, current supervised learning methods for these tasks rely on large training datasets. Crowdsourcing has become a standard approach to curating such large training datasets in a scalable manner. The use of crowdsourcing in surgical data annotation and its effectiveness have been studied in only a few settings. In this study, we evaluated the reliability and validity of crowdsourced annotations of surgical instruments (instrument names and pixel locations of instrument key points). For 200 images sampled from videos of two cataract surgery procedures, we collected 9 independent annotations per image. We observed an inter-rater agreement of 0.63 (Fleiss’ kappa) and an accuracy of 0.88 for instrument identification compared against an expert annotation. We obtained a mean pixel error of 5.77 pixels for annotation of instrument tip key points. Our study shows that crowdsourcing is a reliable and accurate alternative to expert annotation for identifying instruments and instrument tip key points in videos of cataract surgery.
UR - http://www.scopus.com/inward/record.url?scp=85055803876&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85055803876&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-01364-6_14
DO - 10.1007/978-3-030-01364-6_14
M3 - Conference contribution
AN - SCOPUS:85055803876
SN - 9783030013639
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 121
EP - 130
BT - Intravascular Imaging and Computer Assisted Stenting and Large-Scale Annotation of Biomedical Data and Expert Label Synthesis - 7th Joint International Workshop, CVII-STENT 2018, and Third International Workshop, LABELS 2018, Held in Conjunction with MICCAI 2018
A2 - Lee, Su-Lin
A2 - Trucco, Emanuele
A2 - Maier-Hein, Lena
A2 - Moriconi, Stefano
A2 - Albarqouni, Shadi
A2 - Jannin, Pierre
A2 - Balocco, Simone
A2 - Zahnd, Guillaume
A2 - Mateus, Diana
A2 - Taylor, Zeike
A2 - Demirci, Stefanie
A2 - Stoyanov, Danail
A2 - Sznitman, Raphael
A2 - Martel, Anne
A2 - Cheplygina, Veronika
A2 - Granger, Eric
A2 - Duong, Luc
PB - Springer Verlag
T2 - 7th Joint International Workshop on Computing and Visualization for Intravascular Imaging and Computer Assisted Stenting, CVII-STENT 2018, and the 3rd International Workshop on Large-Scale Annotation of Biomedical Data and Expert Label Synthesis, LABELS 2018, held in conjunction with the 21st International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2018
Y2 - 16 September 2018 through 16 September 2018
ER -