Data-driven detection and registration of spine surgery instrumentation in intraoperative images

S. A. Doerr, A. Uneri, Y. Huang, C. K. Jones, X. Zhang, M. D. Ketcha, P. A. Helm, J. H. Siewerdsen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Purpose. Conventional model-based 3D-2D registration algorithms can be challenged by limited capture range, model validity, and stringent intraoperative runtime requirements. In this work, a deep convolutional neural network was used to provide robust initialization of a registration algorithm (known-component registration, KC-Reg) for 3D localization of spine surgery implants, combining the speed and global support of data-driven approaches with the previously demonstrated accuracy of model-based registration.

Methods. The approach uses a Faster R-CNN architecture to detect and localize spinal pedicle screws of broadly varying type and orientation in clinical images. Training data were generated using projections from 17 clinical cone-beam CT scans and a library of screw models to simulate implants. Network output was processed to provide screw count and 2D poses. The network was tested on two datasets of 2,000 images each, depicting real anatomy and realistic spine surgery instrumentation: one involving the same patient data as the training set (but with different screws, poses, image noise, and affine transformations) and one comprising five patients unseen in the training data. Device detection was quantified in terms of accuracy and specificity, and localization accuracy was evaluated in terms of intersection-over-union (IoU) and the distance between true and predicted bounding box coordinates.

Results. The overall accuracy of pedicle screw detection was ∼86.6% (85.3% for the same-patient dataset and 87.8% for the many-patient dataset), suggesting that the screw detection network performed reasonably well irrespective of disparate, complex anatomical backgrounds. The precision of screw detection was ∼92.6% (95.0% and 90.2% for the same-patient and many-patient datasets, respectively). Screw localization was accurate to within 1.5 mm (median difference in bounding box coordinates), and median IoU exceeded 0.85. For purposes of initializing a 3D-2D registration algorithm, this accuracy is well within the typical capture range of KC-Reg [1].

Conclusions. Initial evaluation of network performance indicates sufficient accuracy to integrate with algorithms for implant registration, guidance, and verification in spine surgery. Such capability is of potential use in surgical navigation, robotic assistance, and data-intensive analysis of implant placement in large retrospective datasets. Future work includes correspondence of multiple views, 3D localization, screw classification, and expansion of the training dataset to a broader variety of anatomical sites, numbers of screws, and types of implants.
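The localization metric used above, intersection-over-union, compares a predicted bounding box against its ground-truth box. A minimal sketch of the standard computation (the function name and `(x1, y1, x2, y2)` box convention are assumptions for illustration, not drawn from the paper):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle; width/height clamp to zero when the boxes do not overlap.
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Example: two unit-offset 2x2 boxes overlap in a 1x1 square, giving IoU = 1/7.
print(iou((0, 0, 2, 2), (1, 1, 3, 3)))
```

An IoU of 1.0 indicates a perfect match, 0.0 no overlap; the reported median IoU above 0.85 corresponds to tightly overlapping predicted and true boxes.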

Original language: English (US)
Title of host publication: Medical Imaging 2020
Subtitle of host publication: Image-Guided Procedures, Robotic Interventions, and Modeling
Editors: Baowei Fei, Cristian A. Linte
ISBN (Electronic): 9781510633971
State: Published - 2020
Event: Medical Imaging 2020: Image-Guided Procedures, Robotic Interventions, and Modeling - Houston, United States
Duration: Feb 16, 2020 - Feb 19, 2020

Publication series

Name: Proceedings of SPIE - The International Society for Optical Engineering
ISSN (Print): 0277-786X
ISSN (Electronic): 1996-756X


Conference: Medical Imaging 2020: Image-Guided Procedures, Robotic Interventions, and Modeling
Country/Territory: United States


Keywords

  • Deep learning
  • Image registration
  • Image-guided surgery
  • Intraoperative imaging
  • Spine surgery

ASJC Scopus subject areas

  • Electronic, Optical and Magnetic Materials
  • Condensed Matter Physics
  • Computer Science Applications
  • Applied Mathematics
  • Electrical and Electronic Engineering


