Deformable MR-CT image registration using an unsupervised, dual-channel network for neurosurgical guidance

R. Han, C. K. Jones, J. Lee, P. Wu, P. Vagdargi, A. Uneri, P. A. Helm, M. Luciano, W. S. Anderson, J. H. Siewerdsen

Research output: Contribution to journal › Article › peer-review

Abstract

Purpose: The accuracy of minimally invasive, intracranial neurosurgery can be challenged by deformation of brain tissue – e.g., up to 10 mm due to egress of cerebrospinal fluid during a neuroendoscopic approach. We report an unsupervised, deep learning-based registration framework to resolve such deformations between preoperative MR and intraoperative CT with fast runtime for neurosurgical guidance.

Method: The framework incorporates subnetworks for MR and CT image synthesis with a dual-channel registration subnetwork (with synthesis uncertainty providing spatially varying weights on the dual-channel loss) to estimate a diffeomorphic deformation field from both the MR and CT channels. An end-to-end training strategy is proposed that jointly optimizes the synthesis and registration subnetworks. The proposed framework was investigated using three datasets: (1) paired MR/CT with simulated deformations; (2) paired MR/CT with real deformations; and (3) a neurosurgery dataset with real deformations. Two state-of-the-art methods (Symmetric Normalization and VoxelMorph) were implemented as a basis of comparison, and variations of the proposed dual-channel network were investigated, including single-channel registration, fusion without uncertainty weighting, and conventional sequential training of the synthesis and registration subnetworks.

Results: The proposed method achieved: (1) Dice coefficient = 0.82 ± 0.07 and TRE = 1.2 ± 0.6 mm on paired MR/CT with simulated deformations; (2) Dice coefficient = 0.83 ± 0.07 and TRE = 1.4 ± 0.7 mm on paired MR/CT with real deformations; and (3) Dice coefficient = 0.79 ± 0.13 and TRE = 1.6 ± 1.0 mm on the neurosurgery dataset with real deformations. Dual-channel registration with uncertainty weighting demonstrated superior performance (e.g., TRE = 1.2 ± 0.6 mm) compared to single-channel registration (TRE = 1.6 ± 1.0 mm, p < 0.05, for the CT channel and TRE = 1.3 ± 0.7 mm for the MR channel) and dual-channel registration without uncertainty weighting (TRE = 1.4 ± 0.8 mm, p < 0.05). End-to-end training of the synthesis and registration subnetworks also improved performance compared to the conventional sequential training strategy (TRE = 1.3 ± 0.6 mm). Registration runtime with the proposed network was ∼3 s.

Conclusion: The deformable registration framework based on dual-channel MR/CT registration with spatially varying weights and end-to-end training achieved geometric accuracy and runtime superior to state-of-the-art baseline methods and various ablations of the proposed network. The accuracy and runtime of the method may be compatible with the requirements of high-precision neurosurgery.
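
The abstract does not give the loss formulation, so the following is only a minimal sketch of the idea of an uncertainty-weighted dual-channel loss, assuming a PyTorch implementation, a mean-squared-error similarity term in each (intra-modality) channel, weights of the form 1/(1 + σ) derived from the synthesis uncertainty maps, and a diffusion regularizer on the displacement field. All function and variable names are hypothetical and the exact formulation in the paper may differ.

```python
import torch


def uncertainty_weighted_dual_channel_loss(
    warped_mr,        # moving MR warped into the fixed frame,       (B, 1, D, H, W)
    synth_mr,         # MR synthesized from the fixed CT,            (B, 1, D, H, W)
    sigma_mr,         # voxel-wise uncertainty of the MR synthesis,  (B, 1, D, H, W)
    warped_synth_ct,  # CT synthesized from the moving MR, warped,   (B, 1, D, H, W)
    fixed_ct,         # intraoperative CT,                           (B, 1, D, H, W)
    sigma_ct,         # voxel-wise uncertainty of the CT synthesis,  (B, 1, D, H, W)
    disp,             # predicted displacement field,                (B, 3, D, H, W)
    smooth_weight=0.01,
):
    """Illustrative dual-channel loss with spatially varying uncertainty weights.

    Each channel compares images of the same modality; voxels with high
    synthesis uncertainty are down-weighted. A diffusion (gradient)
    regularizer encourages a smooth displacement field.
    """
    # Down-weight voxels where the synthesis subnetwork is uncertain.
    w_mr = 1.0 / (1.0 + sigma_mr)
    w_ct = 1.0 / (1.0 + sigma_ct)

    # Uncertainty-weighted mean-squared error in each channel.
    sim_mr = (w_mr * (warped_mr - synth_mr) ** 2).mean()
    sim_ct = (w_ct * (warped_synth_ct - fixed_ct) ** 2).mean()

    # Diffusion regularizer: squared finite differences of the displacement.
    grad = (
        (disp[:, :, 1:] - disp[:, :, :-1]).pow(2).mean()
        + (disp[:, :, :, 1:] - disp[:, :, :, :-1]).pow(2).mean()
        + (disp[:, :, :, :, 1:] - disp[:, :, :, :, :-1]).pow(2).mean()
    )
    return sim_mr + sim_ct + smooth_weight * grad


if __name__ == "__main__":
    # Smoke test with random volumes (shapes chosen arbitrarily).
    shape = (1, 1, 8, 16, 16)
    rand = lambda: torch.rand(shape)
    loss = uncertainty_weighted_dual_channel_loss(
        rand(), rand(), rand(), rand(), rand(), rand(),
        disp=torch.zeros(1, 3, 8, 16, 16),
    )
    print(loss.item())
```

The voxel-wise weighting is what distinguishes this from a plain dual-channel loss: where a synthesis subnetwork is unreliable (high σ), its channel contributes less to the similarity term, which is consistent with the ablation result above that uncertainty weighting outperforms unweighted fusion.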

Original language: English (US)
Article number: 102292
Journal: Medical Image Analysis
Volume: 75
DOIs
State: Published - Jan 2022

Keywords

  • Deformable registration
  • Image synthesis
  • Inter-modality registration
  • Unsupervised learning

ASJC Scopus subject areas

  • Radiological and Ultrasound Technology
  • Health Informatics
  • Radiology, Nuclear Medicine and Imaging
  • Computer Vision and Pattern Recognition
  • Computer Graphics and Computer-Aided Design
