TY - GEN
T1 - A deep learning-based approach to identify in vivo catheter tips during photoacoustic-guided cardiac interventions
AU - Allman, Derek
AU - Assis, Fabrizio
AU - Chrispin, Jonathan
AU - Lediju Bell, Muyinatu A.
N1 - Funding Information:
This work is partially supported by NIH Trailblazer Award R21-EB025621 and NSF CAREER Award 1751522.
Publisher Copyright:
© COPYRIGHT SPIE. Downloading of the abstract is permitted for personal use only.
PY - 2019
Y1 - 2019
N2 - Interventional cardiac procedures often require ionizing radiation to guide cardiac catheters to the heart. To reduce the associated risks of ionizing radiation, our group is exploring photoacoustic imaging in conjunction with robotic visual servoing, which requires segmentation of catheter tips. However, typical segmentation algorithms are susceptible to reflection artifacts. To address this challenge, signal sources can be identified in the presence of reflection artifacts using a deep neural network, as we previously demonstrated with a linear array ultrasound transducer. This paper extends our previous work to detect photoacoustic sources received by a phased array transducer, which is more common in cardiac applications. We trained a convolutional neural network (CNN) with simulated photoacoustic channel data to identify point sources. The network was tested with an independent simulated validation data set not included during training as well as in vivo data acquired during a pig catheterization procedure. When tested on the independent simulated validation data set, the CNN correctly classified 84.2% of sources with a misclassification rate of 0.01%, and the mean absolute location error of correctly classified sources was 0.095 mm and 0.462 mm in the axial and lateral dimensions, respectively. When applied to in vivo data, the network correctly classified 91.4% of sources with a 7.86% misclassification rate. These results indicate that a CNN is capable of identifying photoacoustic sources recorded by phased array transducers, which is promising for cardiac applications.
AB - Interventional cardiac procedures often require ionizing radiation to guide cardiac catheters to the heart. To reduce the associated risks of ionizing radiation, our group is exploring photoacoustic imaging in conjunction with robotic visual servoing, which requires segmentation of catheter tips. However, typical segmentation algorithms are susceptible to reflection artifacts. To address this challenge, signal sources can be identified in the presence of reflection artifacts using a deep neural network, as we previously demonstrated with a linear array ultrasound transducer. This paper extends our previous work to detect photoacoustic sources received by a phased array transducer, which is more common in cardiac applications. We trained a convolutional neural network (CNN) with simulated photoacoustic channel data to identify point sources. The network was tested with an independent simulated validation data set not included during training as well as in vivo data acquired during a pig catheterization procedure. When tested on the independent simulated validation data set, the CNN correctly classified 84.2% of sources with a misclassification rate of 0.01%, and the mean absolute location error of correctly classified sources was 0.095 mm and 0.462 mm in the axial and lateral dimensions, respectively. When applied to in vivo data, the network correctly classified 91.4% of sources with a 7.86% misclassification rate. These results indicate that a CNN is capable of identifying photoacoustic sources recorded by phased array transducers, which is promising for cardiac applications.
UR - http://www.scopus.com/inward/record.url?scp=85065189260&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85065189260&partnerID=8YFLogxK
U2 - 10.1117/12.2510993
DO - 10.1117/12.2510993
M3 - Conference contribution
AN - SCOPUS:85065189260
T3 - Progress in Biomedical Optics and Imaging - Proceedings of SPIE
BT - Photons Plus Ultrasound
A2 - Oraevsky, Alexander A.
A2 - Wang, Lihong V.
PB - SPIE
T2 - Photons Plus Ultrasound: Imaging and Sensing 2019
Y2 - 3 February 2019 through 6 February 2019
ER -