TY - JOUR
T1 - Surgical scene generation and adversarial networks for physics-based iOCT synthesis
AU - Sommersperger, Michael
AU - Martin-Gomez, Alejandro
AU - Mach, Kristina
AU - Gehlbach, Peter Louis
AU - Nasseri, M. Ali
AU - Iordachita, Iulian
AU - Navab, Nassir
N1 - Funding Information:
National Institutes of Health (1R01EB025883-01A1).
Publisher Copyright:
© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement
PY - 2022/4/1
Y1 - 2022/4/1
N2 - The development and integration of intraoperative optical coherence tomography (iOCT) into modern operating rooms has motivated novel procedures directed at improving the outcome of ophthalmic surgeries. Although computer-assisted algorithms could further advance such interventions, the limited availability and accessibility of iOCT systems constrains the generation of dedicated data sets. This paper introduces a novel framework combining a virtual setup and deep learning algorithms to generate synthetic iOCT data in a simulated environment. The virtual setup reproduces the geometry of retinal layers extracted from real data and allows the integration of virtual microsurgical instrument models. Our scene rendering approach extracts information from the environment and considers typical iOCT imaging artifacts to generate cross-sectional label maps, which in turn are used to synthesize iOCT B-scans via a generative adversarial network. In our experiments, we investigate the similarity between real and synthetic images, show the relevance of using the generated data for image-guided interventions, and demonstrate the potential of 3D iOCT data synthesis.
UR - http://www.scopus.com/inward/record.url?scp=85128177835&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85128177835&partnerID=8YFLogxK
U2 - 10.1364/BOE.454286
DO - 10.1364/BOE.454286
M3 - Article
C2 - 35519277
AN - SCOPUS:85128177835
SN - 2156-7085
VL - 13
SP - 2414
EP - 2430
JO - Biomedical Optics Express
JF - Biomedical Optics Express
IS - 4
ER -