TY - GEN
T1 - Self domain adapted network
AU - He, Yufan
AU - Carass, Aaron
AU - Zuo, Lianrui
AU - Dewey, Blake E.
AU - Prince, Jerry L.
N1 - Funding Information:
Acknowledgments. This work was supported by NIH grants R01-EY024655 (PI: J.L. Prince) and R01-NS082347 (PI: P.A. Calabresi), and in part by the Intramural Research Program of the NIH, National Institute on Aging.
Publisher Copyright:
© Springer Nature Switzerland AG 2020.
PY - 2020
Y1 - 2020
N2 - Domain shift is a major problem for deploying deep networks in clinical practice. Network performance drops significantly on (target) images acquired differently from the (source) training data. Due to a lack of target label data, most work has focused on unsupervised domain adaptation (UDA). Current UDA methods need both source and target data to train models that perform image translation (harmonization) or learn domain-invariant features. However, training a model for each target domain is time-consuming and computationally expensive, and even infeasible when target domain data are scarce or source data are unavailable due to data privacy. In this paper, we propose a novel self domain adapted network (SDA-Net) that can rapidly adapt itself to a single test subject at the testing stage, without using extra data or training a UDA model. The SDA-Net consists of three parts: adaptors, a task model, and auto-encoders. The latter two are pre-trained offline on labeled source images. The task model performs tasks like synthesis, segmentation, or classification, which may suffer from the domain shift problem. At the testing stage, the adaptors are trained to transform the input test image and features to reduce the domain shift as measured by the auto-encoders, and thus perform domain adaptation. We validated our method on retinal layer segmentation from different OCT scanners and on T1-to-T2 synthesis with T1 images from different MRI scanners and with different imaging parameters. Results show that our SDA-Net, with a single test subject and a short amount of time for self-adaptation at the testing stage, can achieve significant improvements.
AB - Domain shift is a major problem for deploying deep networks in clinical practice. Network performance drops significantly on (target) images acquired differently from the (source) training data. Due to a lack of target label data, most work has focused on unsupervised domain adaptation (UDA). Current UDA methods need both source and target data to train models that perform image translation (harmonization) or learn domain-invariant features. However, training a model for each target domain is time-consuming and computationally expensive, and even infeasible when target domain data are scarce or source data are unavailable due to data privacy. In this paper, we propose a novel self domain adapted network (SDA-Net) that can rapidly adapt itself to a single test subject at the testing stage, without using extra data or training a UDA model. The SDA-Net consists of three parts: adaptors, a task model, and auto-encoders. The latter two are pre-trained offline on labeled source images. The task model performs tasks like synthesis, segmentation, or classification, which may suffer from the domain shift problem. At the testing stage, the adaptors are trained to transform the input test image and features to reduce the domain shift as measured by the auto-encoders, and thus perform domain adaptation. We validated our method on retinal layer segmentation from different OCT scanners and on T1-to-T2 synthesis with T1 images from different MRI scanners and with different imaging parameters. Results show that our SDA-Net, with a single test subject and a short amount of time for self-adaptation at the testing stage, can achieve significant improvements.
KW - Segmentation
KW - Self-supervised learning
KW - Synthesis
KW - Unsupervised domain adaptation
UR - http://www.scopus.com/inward/record.url?scp=85093067941&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85093067941&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-59710-8_43
DO - 10.1007/978-3-030-59710-8_43
M3 - Conference contribution
AN - SCOPUS:85093067941
SN - 9783030597092
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 437
EP - 446
BT - Medical Image Computing and Computer Assisted Intervention – MICCAI 2020 - 23rd International Conference, Proceedings
A2 - Martel, Anne L.
A2 - Abolmaesumi, Purang
A2 - Stoyanov, Danail
A2 - Mateus, Diana
A2 - Zuluaga, Maria A.
A2 - Zhou, S. Kevin
A2 - Racoceanu, Daniel
A2 - Joskowicz, Leo
PB - Springer Science and Business Media Deutschland GmbH
T2 - 23rd International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2020
Y2 - 4 October 2020 through 8 October 2020
ER -