Domain adaptive relational reasoning for 3D multi-organ segmentation

Shuhao Fu, Yongyi Lu, Yan Wang, Yuyin Zhou, Wei Shen, Elliot Fishman, Alan Yuille

Research output: Contribution to journal › Article › peer-review

Abstract

In this paper, we present a novel unsupervised domain adaptation (UDA) method, named Domain Adaptive Relational Reasoning (DARR), to generalize 3D multi-organ segmentation models to medical data collected from different scanners and/or protocols (domains). Our method is inspired by the fact that the spatial relationship between internal structures in medical images is relatively fixed, e.g., a spleen is always located at the tail of a pancreas, which serves as a latent variable to transfer the knowledge shared across multiple domains. We formulate the spatial relationship by solving a jigsaw puzzle task, i.e., recovering a CT scan from its shuffled patches, and jointly train it with the organ segmentation task. To guarantee the transferability of the learned spatial relationship to multiple domains, we additionally introduce two schemes: 1) employing a super-resolution network, also jointly trained with the segmentation model, to standardize medical images from different domains to a common spatial resolution; 2) adapting the spatial relationship for a test image by test-time jigsaw puzzle training. Experimental results show that our method improves performance on target datasets by 29.60% DSC on average without using any data from the target domain during training.
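To make the joint-training idea concrete, below is a minimal PyTorch sketch of how a shared 3D encoder could be trained with a segmentation head and a jigsaw-permutation head, plus a test-time adaptation step driven only by the self-supervised jigsaw loss. All names here (DARRSketch, shuffle_patches, train_step, test_time_adapt, the tiny encoder, the 2x2x2 grid and permutation set) are illustrative assumptions for exposition, not the authors' released code, and the super-resolution branch is omitted.

# Minimal sketch, assuming a shared 3D encoder with two heads:
# a per-voxel segmentation head and a jigsaw-permutation classifier.
import itertools
import random
import torch
import torch.nn as nn
import torch.nn.functional as F


def shuffle_patches(volume, permutations, grid=2):
    """Cut a (B, C, D, H, W) volume into grid**3 patches and reorder them.

    Returns the shuffled volume and the permutation index, which serves
    as the classification label for the jigsaw task.
    """
    b, c, d, h, w = volume.shape
    pd, ph, pw = d // grid, h // grid, w // grid
    patches = [
        volume[:, :, i * pd:(i + 1) * pd, j * ph:(j + 1) * ph, k * pw:(k + 1) * pw]
        for i in range(grid) for j in range(grid) for k in range(grid)
    ]
    label = random.randrange(len(permutations))
    shuffled = [patches[o] for o in permutations[label]]
    # Reassemble the shuffled patches into a full volume.
    slabs = []
    for i in range(grid):
        strips = []
        for j in range(grid):
            start = (i * grid + j) * grid
            strips.append(torch.cat(shuffled[start:start + grid], dim=4))  # along W
        slabs.append(torch.cat(strips, dim=3))                             # along H
    return torch.cat(slabs, dim=2), torch.tensor([label] * b)              # along D


class DARRSketch(nn.Module):
    def __init__(self, n_classes=9, n_perms=10):
        super().__init__()
        # Tiny encoder standing in for the real 3D segmentation backbone.
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv3d(16, n_classes, 1)   # per-voxel organ logits
        self.jigsaw_head = nn.Linear(16, n_perms)     # which permutation was applied

    def forward(self, x):
        feat = self.encoder(x)
        seg = self.seg_head(feat)
        jig = self.jigsaw_head(feat.mean(dim=(2, 3, 4)))  # global pooling -> permutation logits
        return seg, jig


def train_step(model, opt, volume, seg_target, permutations, lam=0.1):
    # Joint objective on source data: supervised segmentation + jigsaw recovery.
    shuffled, perm_label = shuffle_patches(volume, permutations)
    seg_pred, _ = model(volume)
    _, jig_pred = model(shuffled)
    loss = F.cross_entropy(seg_pred, seg_target) + lam * F.cross_entropy(jig_pred, perm_label)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()


def test_time_adapt(model, opt, volume, permutations, steps=5):
    # On a target-domain test image only the self-supervised jigsaw loss is
    # available; a few gradient steps adapt the shared features before predicting.
    for _ in range(steps):
        shuffled, perm_label = shuffle_patches(volume, permutations)
        _, jig_pred = model(shuffled)
        loss = F.cross_entropy(jig_pred, perm_label)
        opt.zero_grad()
        loss.backward()
        opt.step()


# Example usage with a fixed permutation set for a 2x2x2 patch grid (8 patches).
permutations = random.sample(list(itertools.permutations(range(8))), 10)
model = DARRSketch()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
volume = torch.randn(1, 1, 32, 32, 32)
seg_target = torch.randint(0, 9, (1, 32, 32, 32))
train_step(model, opt, volume, seg_target, permutations)
test_time_adapt(model, opt, volume, permutations)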

Original language: English (US)
Journal: Unknown Journal
State: Published - May 18, 2020

Keywords

  • Multi-organ segmentation
  • Relational reasoning
  • Unsupervised domain adaptation

ASJC Scopus subject areas

  • General

