A physics-guided modular deep-learning based automated framework for tumor segmentation in PET images

Kevin H. Leung, Wael Marashdeh, Rick Wray, Saeed Ashrafinia, Martin G. Pomper, Arman Rahmim, Abhinav Kumar Jha

Research output: Contribution to journal › Article › peer-review

Abstract

An important need exists for reliable PET tumor-segmentation methods for tasks such as PET-based radiation-therapy planning and robust quantification of volumetric and radiomic features. The purpose of this study was to develop an automated physics-guided deep-learning-based PET tumor-segmentation framework that addresses the challenges of limited spatial resolution, high image noise, and the lack of clinical training data with ground-truth tumor boundaries in PET imaging. We propose a three-module PET-segmentation framework in the context of segmenting primary tumors in 3D 18F-fluorodeoxyglucose (FDG)-PET images of patients with lung cancer on a per-slice basis. The first module generates PET images containing highly realistic tumors with known ground truth using a new stochastic and physics-based approach, addressing the lack of training data. The second module trains a modified U-net on these images, helping it learn the tumor-segmentation task. The third module fine-tunes this network using a small clinical dataset with radiologist-defined delineations as surrogate ground truth, helping the framework learn features potentially missed in simulated tumors. The framework's accuracy, generalizability to different scanners, sensitivity to partial volume effects (PVEs), and efficacy in reducing the number of required training images were quantitatively evaluated using the Dice similarity coefficient (DSC) and several other metrics. The framework yielded reliable performance on both simulated (DSC: 0.87; 95% CI: 0.86, 0.88) and patient images (DSC: 0.73; 95% CI: 0.71, 0.76), outperformed several widely used semi-automated approaches, accurately segmented relatively small tumors (smallest segmented cross-section: 1.83 cm²), generalized across five PET scanners (DSC: 0.74; 95% CI: 0.71, 0.76), was relatively unaffected by PVEs, and required little training data (training with data from even 30 patients yielded a DSC of 0.70; 95% CI: 0.68, 0.71). This modular deep-learning-based framework yielded reliable automated tumor delineation in FDG-PET images of patients with lung cancer using a small clinical training dataset, generalized across scanners, and demonstrated the ability to segment small tumors.
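The two-stage training strategy in modules 2 and 3 (pretraining a segmentation network on simulated images with known ground truth, then fine-tuning on a small clinical set with radiologist delineations) and the DSC evaluation metric can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch example, not the authors' code: the names unet, simulated_loader, and clinical_loader are assumed to be defined elsewhere, and the soft Dice loss and hyperparameters are illustrative placeholders.

# Minimal sketch of pretraining on simulated data, then fine-tuning on a
# small clinical set, with DSC as the evaluation metric. Illustrative only;
# model, loaders, loss choice, and hyperparameters are assumptions.
import torch
import torch.nn as nn

def dice_coefficient(pred: torch.Tensor, target: torch.Tensor,
                     eps: float = 1e-6) -> torch.Tensor:
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    pred = pred.float().flatten()
    target = target.float().flatten()
    intersection = (pred * target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

class DiceLoss(nn.Module):
    """Soft Dice loss on sigmoid probabilities; 0 means a perfect overlap."""
    def forward(self, logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        return 1.0 - dice_coefficient(torch.sigmoid(logits), target)

def train_stage(model: nn.Module, loader, epochs: int, lr: float) -> None:
    """Generic training loop reused for both pretraining and fine-tuning."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = DiceLoss()
    model.train()
    for _ in range(epochs):
        for images, masks in loader:
            opt.zero_grad()
            loss = loss_fn(model(images), masks)
            loss.backward()
            opt.step()

# Hypothetical usage, mirroring modules 2 and 3 of the framework:
# train_stage(unet, simulated_loader, epochs=50, lr=1e-3)  # module 2: pretrain
# train_stage(unet, clinical_loader, epochs=10, lr=1e-4)   # module 3: fine-tune

In this sketch the fine-tuning stage simply continues training the same weights at a lower learning rate, which is one common way to adapt a network pretrained on simulated data to a small clinical dataset; the paper's actual fine-tuning procedure may differ.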

Original language: English (US)
Journal: Unknown Journal
State: Published - Feb 18, 2020

Keywords

  • Automated segmentation
  • Deep learning
  • Oncology
  • Partial volume effects
  • PET

ASJC Scopus subject areas

  • General
