Automated semantic labeling of pediatric musculoskeletal radiographs using deep learning

Paul H. Yi, Tae Kyung Kim, Jinchi Wei, Jiwon Shin, Ferdinand Hui, Haris Sair, Gregory Hager, Jan Fritz

Research output: Contribution to journal › Article

Abstract

Background: An automated method for identifying the anatomical region of an image independent of metadata labels could improve radiologist workflow (e.g., automated hanging protocols) and help facilitate the automated curation of large medical imaging data sets for machine learning purposes. Deep learning is a potential tool for this purpose. Objective: To develop and test the performance of deep convolutional neural networks (DCNNs) for the automated classification of pediatric musculoskeletal radiographs by anatomical area. Materials and methods: We used a database of 250 pediatric bone radiographs (50 each of the shoulder, elbow, hand, pelvis and knee) to train five DCNNs, one to detect each anatomical region among the others, based on ResNet-18 pretrained on ImageNet (transfer learning). For each DCNN, the radiographs were randomly split into training (64%), validation (12%) and test (24%) data sets. The training and validation data sets were augmented 30-fold using standard preprocessing methods. We also tested our DCNNs on a separate test set of 100 radiographs from a single institution. Receiver operating characteristic (ROC) curves with area under the curve (AUC) were used to evaluate DCNN performance. Results: All five DCNNs achieved an ROC AUC of 1 on both test sets. Classification of the test radiographs occurred at a rate of 33 radiographs per second. Conclusion: DCNNs trained on a small set of images with 30-fold augmentation through standard preprocessing techniques can automatically classify pediatric musculoskeletal radiographs by anatomical region with near-perfect to perfect accuracy at superhuman speed. This concept may extend to other body parts and radiographic views, with the potential to create an all-encompassing semantic-labeling DCNN.
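The evaluation pipeline the abstract describes — a random 64%/12%/24% split of 250 radiographs and per-class ROC AUC scoring — can be sketched in plain Python. This is an illustrative sketch, not the authors' code: the function names `split_indices` and `roc_auc` and the toy scores are invented for demonstration, and the AUC is computed via the rank-sum (Mann-Whitney) formulation rather than any particular library.

```python
import random

def split_indices(n, train=0.64, val=0.12, seed=0):
    """Randomly split n items into train/validation/test index lists
    using the paper's 64%/12%/24% proportions."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    n_train = round(n * train)
    n_val = round(n * val)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

def roc_auc(scores, labels):
    """ROC AUC via the Mann-Whitney formulation: the probability that a
    randomly chosen positive scores higher than a randomly chosen
    negative, counting ties as 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

train, val, test = split_indices(250)
print(len(train), len(val), len(test))  # 160 30 60 radiographs

# A perfectly separating one-vs-rest classifier (every in-class image
# scored above every out-of-class image) yields AUC = 1.0, the value
# reported for all five networks.
print(roc_auc([0.9, 0.8, 0.7, 0.2, 0.1], [1, 1, 1, 0, 0]))  # 1.0
```

An AUC of exactly 1 means the score distributions of the positive and negative classes do not overlap at all, so some decision threshold classifies every test image correctly.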

Original language: English (US)
Journal: Pediatric Radiology
DOI: 10.1007/s00247-019-04408-2
State: Published - Jan 1 2019

Keywords

  • Artificial intelligence
  • Children
  • Deep learning
  • Machine learning
  • Musculoskeletal
  • Radiography
  • Semantic labeling

ASJC Scopus subject areas

  • Pediatrics, Perinatology, and Child Health
  • Radiology Nuclear Medicine and imaging

Cite this

Automated semantic labeling of pediatric musculoskeletal radiographs using deep learning. / Yi, Paul H.; Kim, Tae Kyung; Wei, Jinchi; Shin, Jiwon; Hui, Ferdinand; Sair, Haris; Hager, Gregory; Fritz, Jan.

In: Pediatric radiology, 01.01.2019.

@article{9be84d0af658401c8124a6b5c7e20e0a,
title = "Automated semantic labeling of pediatric musculoskeletal radiographs using deep learning",
abstract = "Background: An automated method for identifying the anatomical region of an image independent of metadata labels could improve radiologist workflow (e.g., automated hanging protocols) and help facilitate the automated curation of large medical imaging data sets for machine learning purposes. Deep learning is a potential tool for this purpose. Objective: To develop and test the performance of deep convolutional neural networks (DCNNs) for the automated classification of pediatric musculoskeletal radiographs by anatomical area. Materials and methods: We used a database of 250 pediatric bone radiographs (50 each of the shoulder, elbow, hand, pelvis and knee) to train five DCNNs, one to detect each anatomical region among the others, based on ResNet-18 pretrained on ImageNet (transfer learning). For each DCNN, the radiographs were randomly split into training (64{\%}), validation (12{\%}) and test (24{\%}) data sets. The training and validation data sets were augmented 30-fold using standard preprocessing methods. We also tested our DCNNs on a separate test set of 100 radiographs from a single institution. Receiver operating characteristic (ROC) curves with area under the curve (AUC) were used to evaluate DCNN performance. Results: All five DCNNs achieved an ROC AUC of 1 on both test sets. Classification of the test radiographs occurred at a rate of 33 radiographs per second. Conclusion: DCNNs trained on a small set of images with 30-fold augmentation through standard preprocessing techniques can automatically classify pediatric musculoskeletal radiographs by anatomical region with near-perfect to perfect accuracy at superhuman speed. This concept may extend to other body parts and radiographic views, with the potential to create an all-encompassing semantic-labeling DCNN.",
keywords = "Artificial intelligence, Children, Deep learning, Machine learning, Musculoskeletal, Radiography, Semantic labeling",
author = "Yi, {Paul H.} and Kim, {Tae Kyung} and Jinchi Wei and Jiwon Shin and Ferdinand Hui and Haris Sair and Gregory Hager and Jan Fritz",
year = "2019",
month = "1",
day = "1",
doi = "10.1007/s00247-019-04408-2",
language = "English (US)",
journal = "Pediatric Radiology",
issn = "0301-0449",
publisher = "Springer Verlag",

}

TY - JOUR

T1 - Automated semantic labeling of pediatric musculoskeletal radiographs using deep learning

AU - Yi, Paul H.

AU - Kim, Tae Kyung

AU - Wei, Jinchi

AU - Shin, Jiwon

AU - Hui, Ferdinand

AU - Sair, Haris

AU - Hager, Gregory

AU - Fritz, Jan

PY - 2019/1/1

Y1 - 2019/1/1

N2 - Background: An automated method for identifying the anatomical region of an image independent of metadata labels could improve radiologist workflow (e.g., automated hanging protocols) and help facilitate the automated curation of large medical imaging data sets for machine learning purposes. Deep learning is a potential tool for this purpose. Objective: To develop and test the performance of deep convolutional neural networks (DCNNs) for the automated classification of pediatric musculoskeletal radiographs by anatomical area. Materials and methods: We used a database of 250 pediatric bone radiographs (50 each of the shoulder, elbow, hand, pelvis and knee) to train five DCNNs, one to detect each anatomical region among the others, based on ResNet-18 pretrained on ImageNet (transfer learning). For each DCNN, the radiographs were randomly split into training (64%), validation (12%) and test (24%) data sets. The training and validation data sets were augmented 30-fold using standard preprocessing methods. We also tested our DCNNs on a separate test set of 100 radiographs from a single institution. Receiver operating characteristic (ROC) curves with area under the curve (AUC) were used to evaluate DCNN performance. Results: All five DCNNs achieved an ROC AUC of 1 on both test sets. Classification of the test radiographs occurred at a rate of 33 radiographs per second. Conclusion: DCNNs trained on a small set of images with 30-fold augmentation through standard preprocessing techniques can automatically classify pediatric musculoskeletal radiographs by anatomical region with near-perfect to perfect accuracy at superhuman speed. This concept may extend to other body parts and radiographic views, with the potential to create an all-encompassing semantic-labeling DCNN.

AB - Background: An automated method for identifying the anatomical region of an image independent of metadata labels could improve radiologist workflow (e.g., automated hanging protocols) and help facilitate the automated curation of large medical imaging data sets for machine learning purposes. Deep learning is a potential tool for this purpose. Objective: To develop and test the performance of deep convolutional neural networks (DCNNs) for the automated classification of pediatric musculoskeletal radiographs by anatomical area. Materials and methods: We used a database of 250 pediatric bone radiographs (50 each of the shoulder, elbow, hand, pelvis and knee) to train five DCNNs, one to detect each anatomical region among the others, based on ResNet-18 pretrained on ImageNet (transfer learning). For each DCNN, the radiographs were randomly split into training (64%), validation (12%) and test (24%) data sets. The training and validation data sets were augmented 30-fold using standard preprocessing methods. We also tested our DCNNs on a separate test set of 100 radiographs from a single institution. Receiver operating characteristic (ROC) curves with area under the curve (AUC) were used to evaluate DCNN performance. Results: All five DCNNs achieved an ROC AUC of 1 on both test sets. Classification of the test radiographs occurred at a rate of 33 radiographs per second. Conclusion: DCNNs trained on a small set of images with 30-fold augmentation through standard preprocessing techniques can automatically classify pediatric musculoskeletal radiographs by anatomical region with near-perfect to perfect accuracy at superhuman speed. This concept may extend to other body parts and radiographic views, with the potential to create an all-encompassing semantic-labeling DCNN.

KW - Artificial intelligence

KW - Children

KW - Deep learning

KW - Machine learning

KW - Musculoskeletal

KW - Radiography

KW - Semantic labeling

UR - http://www.scopus.com/inward/record.url?scp=85065309187&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85065309187&partnerID=8YFLogxK

U2 - 10.1007/s00247-019-04408-2

DO - 10.1007/s00247-019-04408-2

M3 - Article

C2 - 31041454

AN - SCOPUS:85065309187

JO - Pediatric Radiology

JF - Pediatric Radiology

SN - 0301-0449

ER -