Deep Learning Method for Automated Classification of Anteroposterior and Posteroanterior Chest Radiographs

Tae Kyung Kim, Paul H. Yi, Jinchi Wei, Ji Won Shin, Gregory Hager, Ferdinand Hui, Haris Sair, Cheng Lin

Research output: Contribution to journal › Article

Abstract

Ensuring correct radiograph view labeling is important for machine learning algorithm development and quality control of studies obtained from multiple facilities. The purpose of this study was to develop and test the performance of a deep convolutional neural network (DCNN) for the automated classification of frontal chest radiographs (CXRs) into anteroposterior (AP) or posteroanterior (PA) views. We obtained 112,120 CXRs from the NIH ChestX-ray14 database, a publicly available CXR database performed in adult (106,179 (95%)) and pediatric (5941 (5%)) patients consisting of 44,810 (40%) AP and 67,310 (60%) PA views. CXRs were used to train, validate, and test the ResNet-18 DCNN for classification of radiographs into anteroposterior and posteroanterior views. A second DCNN was developed in the same manner using only the pediatric CXRs (2885 (49%) AP and 3056 (51%) PA). Receiver operating characteristic (ROC) curves with area under the curve (AUC) and standard diagnostic measures were used to evaluate the DCNN’s performance on the test dataset. The DCNNs trained on the entire CXR dataset and pediatric CXR dataset had AUCs of 1.0 and 0.997, respectively, and accuracy of 99.6% and 98%, respectively, for distinguishing between AP and PA CXR. Sensitivity and specificity were 99.6% and 99.5%, respectively, for the DCNN trained on the entire dataset and 98% for both sensitivity and specificity for the DCNN trained on the pediatric dataset. The observed difference in performance between the two algorithms was not statistically significant (p = 0.17). Our DCNNs have high accuracy for classifying AP/PA orientation of frontal CXRs, with only slight reduction in performance when the training dataset was reduced by 95%. Rapid classification of CXRs by the DCNN can facilitate annotation of large image datasets for machine learning and quality assurance purposes.
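The standard diagnostic measures reported above (sensitivity, specificity, accuracy) follow directly from the binary confusion matrix, with one view class treated as "positive." A minimal sketch in Python, using hypothetical counts for illustration (not the study's actual confusion matrix):

```python
def diagnostic_measures(tp, fp, tn, fn):
    """Compute standard diagnostic measures from binary confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                # true-positive rate
    specificity = tn / (tn + fp)                # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)  # overall fraction correct
    return sensitivity, specificity, accuracy

# Hypothetical test-split counts, AP treated as the "positive" class
sens, spec, acc = diagnostic_measures(tp=498, fp=3, tn=597, fn=2)
print(f"sensitivity={sens:.3f} specificity={spec:.3f} accuracy={acc:.3f}")
```

With these illustrative counts, sensitivity is 498/500 = 0.996 and specificity is 597/600 = 0.995, matching the scale of the figures reported for the full-dataset DCNN.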

Original language: English (US)
Journal: Journal of Digital Imaging
DOI: 10.1007/s10278-019-00208-0
State: Accepted/In press - Jan 1 2019

Keywords

  • Artificial intelligence
  • Deep convolutional neural networks
  • Deep learning
  • PACS

ASJC Scopus subject areas

  • Radiological and Ultrasound Technology
  • Radiology, Nuclear Medicine and Imaging
  • Computer Science Applications

Cite this

Deep Learning Method for Automated Classification of Anteroposterior and Posteroanterior Chest Radiographs. / Kim, Tae Kyung; Yi, Paul H.; Wei, Jinchi; Shin, Ji Won; Hager, Gregory; Hui, Ferdinand; Sair, Haris; Lin, Cheng.

In: Journal of Digital Imaging, 01.01.2019.

Research output: Contribution to journal › Article

@article{514cbee47259465b8ba433b24c249682,
title = "Deep Learning Method for Automated Classification of Anteroposterior and Posteroanterior Chest Radiographs",
abstract = "Ensuring correct radiograph view labeling is important for machine learning algorithm development and quality control of studies obtained from multiple facilities. The purpose of this study was to develop and test the performance of a deep convolutional neural network (DCNN) for the automated classification of frontal chest radiographs (CXRs) into anteroposterior (AP) or posteroanterior (PA) views. We obtained 112,120 CXRs from the NIH ChestX-ray14 database, a publicly available CXR database performed in adult (106,179 (95{\%})) and pediatric (5941 (5{\%})) patients consisting of 44,810 (40{\%}) AP and 67,310 (60{\%}) PA views. CXRs were used to train, validate, and test the ResNet-18 DCNN for classification of radiographs into anteroposterior and posteroanterior views. A second DCNN was developed in the same manner using only the pediatric CXRs (2885 (49{\%}) AP and 3056 (51{\%}) PA). Receiver operating characteristic (ROC) curves with area under the curve (AUC) and standard diagnostic measures were used to evaluate the DCNN’s performance on the test dataset. The DCNNs trained on the entire CXR dataset and pediatric CXR dataset had AUCs of 1.0 and 0.997, respectively, and accuracy of 99.6{\%} and 98{\%}, respectively, for distinguishing between AP and PA CXR. Sensitivity and specificity were 99.6{\%} and 99.5{\%}, respectively, for the DCNN trained on the entire dataset and 98{\%} for both sensitivity and specificity for the DCNN trained on the pediatric dataset. The observed difference in performance between the two algorithms was not statistically significant (p = 0.17). Our DCNNs have high accuracy for classifying AP/PA orientation of frontal CXRs, with only slight reduction in performance when the training dataset was reduced by 95{\%}. Rapid classification of CXRs by the DCNN can facilitate annotation of large image datasets for machine learning and quality assurance purposes.",
keywords = "Artificial intelligence, Deep convolutional neural networks, Deep learning, PACS",
author = "Kim, {Tae Kyung} and Yi, {Paul H.} and Jinchi Wei and Shin, {Ji Won} and Gregory Hager and Ferdinand Hui and Haris Sair and Cheng Lin",
year = "2019",
month = "1",
day = "1",
doi = "10.1007/s10278-019-00208-0",
language = "English (US)",
journal = "Journal of Digital Imaging",
issn = "0897-1889",
publisher = "Springer New York",

}

TY - JOUR

T1 - Deep Learning Method for Automated Classification of Anteroposterior and Posteroanterior Chest Radiographs

AU - Kim, Tae Kyung

AU - Yi, Paul H.

AU - Wei, Jinchi

AU - Shin, Ji Won

AU - Hager, Gregory

AU - Hui, Ferdinand

AU - Sair, Haris

AU - Lin, Cheng

PY - 2019/1/1

Y1 - 2019/1/1

N2 - Ensuring correct radiograph view labeling is important for machine learning algorithm development and quality control of studies obtained from multiple facilities. The purpose of this study was to develop and test the performance of a deep convolutional neural network (DCNN) for the automated classification of frontal chest radiographs (CXRs) into anteroposterior (AP) or posteroanterior (PA) views. We obtained 112,120 CXRs from the NIH ChestX-ray14 database, a publicly available CXR database performed in adult (106,179 (95%)) and pediatric (5941 (5%)) patients consisting of 44,810 (40%) AP and 67,310 (60%) PA views. CXRs were used to train, validate, and test the ResNet-18 DCNN for classification of radiographs into anteroposterior and posteroanterior views. A second DCNN was developed in the same manner using only the pediatric CXRs (2885 (49%) AP and 3056 (51%) PA). Receiver operating characteristic (ROC) curves with area under the curve (AUC) and standard diagnostic measures were used to evaluate the DCNN’s performance on the test dataset. The DCNNs trained on the entire CXR dataset and pediatric CXR dataset had AUCs of 1.0 and 0.997, respectively, and accuracy of 99.6% and 98%, respectively, for distinguishing between AP and PA CXR. Sensitivity and specificity were 99.6% and 99.5%, respectively, for the DCNN trained on the entire dataset and 98% for both sensitivity and specificity for the DCNN trained on the pediatric dataset. The observed difference in performance between the two algorithms was not statistically significant (p = 0.17). Our DCNNs have high accuracy for classifying AP/PA orientation of frontal CXRs, with only slight reduction in performance when the training dataset was reduced by 95%. Rapid classification of CXRs by the DCNN can facilitate annotation of large image datasets for machine learning and quality assurance purposes.

AB - Ensuring correct radiograph view labeling is important for machine learning algorithm development and quality control of studies obtained from multiple facilities. The purpose of this study was to develop and test the performance of a deep convolutional neural network (DCNN) for the automated classification of frontal chest radiographs (CXRs) into anteroposterior (AP) or posteroanterior (PA) views. We obtained 112,120 CXRs from the NIH ChestX-ray14 database, a publicly available CXR database performed in adult (106,179 (95%)) and pediatric (5941 (5%)) patients consisting of 44,810 (40%) AP and 67,310 (60%) PA views. CXRs were used to train, validate, and test the ResNet-18 DCNN for classification of radiographs into anteroposterior and posteroanterior views. A second DCNN was developed in the same manner using only the pediatric CXRs (2885 (49%) AP and 3056 (51%) PA). Receiver operating characteristic (ROC) curves with area under the curve (AUC) and standard diagnostic measures were used to evaluate the DCNN’s performance on the test dataset. The DCNNs trained on the entire CXR dataset and pediatric CXR dataset had AUCs of 1.0 and 0.997, respectively, and accuracy of 99.6% and 98%, respectively, for distinguishing between AP and PA CXR. Sensitivity and specificity were 99.6% and 99.5%, respectively, for the DCNN trained on the entire dataset and 98% for both sensitivity and specificity for the DCNN trained on the pediatric dataset. The observed difference in performance between the two algorithms was not statistically significant (p = 0.17). Our DCNNs have high accuracy for classifying AP/PA orientation of frontal CXRs, with only slight reduction in performance when the training dataset was reduced by 95%. Rapid classification of CXRs by the DCNN can facilitate annotation of large image datasets for machine learning and quality assurance purposes.

KW - Artificial intelligence

KW - Deep convolutional neural networks

KW - Deep learning

KW - PACS

UR - http://www.scopus.com/inward/record.url?scp=85072025253&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85072025253&partnerID=8YFLogxK

U2 - 10.1007/s10278-019-00208-0

DO - 10.1007/s10278-019-00208-0

M3 - Article

C2 - 30972585

AN - SCOPUS:85072025253

JO - Journal of Digital Imaging

JF - Journal of Digital Imaging

SN - 0897-1889

ER -