RADPEER quality assurance program

A multifacility study of interpretive disagreement rates

James P. Borgstede, Rebecca S. Lewis, Mythreyi Bhargavan, Jonathan H. Sunshine

Research output: Contribution to journal › Article

Abstract

Purpose: To develop and test a radiology peer review system that adds minimally to workload, is confidential, is uniform across practices, and provides useful information to meet the mandate for "evaluation of performance in practice" that is forthcoming from the American Board of Medical Specialties as one of the four elements of maintenance of certification.

Method: Under RADPEER, radiologists who review previous images as part of a new interpretation record their ratings of the previous interpretations on a 4-point scale. Reviewing radiologists' ratings of 3 and 4 (disagreements in nondifficult cases) are reviewed by a peer review committee in each practice to judge whether they represent misinterpretations by the original radiologists. Final ratings are sent for central data entry and analysis. A pilot test of RADPEER was conducted in 2002.

Results: Fourteen facilities participated in the pilot test, submitting a total of 20,286 cases. Disagreements in difficult cases (ratings of 2) averaged 2.9% of all cases; committee-validated misinterpretations in nondifficult cases averaged 0.8% of all cases. There were considerable differences by modality and substantial differences across facilities; few of the facility differences were explicable by mix of modalities, facility size or type, or participation early versus late in the pilot test. Of 31 radiologists who interpreted over 200 cases, 2 had misinterpretation rates significantly (P < .05) above what would be expected given their individual mix of modalities and the average misinterpretation rate for each modality in their practice.

Conclusions: A substantial number of facilities participated in the pilot test, and all maintained their participation throughout the year. The data generated are useful for the peer review of individual radiologists and for showing differences by modality. RADPEER is now operational and is a good solution to the need for a peer review system with the desirable characteristics listed above.
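
The expected-rate comparison described in the Results (a radiologist's observed misinterpretations versus the count expected from his or her own modality mix and the practice-wide per-modality rates) can be made concrete with a short sketch. The Python below is illustrative only: the Case fields, the use of committee-validated ratings of 3 and 4 as "misinterpretations," and the Poisson approximation of a one-sided significance test at P < .05 are assumptions made for illustration; the paper does not publish its analysis code or specify the exact test used.

from collections import defaultdict
from dataclasses import dataclass
from math import exp

@dataclass
class Case:
    radiologist: str
    modality: str
    rating: int                # reviewing radiologist's score on the 4-point scale
    committee_validated: bool  # True if the peer review committee upheld a rating of 3 or 4

def practice_rates(cases):
    """Committee-validated misinterpretation rate (ratings 3/4) per modality."""
    totals, misses = defaultdict(int), defaultdict(int)
    for c in cases:
        totals[c.modality] += 1
        if c.rating >= 3 and c.committee_validated:
            misses[c.modality] += 1
    return {m: misses[m] / totals[m] for m in totals}

def poisson_upper_tail(k, mu):
    """P(X >= k) for X ~ Poisson(mu), used here as a rough one-sided test."""
    if k <= 0:
        return 1.0
    term = cdf = exp(-mu)
    for i in range(1, k):
        term *= mu / i
        cdf += term
    return 1.0 - cdf

def flag_outliers(cases, min_cases=200, alpha=0.05):
    """Flag radiologists whose observed misinterpretation count is significantly
    above the count expected from their own modality mix and the practice rates."""
    rates = practice_rates(cases)
    by_radiologist = defaultdict(list)
    for c in cases:
        by_radiologist[c.radiologist].append(c)
    flagged = []
    for rad, recs in by_radiologist.items():
        if len(recs) < min_cases:
            continue
        observed = sum(1 for c in recs if c.rating >= 3 and c.committee_validated)
        expected = sum(rates[c.modality] for c in recs)  # expected misinterpretation count
        if poisson_upper_tail(observed, expected) < alpha:
            flagged.append((rad, observed, expected))
    return flagged

Because the expected count is the sum of per-case practice rates, a radiologist whose caseload is concentrated in high-disagreement modalities is not penalized for that mix, which is the point of the comparison described in the abstract.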

Original language: English (US)
Pages (from-to): 59-65
Number of pages: 7
Journal: Journal of the American College of Radiology
Volume: 1
Issue number: 1
DOI: 10.1016/S1546-1440(03)00002-4
ISSN: 1558-349X
Publisher: Elsevier BV
State: Published - 2004
Externally published: Yes

Keywords

  • Disagreement rate
  • Interpretation
  • Misinterpretation
  • Observer performance
  • Quality assurance
  • RADPEER

ASJC Scopus subject areas

  • Radiology, Nuclear Medicine and Imaging
  • Radiological and Ultrasound Technology

Cite this

Borgstede, J. P., Lewis, R. S., Bhargavan, M., & Sunshine, J. H. (2004). RADPEER quality assurance program: A multifacility study of interpretive disagreement rates. Journal of the American College of Radiology, 1(1), 59-65. https://doi.org/10.1016/S1546-1440(03)00002-4
