RADPEER quality assurance program: A multifacility study of interpretive disagreement rates

James P. Borgstede, Rebecca S. Lewis, Mythreyi Bhargavan, Jonathan H. Sunshine

Research output: Contribution to journal › Article

Abstract

Purpose: To develop and test a radiology peer review system that adds minimally to workload, is confidential, uniform across practices, and provides useful information to meet the mandate for "evaluation of performance in practice" that is forthcoming from the American Board of Medical Specialties as one of the four elements of maintenance of certification.

Method: In RADPEER, radiologists who review previous images as part of a new interpretation record their ratings of the previous interpretations on a 4-point scale. Reviewing radiologists' ratings of 3 and 4 (disagreements in nondifficult cases) are reviewed by a peer review committee in each practice to judge whether they are misinterpretations by the original radiologists. Final ratings are sent for central data entry and analysis. A pilot test of RADPEER was conducted in 2002.

Results: Fourteen facilities participated in the pilot test, submitting a total of 20,286 cases. Disagreements in difficult cases (ratings of 2) averaged 2.9% of all cases. Committee-validated misinterpretations in nondifficult cases averaged 0.8% of all cases. There were considerable differences by modality and substantial differences across facilities; few of the facility differences were explicable by mix of modalities, facility size or type, or participation early versus late in the pilot test. Of 31 radiologists who interpreted more than 200 cases, 2 had misinterpretation rates significantly (P < .05) above what would be expected given their individual mix of modalities and the average misinterpretation rate for each modality in their practice.

Conclusions: A substantial number of facilities participated in the pilot test, and all maintained their participation throughout the year. The data generated are useful for the peer review of individual radiologists and for showing differences by modality. RADPEER is now operational and is a good solution to the need for a peer review system with the desirable characteristics listed above.
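The per-radiologist significance test described in the results lends itself to a short illustration. The sketch below (Python; not taken from the article, which does not specify its exact test) computes a radiologist's expected misinterpretation count from his or her individual modality mix and the practice's per-modality average rates, then evaluates the exact upper-tail probability of the observed count under a sum-of-independent-binomials model. All modality names, rates, and case counts are hypothetical values chosen for illustration.

```python
from math import comb

def binom_pmf(n, p):
    """PMF of Binomial(n, p) as a list indexed by count 0..n."""
    return [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

def convolve(a, b):
    """Distribution of the sum of two independent integer-valued variables."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, pa in enumerate(a):
        for j, pb in enumerate(b):
            out[i + j] += pa * pb
    return out

def upper_tail_p(case_mix, practice_rates, observed):
    """
    P(X >= observed), where X is the radiologist's total misinterpretation
    count modeled as a sum over modalities of Binomial(n_m, p_m):
    n_m = cases the radiologist read in modality m,
    p_m = the practice's average misinterpretation rate for modality m.
    """
    dist = [1.0]  # point mass at 0 misinterpretations
    for modality, n in case_mix.items():
        dist = convolve(dist, binom_pmf(n, practice_rates[modality]))
    return sum(dist[observed:])

# Hypothetical radiologist reading 250 cases across three modalities.
case_mix = {"CT": 120, "MRI": 60, "radiography": 70}
practice_rates = {"CT": 0.010, "MRI": 0.012, "radiography": 0.005}  # assumed
observed = 6  # committee-validated misinterpretations for this radiologist

expected = sum(n * practice_rates[m] for m, n in case_mix.items())
p = upper_tail_p(case_mix, practice_rates, observed)
print(f"Expected misinterpretations: {expected:.2f}")
print(f"P(X >= {observed}) = {p:.4f}")  # below .05 would flag this radiologist
```

With these assumed numbers the expected count is about 2.3, and the upper-tail probability for 6 observed misinterpretations falls below .05, so this hypothetical radiologist would be flagged under the study's P < .05 criterion.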

Original language: English (US)
Pages (from-to): 59-65
Number of pages: 7
Journal: Journal of the American College of Radiology
Volume: 1
Issue number: 1
DOIs
State: Published - Jan 2004

Keywords

  • Disagreement rate
  • Interpretation
  • Misinterpretation
  • Observer performance
  • Quality assurance
  • RADPEER

ASJC Scopus subject areas

  • Radiology, Nuclear Medicine and Imaging
