Current status and future wish list of peer review: A national questionnaire of U.S. Radiologists

Cindy S. Lee, Christopher Neumann, Priyanka Jha, Deborah A. Baumgarten, Linda Chu, Marie Surovitsky, Nadja Kadom

Research output: Contribution to journal › Article › peer-review

Abstract

OBJECTIVE. Most peer review programs focus on error detection, numeric scoring, and radiologist-specific error rates. The effectiveness of this method for learning and systematic improvement is uncertain at best. Radiologists have been pushing for a transition from an individually punitive peer review system to a peer-learning model. This national questionnaire of U.S. radiologists aims to assess the current status of peer review and opportunities for improvement. MATERIALS AND METHODS. A 21-question multiple-choice questionnaire was developed, and its face validity was assessed by the ARRS Performance Quality Improvement subcommittee. The questionnaire was e-mailed to 17,695 ARRS members and open for 4 weeks; two e-mail reminders were sent. Response collection was anonymous. Only responses from board-certified, practicing radiologists participating in peer review were analyzed. RESULTS. The response rate was 4.2% (742/17,695), and 73.7% (547/742) of responses met inclusion criteria. Most responders were in private practice (51.7%, 283/547), in a group of 11-50 radiologists (50.5%), and in an urban setting (61.6%). Significant diversity was noted in peer review systems: RADPEER was used by less than half (45.0%), and cases were selected most commonly by commercial software (36.2%) or manually (31.2%). There was no consensus on the number of required peer reviews per month (10-20 cases, 32.1%; > 20 cases, 29.1%; < 10 cases, 21.7%). Less than half (43.7%) did not use peer review for group education. Whereas most (67.7%) were notified of their peer review results individually, 21.5% were not notified at all. Nearly half (44.5%) were dissatisfied, citing insufficient learning (94.0%) and inaccurate representation of their performance improvement (75.5%). The group discrepancy rate was unknown to most radiologists who participate in peer review (54.3%). Submission bias was the main reason cited for underreporting of serious discrepancies (49.0%).
Most found four peer-learning methods feasible in daily practice: incidental observation, 65.1%; focused practice review, 52.9%; professional auditing, 45.8%; and blinded double reading, 35.4%. CONCLUSION. More than half of participants reported that peer review data are used for educational purposes. However, significant diversity remains in current peer review practice, with no agreement on the number of required reviews, the method of case selection, or the oversight of results. Nearly half of the radiologists reported insufficient learning, although most felt a better system would be feasible in daily practice.

Original language: English (US)
Pages (from-to): 493-497
Number of pages: 5
Journal: American Journal of Roentgenology
Volume: 214
Issue number: 3
State: Published - 2020

Keywords

  • Competency
  • Peer learning
  • Peer review
  • Questionnaire
  • Radiologist

ASJC Scopus subject areas

  • Radiology, Nuclear Medicine and Imaging

