Development of a case report review instrument

Research output: Contribution to journal › Article

Abstract

Case reports are valued components of the medical literature. The assessment of case reports by editors of medical journals and peer reviewers is largely subjective. The purpose of this study was to develop a reliable instrument to evaluate the quality of written case reports. Instrument development involved a review of the literature and of the materials provided to peer reviewers who review manuscripts, communications with journal editors and discussions among the study team. After multiple amendments, the instrument was pilot tested on both published and unpublished case reports. Further revisions resulted in the final 11-item tool. Four independent reviewers evaluated 28 case reports, in their original submission format, that had been submitted to five medical journals. The reviewers were blinded to the specific journal to which the manuscripts had been submitted and to whether the case reports had been accepted for publication. Inter-rater reliability, assessed using multirater kappa, ranged from 0.03 to 0.90. The four variables with the highest agreement among raters were (i) rationale for writing the case report; (ii) implications of the case report; (iii) adequacy of the literature review; and (iv) overall impression about whether to accept or reject the manuscript (kappas of 0.67, 0.67, 0.90 and 0.67, respectively). Six of the instrument's first 10 variables were highly correlated with the reviewers' decision about whether to accept or reject the case report for publication (item 11) (all p < 0.001). No correlation existed between the reviewers' decision to accept or reject the manuscript and the actual decision made by the various journals. The case report review instrument is the first such tool for objectively evaluating case reports and appears to have reasonable reliability. Medical journals may wish to incorporate this instrument into decision making about a case report's suitability for publication.
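The abstract reports agreement as a "multirater kappa" across four raters but does not name the exact statistic; Fleiss' kappa is the standard multirater generalisation of Cohen's kappa, so the sketch below assumes it. The ratings matrix here is hypothetical (the study's raw data are not reproduced in this record); only the dimensions (28 case reports, four raters) come from the abstract.

import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for a counts matrix of shape (N items, k categories),
    where counts[i, j] is the number of raters who assigned item i to
    category j. Every row must sum to the same number of raters n."""
    counts = np.asarray(counts, dtype=float)
    N = counts.shape[0]
    n = counts.sum(axis=1)[0]                       # raters per item (constant)
    # Per-item observed agreement: pairs of raters who agree, out of n(n-1).
    P_i = (np.sum(counts ** 2, axis=1) - n) / (n * (n - 1))
    P_bar = P_i.mean()                              # mean observed agreement
    p_j = counts.sum(axis=0) / (N * n)              # overall category proportions
    P_e = np.sum(p_j ** 2)                          # chance-expected agreement
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical example: 4 raters scoring 28 case reports on one binary
# item (e.g. accept = 1 / reject = 0), tallied into per-category counts.
rng = np.random.default_rng(0)
ratings = rng.integers(0, 2, size=(28, 4))          # 28 reports x 4 raters
counts = np.stack([(ratings == c).sum(axis=1) for c in (0, 1)], axis=1)
print(round(float(fleiss_kappa(counts)), 2))

Each of the instrument's 11 items would be assessed this way separately, which is consistent with the abstract's report of per-item kappas ranging from 0.03 to 0.90.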

Original language: English (US)
Pages (from-to): 457-461
Number of pages: 5
Journal: International Journal of Clinical Practice
Volume: 59
Issue number: 4
DOI: 10.1111/j.1368-5031.2005.00319.x
State: Published - Apr 2005

Keywords

  • Case report
  • Peer review

ASJC Scopus subject areas

  • Medicine(all)

Cite this

Development of a case report review instrument. / Ramulu, Vandana; Levine, Rachel; Hebert, R. S.; Wright, Scott.

In: International Journal of Clinical Practice, Vol. 59, No. 4, 04.2005, p. 457-461.

Research output: Contribution to journal › Article

@article{56476e59ab5d4cbcb0edbe16421af031,
title = "Development of a case report review instrument",
keywords = "Case report, Peer review",
author = "Vandana Ramulu and Rachel Levine and Hebert, {R. S.} and Scott Wright",
year = "2005",
month = "4",
doi = "10.1111/j.1368-5031.2005.00319.x",
language = "English (US)",
volume = "59",
pages = "457--461",
journal = "International Journal of Clinical Practice",
issn = "1368-5031",
publisher = "Wiley-Blackwell",
number = "4",
}

TY - JOUR
T1 - Development of a case report review instrument
AU - Ramulu, Vandana
AU - Levine, Rachel
AU - Hebert, R. S.
AU - Wright, Scott
PY - 2005/4
Y1 - 2005/4
KW - Case report
KW - Peer review
UR - http://www.scopus.com/inward/record.url?scp=16344380750&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=16344380750&partnerID=8YFLogxK
U2 - 10.1111/j.1368-5031.2005.00319.x
DO - 10.1111/j.1368-5031.2005.00319.x
M3 - Article
C2 - 15853865
AN - SCOPUS:16344380750
VL - 59
SP - 457
EP - 461
JO - International Journal of Clinical Practice
JF - International Journal of Clinical Practice
SN - 1368-5031
IS - 4
ER -