A randomized trial of ways to describe test accuracy: The effect on physicians' post-test probability estimates

Milo A. Puhan, Johann Steurer, Lucas M. Bachmann, Gerben Ter Riet

Research output: Contribution to journal › Article

Abstract

Background: Some people believe that likelihood ratios provide diagnostic information that is more useful than sensitivity and specificity estimates.
Objective: To assess how physicians' estimates about probability of illness are affected by the presentation of a diagnostic test's value as an estimate of sensitivity and specificity versus a likelihood ratio or an inexact numerical graphic.
Design: Random assignment of vignettes with different presentation formats of diagnostic test accuracy.
Setting: Auditorium at a continuing medical education conference.
Participants: 183 physicians.
Intervention: After estimating probabilities of 6 common illnesses described in patient vignettes, physicians reviewed pertinent test results presented in 1 of 3 formats.
Measurements: Physicians' probability estimates of illness before and after receiving test information, and post-test probability estimates based on the Bayes theorem.
Results: Absolute percentage point differences between the physicians' estimated and the Bayes-based post-test probabilities varied from -7 to 31, from -7 to 28, and from 1 to 29 for the sensitivity and specificity, likelihood ratio, and graphical groups, respectively. Mean differences of probability estimates between the sensitivity and specificity and the likelihood ratio groups were small for all vignettes (-2 to 3 percentage points; summary mean z value across the 6 vignettes, 0.04 [95% CI, -0.14 to 0.21]).
Limitations: The small pool of participants (who were potentially selected) and the limited number of vignettes prevented a more detailed analysis of relationships between the interpreted strength of diagnostic evidence and estimations of illness probability.
Conclusions: These findings suggest that presenting diagnostic test accuracy with likelihood ratios does not affect some physicians' estimates of illness probability compared with presenting diagnostic test results as sensitivity and specificity.
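
For reference, the Bayes-based post-test probabilities against which the physicians' estimates were compared can be computed either from sensitivity and specificity or from the positive likelihood ratio; the two routes are algebraically equivalent. The short Python sketch below is not part of the original article, and the pre-test probability, sensitivity, and specificity values in it are illustrative rather than taken from the study vignettes; it simply shows both calculations for a positive test result.

    # Minimal sketch (not from the article): a Bayes-based post-test probability
    # computed two ways -- from sensitivity and specificity, and from the
    # positive likelihood ratio. Input values are illustrative only.

    def post_test_prob_sens_spec(pretest, sensitivity, specificity):
        """Post-test probability after a positive test, via Bayes' theorem."""
        true_pos = sensitivity * pretest
        false_pos = (1 - specificity) * (1 - pretest)
        return true_pos / (true_pos + false_pos)

    def post_test_prob_lr(pretest, positive_lr):
        """Post-test probability after a positive test, via the odds form."""
        pretest_odds = pretest / (1 - pretest)
        posttest_odds = pretest_odds * positive_lr
        return posttest_odds / (1 + posttest_odds)

    if __name__ == "__main__":
        pretest, sens, spec = 0.30, 0.90, 0.80      # illustrative values, not study data
        lr_plus = sens / (1 - spec)                 # LR+ = sensitivity / (1 - specificity)
        print(f"LR+ = {lr_plus:.1f}")
        print(f"Post-test probability (sens/spec): {post_test_prob_sens_spec(pretest, sens, spec):.1%}")
        print(f"Post-test probability (LR+):       {post_test_prob_lr(pretest, lr_plus):.1%}")

With these illustrative inputs, both routes yield the same post-test probability (about 66%), which is why a single Bayes-based benchmark applies regardless of how the test's accuracy is presented.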

Original language: English (US)
Pages (from-to): 184-189
Number of pages: 6
Journal: Annals of Internal Medicine
Volume: 143
Issue number: 3
State: Published - Aug 2 2005
Externally published: Yes

Fingerprint

  • Physicians
  • Routine Diagnostic Tests
  • Sensitivity and Specificity
  • Continuing Medical Education
  • Bayes Theorem

ASJC Scopus subject areas

  • Medicine (all)

Cite this

A randomized trial of ways to describe test accuracy: The effect on physicians' post-test probability estimates. / Puhan, Milo A.; Steurer, Johann; Bachmann, Lucas M.; Ter Riet, Gerben.

In: Annals of Internal Medicine, Vol. 143, No. 3, 02.08.2005, p. 184-189.

Research output: Contribution to journal › Article

Puhan, Milo A.; Steurer, Johann; Bachmann, Lucas M.; Ter Riet, Gerben. / A randomized trial of ways to describe test accuracy: The effect on physicians' post-test probability estimates. In: Annals of Internal Medicine. 2005; Vol. 143, No. 3. pp. 184-189.
@article{a587eba7d56946039cf2cbfd56c53e39,
title = "A randomized trial of ways to describe test accuracy: The effect on physicians' post-test probability estimates",
abstract = "Background: Some people believe that likelihood ratios provide diagnostic information that is more useful than sensitivity and specificity estimates. Objective: To assess how physicians' estimates about probability of illness are affected by the presentation of a diagnostic test's value as an estimate of sensitivity and specificity versus a likelihood ratio or an inexact numerical graphic. Design: Random assignment of vignettes with different presentation formats of diagnostic test accuracy. Setting: Auditorium at a continuing medical education conference. Participants: 183 physicians. Intervention: After estimating probabilities of 6 common illnesses described in patient vignettes, physicians reviewed pertinent test results presented in 1 of 3 formats. Measurements: Physicians' probability estimates of illness before and after receiving test information, and post-test probability estimates based on the Bayes theorem. Results: Absolute percentage point differences between the physicians' estimated and the Bayes-based post-test probabilities varied from -7 to 31, from -7 to 28, and from 1 to 29 for the sensitivity and specificity, likelihood ratio, and graphical groups, respectively. Mean differences of probability estimates between the sensitivity and specificity and the likelihood ratio groups were small for all vignettes (-2 to 3 percentage points; summary mean z value across the 6 vignettes, 0.04 [95{\%} CI, -0.14 to 0.21]). Limitations: The small pool of participants (who were potentially selected) and the limited number of vignettes prevented a more detailed analysis of relationships between the interpreted strength of diagnostic evidence and estimations of illness probability. Conclusions: These findings suggest that presenting diagnostic test accuracy with likelihood ratios does not affect some physicians' estimates of illness probability compared with presenting diagnostic test results as sensitivity and specificity.",
author = "Puhan, {Milo A.} and Steurer, Johann and Bachmann, {Lucas M.} and {Ter Riet}, Gerben",
year = "2005",
month = "8",
day = "2",
language = "English (US)",
volume = "143",
pages = "184--189",
journal = "Annals of Internal Medicine",
issn = "0003-4819",
publisher = "American College of Physicians",
number = "3",

}

TY - JOUR

T1 - A randomized trial of ways to describe test accuracy

T2 - The effect on physicians' post-test probability estimates

AU - Puhan, Milo A.

AU - Steurer, Johann

AU - Bachmann, Lucas M.

AU - Ter Riet, Gerben

PY - 2005/8/2

Y1 - 2005/8/2

AB - Background: Some people believe that likelihood ratios provide diagnostic information that is more useful than sensitivity and specificity estimates. Objective: To assess how physicians' estimates about probability of illness are affected by the presentation of a diagnostic test's value as an estimate of sensitivity and specificity versus a likelihood ratio or an inexact numerical graphic. Design: Random assignment of vignettes with different presentation formats of diagnostic test accuracy. Setting: Auditorium at a continuing medical education conference. Participants: 183 physicians. Intervention: After estimating probabilities of 6 common illnesses described in patient vignettes, physicians reviewed pertinent test results presented in 1 of 3 formats. Measurements: Physicians' probability estimates of illness before and after receiving test information, and post-test probability estimates based on the Bayes theorem. Results: Absolute percentage point differences between the physicians' estimated and the Bayes-based post-test probabilities varied from -7 to 31, from -7 to 28, and from 1 to 29 for the sensitivity and specificity, likelihood ratio, and graphical groups, respectively. Mean differences of probability estimates between the sensitivity and specificity and the likelihood ratio groups were small for all vignettes (-2 to 3 percentage points; summary mean z value across the 6 vignettes, 0.04 [95% CI, -0.14 to 0.21]). Limitations: The small pool of participants (who were potentially selected) and the limited number of vignettes prevented a more detailed analysis of relationships between the interpreted strength of diagnostic evidence and estimations of illness probability. Conclusions: These findings suggest that presenting diagnostic test accuracy with likelihood ratios does not affect some physicians' estimates of illness probability compared with presenting diagnostic test results as sensitivity and specificity.

UR - http://www.scopus.com/inward/record.url?scp=23044481653&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=23044481653&partnerID=8YFLogxK

M3 - Article

C2 - 16061916

AN - SCOPUS:23044481653

VL - 143

SP - 184

EP - 189

JO - Annals of Internal Medicine

JF - Annals of Internal Medicine

SN - 0003-4819

IS - 3

ER -