Cherry-picking by trialists and meta-analysts can drive conclusions about intervention efficacy

Evan Mayo-Wilson, Tianjing Li, Nicole Fusco, Lorenzo Bertizzolo, Joseph K. Canner, Terrie Cowley, Peter Doshi, Jeffrey Ehmsen, Gillian Gresham, Nan Guo, Jennifer A. Haythornthwaite, James Heyward, Hwanhee Hong, Diana Pham, Jennifer L. Payne, Lori Rosman, Elizabeth A. Stuart, Catalina Suarez-Cuervo, Elizabeth Tolbert, Claire Twose, Swaroop Vedula, Kay Dickersin

Research output: Contribution to journal › Article

Abstract

Objectives: The objective of this study was to determine whether disagreements among multiple data sources affect systematic reviews of randomized clinical trials (RCTs).

Study Design and Setting: Eligible RCTs examined gabapentin for neuropathic pain and quetiapine for bipolar depression, reported in public (e.g., journal articles) and nonpublic sources (clinical study reports [CSRs] and individual participant data [IPD]).

Results: We found 21 gabapentin RCTs (74 reports, 6 IPD) and 7 quetiapine RCTs (50 reports, 1 IPD); most were reported in journal articles (18/21 [86%] and 6/7 [86%], respectively). When available, CSRs contained the most trial design and risk of bias information. CSRs and IPD contained the most results. For the outcome domains "pain intensity" (gabapentin) and "depression" (quetiapine), we found single trials with 68 and 98 different meta-analyzable results, respectively; by purposefully selecting one meta-analyzable result for each RCT, we could change the overall result for pain intensity from effective (standardized mean difference [SMD] = −0.45; 95% confidence interval [CI]: −0.63 to −0.27) to ineffective (SMD = −0.06; 95% CI: −0.24 to 0.12). We could change the effect for depression from a medium effect (SMD = −0.55; 95% CI: −0.85 to −0.25) to a small effect (SMD = −0.26; 95% CI: −0.41 to −0.10).

Conclusions: Disagreements across data sources affect the effect size, statistical significance, and interpretation of trials and meta-analyses.
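The mechanism the abstract describes can be illustrated with a standard fixed-effect inverse-variance meta-analysis: because each trial offers many meta-analyzable results, which one the analyst picks changes each trial's contribution and can move the pooled SMD. The sketch below is illustrative only; the trial values are hypothetical, not data from this study, and the pooling formula is the generic inverse-variance method rather than the authors' exact analysis.

```python
# Hedged sketch: fixed-effect inverse-variance pooling of standardized mean
# differences (SMDs), showing how choosing a different result from one trial
# shifts the pooled estimate. All numbers are made up for illustration.
import math


def pool_smd(estimates):
    """Pooled SMD and 95% CI from (smd, standard_error) pairs, one per trial.

    Each trial is weighted by the inverse of its variance (1/se^2); the
    pooled standard error is sqrt(1 / sum of weights).
    """
    weights = [1.0 / se ** 2 for _, se in estimates]
    pooled = sum(w * smd for (smd, _), w in zip(estimates, weights)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)


# Three hypothetical trials; the second list swaps in a different
# (less favorable) result from the third trial's many candidate results.
pick_a = [(-0.50, 0.10), (-0.40, 0.15), (-0.45, 0.12)]
pick_b = [(-0.50, 0.10), (-0.40, 0.15), (0.30, 0.12)]

print(pool_smd(pick_a))  # pooled SMD near -0.46: looks clearly effective
print(pool_smd(pick_b))  # pooled SMD attenuates toward zero
```

Swapping a single per-trial result moves the pooled estimate substantially, which is the cherry-picking effect the study quantifies at scale (68 and 98 candidate results within single trials).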

Original language: English (US)
Pages (from-to): 95-110
Number of pages: 16
Journal: Journal of Clinical Epidemiology
Volume: 91
DOI: https://doi.org/10.1016/j.jclinepi.2017.07.014
State: Published - Nov 2017

Keywords

  • Clinical trials
  • Meta-analysis
  • Reporting bias
  • Risk of bias assessment
  • Selective outcome reporting
  • Systematic reviews

ASJC Scopus subject areas

  • Epidemiology


Cite this

Mayo-Wilson, E., Li, T., Fusco, N., Bertizzolo, L., Canner, J. K., Cowley, T., Doshi, P., Ehmsen, J., Gresham, G., Guo, N., Haythornthwaite, J. A., Heyward, J., Hong, H., Pham, D., Payne, J. L., Rosman, L., Stuart, E. A., Suarez-Cuervo, C., Tolbert, E., ... Dickersin, K. (2017). Cherry-picking by trialists and meta-analysts can drive conclusions about intervention efficacy. Journal of Clinical Epidemiology, 91, 95-110. https://doi.org/10.1016/j.jclinepi.2017.07.014