Quantifying over-estimation in early stopped clinical trials and the "freezing effect" on subsequent research

Hao Wang, Gary L. Rosner, Steven N. Goodman

Research output: Contribution to journal › Article › peer-review

Abstract

Background: Despite the wide use of designs with statistical stopping guidelines to stop a randomized clinical trial early for efficacy, debate continues about the potentially harmful consequences of such designs. These concerns include possible over-estimation of treatment effects in early stopped trials and a newer argument that a "freezing effect" will halt future randomized clinical trials on the same comparison, since an early stopped trial represents an effective declaration that randomization to the unfavored arm is unethical. The purpose of this study is to determine the degree of bias in designs that allow for early stopping and to assess the impact on estimation if future experimentation is indeed "frozen" by an early stopped trial.

Methods: We perform simulations to study the effect of early stopping. We simulate a collection of trials and contrast the treatment-effect estimates (risk differences and ratios) with the simulation truth. Simulations consider various scenarios of between-study variation, including an empirically derived distribution of effects from the clinical literature.

Results: Across the trials whose true effects are sampled from a uniform distribution, estimates from trials that stop early for efficacy deviate minimally from the simulation truth (median bias of the risk difference estimate is 0.005). Over-estimation becomes appreciable only when the true effect is close to the null value 0 (median bias of the risk difference estimate is 0.04) or when stopping happens with 40% information or less; however, stopping under these situations is rare. We also find slight reverse bias of the estimated treatment effect (median bias of the risk difference estimate is -0.002) among trials that do not cross the early stopping boundaries but continue to the final analysis. Similar results occur with relative risk estimates. In contrast, Bayesian estimation shrinks the estimate from trials stopping early and pulls back under-estimation from completed trials, largely rectifying any over-estimation among trials that terminate early. Regarding the so-called freezing effect, the pooled effects from meta-analyses that include truncated randomized clinical trials show an unimportant deviation from the true value, even when no subsequent trials are conducted after a truncated randomized clinical trial.

Conclusion: Group sequential designs with stopping rules seek to minimize exposure of patients to a disfavored therapy and to speed dissemination of results, and such designs do not lead to materially biased estimates. The likelihood and magnitude of a "freezing effect" are minimal. Superiority demonstrated in a randomized clinical trial that stops early under appropriate statistical stopping rules is likely a valid inference, even if the estimate may be slightly inflated.
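The simulation design described in the Methods can be illustrated with a minimal sketch. This is not the authors' code: the boundary value (z = 2.8, roughly O'Brien-Fleming-like at a 50% information interim look), the event rates, and the sample sizes are illustrative assumptions. It shows the core mechanism the abstract discusses: among trials that cross an efficacy boundary at an interim analysis, the recorded risk-difference estimate is conditionally inflated relative to the simulation truth.

```python
# Hypothetical simulation sketch (not the published code). Two-arm trials with a
# binary outcome and a single interim efficacy look; the z-boundary of 2.8 and
# all rates/sizes below are illustrative assumptions, not values from the paper.
import math
import random


def simulate_trial(p_ctrl, p_trt, n_per_arm, interim_frac=0.5, z_bound=2.8,
                   rng=random):
    """Run one trial; return (stopped_early, estimated risk difference)."""
    n_interim = int(n_per_arm * interim_frac)
    ctrl = [1 if rng.random() < p_ctrl else 0 for _ in range(n_per_arm)]
    trt = [1 if rng.random() < p_trt else 0 for _ in range(n_per_arm)]

    def risk_diff_and_z(n):
        # Risk difference and its Wald z-statistic on the first n per arm.
        pc = sum(ctrl[:n]) / n
        pt = sum(trt[:n]) / n
        rd = pt - pc
        se = math.sqrt(pc * (1 - pc) / n + pt * (1 - pt) / n)
        return rd, (rd / se if se > 0 else 0.0)

    rd, z = risk_diff_and_z(n_interim)
    if z > z_bound:                        # efficacy boundary crossed: stop early
        return True, rd
    rd, _ = risk_diff_and_z(n_per_arm)     # otherwise continue to final analysis
    return False, rd


def run(n_trials=2000, true_rd=0.10, seed=1):
    """Simulate many trials; split recorded estimates by stopping status."""
    rng = random.Random(seed)
    stopped, completed = [], []
    for _ in range(n_trials):
        early, rd = simulate_trial(0.30, 0.30 + true_rd, 200, rng=rng)
        (stopped if early else completed).append(rd)
    return stopped, completed
```

Comparing the mean estimate in `stopped` against `true_rd` reproduces the conditional over-estimation among truncated trials, while pooling `stopped` and `completed` together illustrates why the aggregate (meta-analytic) deviation from the truth is much smaller, as the abstract reports.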

Original language: English (US)
Pages (from-to): 621-631
Number of pages: 11
Journal: Clinical Trials
Volume: 13
Issue number: 6
DOIs
State: Published - Dec 1 2016

Keywords

  • "freezing effect"
  • Bayesian inference
  • Clinical trial methodology
  • bias
  • early stopping
  • meta-analysis
  • over-estimate
  • treatment effect

ASJC Scopus subject areas

  • Pharmacology

