Developing best practices to study trauma outcomes in large databases: An evidence-based approach to determine the best mortality risk adjustment model

Adil H. Haider, Zain G. Hashmi, Syed Nabeel Zafar, Renan Castillo, Elliott R. Haut, Eric B. Schneider, Edward E. Cornwell, Ellen J. Mackenzie, David T. Efron

Research output: Contribution to journal › Article › peer-review

Abstract

Background: The National Trauma Data Bank (NTDB) is an invaluable resource for studying trauma outcomes. Recent evidence suggests great variability in how covariates are handled and included in multivariable analyses using the NTDB, leading to differences in the quality of published studies and potentially in the benchmarking of trauma centers. Our objectives were to identify the best possible mortality risk adjustment model (RAM) and to define the minimum number of covariates required to adequately predict trauma mortality in the NTDB.

Methods: Analysis of NTDB 2009 was performed to identify the best RAM for trauma mortality. For each plausible NTDB covariate, univariate logistic regression was performed, and the area under the receiver operating characteristic curve (AUROC, with 95% confidence interval [CI]) was calculated. Covariates with p < 0.01 and an AUROC of 0.6 or greater, or with strong previous evidence, were included in the subsequent multivariate logistic regression analyses. Manual backward selection was then used to identify the most parsimonious RAM with a similar AUROC (overlapping 95% CI). Similar analyses were performed for penetrating and severely injured patient subsets. All models were validated using NTDB 2010.

Results: A total of 630,307 patients from NTDB 2009 were analyzed. Of 106 NTDB covariates tested on univariate analyses, 16 were selected for inclusion in the initial multivariate model. The best RAM included only six covariates (age, hypotension, pulse, total Glasgow Coma Scale [GCS] score, Injury Severity Score [ISS], and need for ventilator use) yet still demonstrated excellent discrimination between survivors and nonsurvivors (AUROC, 0.9578; 95% CI, 0.9565-0.9590). In addition, this model was validated on 665,138 patients included in NTDB 2010 (AUROC, 0.9577; 95% CI, 0.9564-0.9589). Similar results were obtained for the subset analyses.
Conclusion: This quantitative synthesis proposes a framework and a set of covariates for studying trauma mortality outcomes. Such analytic standardization may prove critical in implementing best practices aimed at improving the quality and consistency of NTDB-based research. LEVEL OF EVIDENCE: Prognostic study, level III.
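The screening step described in the Methods (univariate logistic regression per candidate covariate, retaining those with an AUROC of 0.6 or greater, then fitting a multivariable model on the survivors of the screen) can be sketched in Python. This is an illustrative outline on synthetic data, not the authors' actual analysis: the covariate names, the toy mortality model, and all numeric values below are assumptions.

```python
# Hedged sketch of the covariate-screening procedure: univariate logistic
# regression per covariate, AUROC-based screening (threshold 0.6), then a
# multivariable model on the retained covariates. All data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000

# Synthetic stand-ins for NTDB-style covariates (hypothetical values).
X = {
    "age": rng.normal(45, 20, n),
    "iss": rng.gamma(2.0, 6.0, n),                        # ISS proxy
    "gcs_total": rng.integers(3, 16, n).astype(float),    # GCS proxy
    "noise": rng.normal(0, 1, n),                         # uninformative
}
# Toy mortality model: death probability driven mainly by ISS and GCS.
logit = -6 + 0.15 * X["iss"] - 0.2 * (X["gcs_total"] - 15)
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

def univariate_auroc(x, y):
    """AUROC of a single-covariate logistic regression."""
    x = x.reshape(-1, 1)
    model = LogisticRegression(max_iter=1000).fit(x, y)
    return roc_auc_score(y, model.predict_proba(x)[:, 1])

# Screen: keep covariates whose univariate AUROC is 0.6 or greater.
screened = {name: univariate_auroc(x, y) for name, x in X.items()}
selected = [name for name, auc in screened.items() if auc >= 0.6]

# Multivariable model on the screened covariates.
Xmat = np.column_stack([X[name] for name in selected])
full = LogisticRegression(max_iter=1000).fit(Xmat, y)
auroc = roc_auc_score(y, full.predict_proba(Xmat)[:, 1])
print(sorted(selected), round(auroc, 3))
```

A full replication would add the p < 0.01 criterion, bootstrap or DeLong 95% CIs for each AUROC, manual backward selection comparing overlapping CIs, and validation on a held-out year, as the abstract describes.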

Original language: English (US)
Pages (from-to): 1061-1069
Number of pages: 9
Journal: Journal of Trauma and Acute Care Surgery
Volume: 76
Issue number: 4
DOIs
State: Published - Apr 2014

Keywords

  • NTDB
  • Risk adjustment
  • Benchmarking
  • Trauma mortality

ASJC Scopus subject areas

  • Surgery
  • Critical Care and Intensive Care Medicine
