Case Study in Evaluating Time Series Prediction Models Using the Relative Mean Absolute Error

Nicholas G. Reich, Justin Lessler, Krzysztof Sakrejda, Stephen A. Lauer, Sopon Iamsirithaworn, Derek A.T. Cummings

Research output: Contribution to journal › Article › peer-review

12 Scopus citations


Statistical prediction models inform decision-making processes in many real-world settings. Before predictions are used in practice, one must rigorously test and validate candidate models to ensure that the proposed predictions are sufficiently accurate. In this article, we present a framework for evaluating time series predictions that emphasizes computational simplicity and an intuitive interpretation using the relative mean absolute error metric. For a single time series, this metric enables comparisons of candidate model predictions against naïve reference models, a method that can provide useful and standardized performance benchmarks. Additionally, in applications with multiple time series, this framework facilitates comparisons of one or more models’ predictive performance across different sets of data. We illustrate the use of this metric with a case study comparing predictions of dengue hemorrhagic fever incidence in two provinces of Thailand. This example demonstrates the utility and interpretability of the relative mean absolute error metric in practice, and underscores the practical advantages of using relative performance metrics when evaluating predictions.
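The metric described above can be sketched in a few lines. This is a minimal illustration, assuming the common definition of relative MAE as the ratio MAE(candidate) / MAE(reference), where the reference is a naïve persistence forecast (each prediction is the previous observed value); the incidence values below are hypothetical, not from the paper's case study.

```python
def mae(predictions, observations):
    """Mean absolute error between paired predictions and observations."""
    return sum(abs(p - o) for p, o in zip(predictions, observations)) / len(observations)

def relative_mae(candidate_preds, observations, reference_preds):
    """Relative MAE: values below 1 mean the candidate beats the reference model."""
    return mae(candidate_preds, observations) / mae(reference_preds, observations)

# Toy monthly incidence series (hypothetical values, for illustration only).
observed  = [12, 15, 14, 20, 18, 25]
candidate = [13, 14, 15, 19, 19, 24]   # some fitted model's one-step-ahead predictions
naive     = [10, 12, 15, 14, 20, 18]   # persistence forecast: previous observation

print(round(relative_mae(candidate, observed, naive), 3))  # prints 0.286
```

Because the relative MAE is a unitless ratio, the same benchmark (performance relative to a naïve model) can be compared across series with very different scales, such as incidence counts from two provinces.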

Original language: English (US)
Pages (from-to): 285-292
Number of pages: 8
Journal: American Statistician
Issue number: 3
State: Published - Jul 2 2016


Keywords

  • Accuracy
  • Forecasting
  • Infectious disease
  • Prediction
  • Time series

ASJC Scopus subject areas

  • Statistics and Probability
  • Mathematics(all)
  • Statistics, Probability and Uncertainty


