Case Study in Evaluating Time Series Prediction Models Using the Relative Mean Absolute Error

Nicholas G. Reich, Justin T Lessler, Krzysztof Sakrejda, Stephen A. Lauer, Sopon Iamsirithaworn, Derek A T Cummings

Research output: Contribution to journal › Article

Abstract

Statistical prediction models inform decision-making processes in many real-world settings. Prior to using predictions in practice, one must rigorously test and validate candidate models to ensure that the proposed predictions have sufficient accuracy to be used in practice. In this article, we present a framework for evaluating time series predictions, which emphasizes computational simplicity and an intuitive interpretation using the relative mean absolute error metric. For a single time series, this metric enables comparisons of candidate model predictions against naïve reference models, a method that can provide useful and standardized performance benchmarks. Additionally, in applications with multiple time series, this framework facilitates comparisons of one or more models’ predictive performance across different sets of data. We illustrate the use of this metric with a case study comparing predictions of dengue hemorrhagic fever incidence in two provinces of Thailand. This example demonstrates the utility and interpretability of the relative mean absolute error metric in practice, and underscores the practical advantages of using relative performance metrics when evaluating predictions.
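The metric described above has a simple form: the mean absolute error of a candidate model's predictions divided by the MAE of a naïve reference model on the same observations, so values below 1 indicate the candidate beats the reference. A minimal sketch (illustrative only, not code from the article; the last-value naïve reference and the function name are assumptions):

```python
import numpy as np

def relative_mae(y_true, y_pred, y_ref):
    """Relative MAE: MAE of candidate predictions divided by the MAE of a
    reference (e.g., naive) model's predictions on the same observations."""
    y_true = np.asarray(y_true, dtype=float)
    mae_pred = np.mean(np.abs(y_true - np.asarray(y_pred, dtype=float)))
    mae_ref = np.mean(np.abs(y_true - np.asarray(y_ref, dtype=float)))
    return mae_pred / mae_ref

# Toy series; the naive reference predicts the previous observation.
obs = np.array([10.0, 12.0, 11.0, 15.0, 14.0])
naive = obs[:-1]                                  # y_hat[t] = y[t-1]
candidate = np.array([11.0, 11.5, 14.0, 14.5])    # some model's predictions
print(relative_mae(obs[1:], candidate, naive))    # → 0.375, beats the naive model
```

Because the metric is a ratio of errors on the same data, it can be compared across series with very different scales, such as incidence counts from different provinces.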

Original language: English (US)
Pages (from-to): 285-292
Number of pages: 8
Journal: American Statistician
Volume: 70
Issue number: 3
DOI: 10.1080/00031305.2016.1148631
State: Published - Jul 2 2016


Keywords

  • Accuracy
  • Forecasting
  • Infectious disease
  • Prediction
  • Time series

ASJC Scopus subject areas

  • Statistics and Probability
  • Mathematics (all)
  • Statistics, Probability and Uncertainty

Cite this

Case Study in Evaluating Time Series Prediction Models Using the Relative Mean Absolute Error. / Reich, Nicholas G.; Lessler, Justin T; Sakrejda, Krzysztof; Lauer, Stephen A.; Iamsirithaworn, Sopon; Cummings, Derek A T.

In: American Statistician, Vol. 70, No. 3, 02.07.2016, p. 285-292.

Reich, Nicholas G. ; Lessler, Justin T ; Sakrejda, Krzysztof ; Lauer, Stephen A. ; Iamsirithaworn, Sopon ; Cummings, Derek A T. / Case Study in Evaluating Time Series Prediction Models Using the Relative Mean Absolute Error. In: American Statistician. 2016 ; Vol. 70, No. 3. pp. 285-292.
@article{ba7a0190ea104e5885947b1e575e72eb,
title = "Case Study in Evaluating Time Series Prediction Models Using the Relative Mean Absolute Error",
abstract = "Statistical prediction models inform decision-making processes in many real-world settings. Prior to using predictions in practice, one must rigorously test and validate candidate models to ensure that the proposed predictions have sufficient accuracy to be used in practice. In this article, we present a framework for evaluating time series predictions, which emphasizes computational simplicity and an intuitive interpretation using the relative mean absolute error metric. For a single time series, this metric enables comparisons of candidate model predictions against na{\"i}ve reference models, a method that can provide useful and standardized performance benchmarks. Additionally, in applications with multiple time series, this framework facilitates comparisons of one or more models’ predictive performance across different sets of data. We illustrate the use of this metric with a case study comparing predictions of dengue hemorrhagic fever incidence in two provinces of Thailand. This example demonstrates the utility and interpretability of the relative mean absolute error metric in practice, and underscores the practical advantages of using relative performance metrics when evaluating predictions.",
keywords = "Accuracy, Forecasting, Infectious disease, Prediction, Time series",
author = "Reich, {Nicholas G.} and Lessler, {Justin T} and Krzysztof Sakrejda and Lauer, {Stephen A.} and Sopon Iamsirithaworn and Cummings, {Derek A T}",
year = "2016",
month = "7",
day = "2",
doi = "10.1080/00031305.2016.1148631",
language = "English (US)",
volume = "70",
pages = "285--292",
journal = "American Statistician",
issn = "0003-1305",
publisher = "American Statistical Association",
number = "3",
}

TY - JOUR

T1 - Case Study in Evaluating Time Series Prediction Models Using the Relative Mean Absolute Error

AU - Reich, Nicholas G.

AU - Lessler, Justin T

AU - Sakrejda, Krzysztof

AU - Lauer, Stephen A.

AU - Iamsirithaworn, Sopon

AU - Cummings, Derek A T

PY - 2016/7/2

Y1 - 2016/7/2

N2 - Statistical prediction models inform decision-making processes in many real-world settings. Prior to using predictions in practice, one must rigorously test and validate candidate models to ensure that the proposed predictions have sufficient accuracy to be used in practice. In this article, we present a framework for evaluating time series predictions, which emphasizes computational simplicity and an intuitive interpretation using the relative mean absolute error metric. For a single time series, this metric enables comparisons of candidate model predictions against naïve reference models, a method that can provide useful and standardized performance benchmarks. Additionally, in applications with multiple time series, this framework facilitates comparisons of one or more models’ predictive performance across different sets of data. We illustrate the use of this metric with a case study comparing predictions of dengue hemorrhagic fever incidence in two provinces of Thailand. This example demonstrates the utility and interpretability of the relative mean absolute error metric in practice, and underscores the practical advantages of using relative performance metrics when evaluating predictions.

AB - Statistical prediction models inform decision-making processes in many real-world settings. Prior to using predictions in practice, one must rigorously test and validate candidate models to ensure that the proposed predictions have sufficient accuracy to be used in practice. In this article, we present a framework for evaluating time series predictions, which emphasizes computational simplicity and an intuitive interpretation using the relative mean absolute error metric. For a single time series, this metric enables comparisons of candidate model predictions against naïve reference models, a method that can provide useful and standardized performance benchmarks. Additionally, in applications with multiple time series, this framework facilitates comparisons of one or more models’ predictive performance across different sets of data. We illustrate the use of this metric with a case study comparing predictions of dengue hemorrhagic fever incidence in two provinces of Thailand. This example demonstrates the utility and interpretability of the relative mean absolute error metric in practice, and underscores the practical advantages of using relative performance metrics when evaluating predictions.

KW - Accuracy

KW - Forecasting

KW - Infectious disease

KW - Prediction

KW - Time series

UR - http://www.scopus.com/inward/record.url?scp=84981731393&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84981731393&partnerID=8YFLogxK

U2 - 10.1080/00031305.2016.1148631

DO - 10.1080/00031305.2016.1148631

M3 - Article

C2 - 28138198

AN - SCOPUS:84981731393

VL - 70

SP - 285

EP - 292

JO - American Statistician

JF - American Statistician

SN - 0003-1305

IS - 3

ER -