The utility of multivariate outlier detection techniques for data quality evaluation in large studies: An application within the ONDRI project

Kelly M. Sunderland, Derek Beaton, Julia Fraser, Donna Kwan, Paula M. McLaughlin, Manuel Montero-Odasso, Alicia J. Peltsch, Frederico Pieruccini-Faria, Demetrios J. Sahlas, Richard H. Swartz, Robert Bartha, Sandra E. Black, Michael Borrie, Dale Corbett, Elizabeth Finger, Morris Freedman, Barry Greenberg, David A. Grimes, Robert A. Hegele, Chris Hudson, Anthony E. Lang, Mario Masellis, William E. McIlroy, David G. Munoz, Douglas P. Munoz, J. B. Orange, Michael J. Strong, Sean Symons, Maria Carmela Tartaglia, Angela Troyer, Lorne Zinman, Stephen C. Strother, Malcolm A. Binns

Research output: Contribution to journal › Article › peer-review

16 Scopus citations


Background: Large and complex studies are now routine, and quality assurance and quality control (QC) procedures are essential for reliable results and conclusions. Standard procedures may comprise manual verification and double entry, but these labour-intensive methods often leave errors undetected. Outlier detection uses a data-driven approach to identify patterns exhibited by the majority of the data and highlights data points that deviate from these patterns. Univariate methods consider each variable independently, so observations that appear odd only when two or more variables are considered simultaneously remain undetected. We propose a data quality evaluation process that emphasizes the use of multivariate outlier detection for identifying errors, and show that univariate approaches alone are insufficient. Further, we establish an iterative process that uses multiple multivariate approaches, communication between teams, and visualization for other large-scale projects to follow.

Methods: We illustrate this process with preliminary neuropsychology and gait data for the vascular cognitive impairment cohort from the Ontario Neurodegenerative Disease Research Initiative, a multi-cohort observational study that aims to characterize biomarkers within and between five neurodegenerative diseases. Each dataset was evaluated four times: with and without covariate adjustment using two validated multivariate methods - Minimum Covariance Determinant (MCD) and Candès' Robust Principal Component Analysis (RPCA) - and results were assessed in relation to two univariate methods. Outlying participants identified by multiple multivariate analyses were compiled and communicated to the data teams for verification.

Results: Of 161 and 148 participants in the neuropsychology and gait data, 44 and 43 were flagged by one or both multivariate methods, and errors were identified for 8 and 5 participants, respectively. MCD identified all participants with errors, while RPCA identified 6/8 and 3/5 for the neuropsychology and gait data, respectively. Both outperformed univariate approaches. Adjusting for covariates had a minor effect on which participants were identified as outliers, though it did affect error detection.

Conclusions: Manual QC procedures are insufficient for large studies, as many errors remain undetected. In these data, the MCD outperforms the RPCA for identifying errors, and both are more successful than univariate approaches. Therefore, data-driven multivariate outlier techniques are essential tools for QC as data become more complex.
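To make the MCD-based step of the Methods concrete, here is a minimal sketch (not the authors' code) of flagging multivariate outliers with the Minimum Covariance Determinant using scikit-learn's `MinCovDet`. The simulated data, the injected "entry errors", and the chi-square cutoff at the 97.5% quantile are illustrative assumptions; the key point is that the injected points are unremarkable on each variable alone but deviate strongly from the joint (correlated) pattern.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(0)
# Simulated bivariate data: strongly positively correlated variables,
# mimicking two related test scores.
X = rng.multivariate_normal([0, 0], [[1.0, 0.9], [0.9, 1.0]], size=200)
# Inject three hypothetical data-entry errors: each value is plausible
# univariately, but the pair breaks the correlation structure.
X[:3] = [[2.5, -2.5], [-2.0, 2.0], [3.0, -1.5]]

# Fit the robust MCD location/scatter estimate and compute squared
# robust Mahalanobis distances for every participant.
mcd = MinCovDet(random_state=0).fit(X)
d2 = mcd.mahalanobis(X)

# Flag points beyond the chi-square 97.5% quantile (df = n variables),
# a common cutoff for robust distances.
cutoff = chi2.ppf(0.975, df=X.shape[1])
outliers = np.flatnonzero(d2 > cutoff)
print(outliers)
```

In a QC workflow like the one described, the flagged indices would be compiled and sent back to the data teams for source verification rather than deleted outright.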

Original language: English (US)
Article number: 102
Journal: BMC Medical Research Methodology
Issue number: 1
State: Published - May 15, 2019


Keywords

  • Minimum covariance determinant
  • Multivariate outliers
  • Principal component analysis
  • Quality control
  • Visualization

ASJC Scopus subject areas

  • Epidemiology
  • Health Informatics


