Statistical analysis in Small-N Designs: using linear mixed-effects modeling for evaluating intervention effectiveness

Robert W. Wiley, Brenda C. Rapp

Research output: Contribution to journal › Article

Abstract

Background: Advances in statistical methods and computing power have led to a renewed interest in addressing the statistical analysis challenges posed by Small-N Designs (SND). Linear mixed-effects modeling (LMEM) is a multiple regression technique that is flexible and suitable for SND and can provide standardized effect sizes and measures of statistical significance. Aims: Our primary goals are to: 1) explain LMEM at the conceptual level, situating it in the context of treatment studies, and 2) provide practical guidance for implementing LMEM in repeated measures SND. Methods & procedures: We illustrate an LMEM analysis, presenting data from a longitudinal training study of five individuals with acquired dysgraphia, analyzing both binomial (accuracy) and continuous (reaction time) repeated measurements. Outcomes & results: The LMEM analysis reveals that both spelling accuracy and reaction time improved and, for accuracy, improved significantly more quickly under a training schedule with distributed, compared to clustered, practice. We present guidance on obtaining and interpreting various effect sizes and measures of statistical significance from LMEM, and include a simulation study comparing two p-value methods for generalized LMEM. Conclusion: We provide a strong case for the application of LMEM to the analysis of training studies as a preferable alternative to visual analysis or other statistical techniques. When applied to a treatment dataset, the evidence supports that the approach holds up under the extreme conditions of small numbers of individuals, with repeated measures training data for both continuous (reaction time) and binomially distributed (accuracy) dependent measures. The approach provides standardized measures of effect sizes that are obtained through readily available and well-supported statistical packages, and provides statistically rigorous estimates of the expected average effect size of training effects, taking into account variability across both items and individuals.
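
To make the modeling approach concrete, the sketch below shows, purely as an illustration and not the authors' analysis code, how a linear mixed-effects model of this kind could be fit in Python with statsmodels. It assumes a hypothetical long-format data frame with columns participant, item, session, schedule, and rt, and models the continuous (reaction-time) measure with fixed effects of session, training schedule, and their interaction, plus by-participant random intercepts and slopes. The published analysis additionally accounts for variability across items (crossed random effects) and fits a generalized (logistic) mixed model for the binomial accuracy data; those pieces are only noted in comments here, since crossed random effects and frequentist logistic mixed models are more directly supported in other tooling (e.g., lme4 in R).

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per trial, with columns for
# participant, item, session (time point), training schedule, and RT (ms).
rng = np.random.default_rng(2018)
n_part, n_item, n_sess = 5, 40, 10
rows = [(p, i, s)
        for p in range(1, n_part + 1)
        for i in range(1, n_item + 1)
        for s in range(1, n_sess + 1)]
df = pd.DataFrame(rows, columns=["participant", "item", "session"])
df["schedule"] = np.where(df["item"] <= n_item // 2, "distributed", "clustered")

# Simulated reaction times that speed up over sessions, with random
# between-participant offsets and trial-level noise (illustration only).
part_offset = dict(zip(range(1, n_part + 1), rng.normal(0, 200, n_part)))
df["rt"] = (2000 - 40 * df["session"]
            + df["participant"].map(part_offset)
            + rng.normal(0, 150, len(df)))

# Linear mixed-effects model for the continuous measure (RT):
# fixed effects of session, schedule, and their interaction;
# by-participant random intercepts and random slopes for session.
# NOTE: this sketch uses a single grouping factor (participant); the paper
# also models item variability, and the binomial accuracy measure would
# call for a generalized (logistic) mixed model instead.
rt_model = smf.mixedlm("rt ~ session * schedule", df,
                       groups=df["participant"], re_formula="~session")
rt_fit = rt_model.fit(method="lbfgs", reml=True)
print(rt_fit.summary())   # fixed-effect estimates and variance components
print(rt_fit.params)      # coefficients, including the session-by-schedule term

The same formula interface extends naturally to additional random slopes and, in tools that support crossed random effects directly (e.g., lme4 in R), to by-item random effects and the logistic mixed model needed for accuracy; the fixed-effect table of the fitted model is where the effect-size estimates and significance tests discussed in the paper are read off.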

Original language: English (US)
Pages (from-to): 1-30
Number of pages: 30
Journal: Aphasiology
DOI: 10.1080/02687038.2018.1454884
State: Accepted/In press - Mar 23, 2018

Keywords

  • dysgraphia
  • mixed-effects
  • Small-N designs
  • treatment study
  • tutorial

ASJC Scopus subject areas

  • Otorhinolaryngology
  • Language and Linguistics
  • Developmental and Educational Psychology
  • Linguistics and Language
  • Neurology
  • Clinical Neurology
  • LPN and LVN

Cite this

Statistical analysis in Small-N Designs: using linear mixed-effects modeling for evaluating intervention effectiveness. / Wiley, Robert W.; Rapp, Brenda C.

In: Aphasiology, 23.03.2018, p. 1-30.

Research output: Contribution to journal › Article

@article{9f96d17e422747cdb78915ae5cb5112c,
title = "Statistical analysis in Small-N Designs: using linear mixed-effects modeling for evaluating intervention effectiveness",
abstract = "Background: Advances in statistical methods and computing power have led to a renewed interest in addressing the statistical analysis challenges posed by Small-N Designs (SND). Linear mixed-effects modeling (LMEM) is a multiple regression technique that is flexible and suitable for SND and can provide standardized effect sizes and measures of statistical significance. Aims: Our primary goals are to: 1) explain LMEM at the conceptual level, situating it in the context of treatment studies, and 2) provide practical guidance for implementing LMEM in repeated measures SND. Methods & procedures: We illustrate an LMEM analysis, presenting data from a longitudinal training study of five individuals with acquired dysgraphia, analyzing both binomial (accuracy) and continuous (reaction time) repeated measurements. Outcomes & results: The LMEM analysis reveals that both spelling accuracy and reaction time improved and, for accuracy, improved significantly more quickly under a training schedule with distributed, compared to clustered, practice. We present guidance on obtaining and interpreting various effect sizes and measures of statistical significance from LMEM, and include a simulation study comparing two p-value methods for generalized LMEM. Conclusion: We provide a strong case for the application of LMEM to the analysis of training studies as a preferable alternative to visual analysis or other statistical techniques. When applied to a treatment dataset, the evidence supports that the approach holds up under the extreme conditions of small numbers of individuals, with repeated measures training data for both continuous (reaction time) and binomially distributed (accuracy) dependent measures. The approach provides standardized measures of effect sizes that are obtained through readily available and well-supported statistical packages, and provides statistically rigorous estimates of the expected average effect size of training effects, taking into account variability across both items and individuals.",
keywords = "dysgraphia, mixed-effects, Small-N designs, treatment study, tutorial",
author = "Wiley, {Robert W.} and Rapp, {Brenda C}",
year = "2018",
month = "3",
day = "23",
doi = "10.1080/02687038.2018.1454884",
language = "English (US)",
pages = "1--30",
journal = "Aphasiology",
issn = "0268-7038",
publisher = "Psychology Press Ltd",

}

TY - JOUR

T1 - Statistical analysis in Small-N Designs

T2 - using linear mixed-effects modeling for evaluating intervention effectiveness

AU - Wiley, Robert W.

AU - Rapp, Brenda C.

PY - 2018/3/23

Y1 - 2018/3/23

N2 - Background: Advances in statistical methods and computing power have led to a renewed interest in addressing the statistical analysis challenges posed by Small-N Designs (SND). Linear mixed-effects modeling (LMEM) is a multiple regression technique that is flexible and suitable for SND and can provide standardized effect sizes and measures of statistical significance. Aims: Our primary goals are to: 1) explain LMEM at the conceptual level, situating it in the context of treatment studies, and 2) provide practical guidance for implementing LMEM in repeated measures SND. Methods & procedures: We illustrate an LMEM analysis, presenting data from a longitudinal training study of five individuals with acquired dysgraphia, analyzing both binomial (accuracy) and continuous (reaction time) repeated measurements. Outcomes & results: The LMEM analysis reveals that both spelling accuracy and reaction time improved and, for accuracy, improved significantly more quickly under a training schedule with distributed, compared to clustered, practice. We present guidance on obtaining and interpreting various effect sizes and measures of statistical significance from LMEM, and include a simulation study comparing two p-value methods for generalized LMEM. Conclusion: We provide a strong case for the application of LMEM to the analysis of training studies as a preferable alternative to visual analysis or other statistical techniques. When applied to a treatment dataset, the evidence supports that the approach holds up under the extreme conditions of small numbers of individuals, with repeated measures training data for both continuous (reaction time) and binomially distributed (accuracy) dependent measures. The approach provides standardized measures of effect sizes that are obtained through readily available and well-supported statistical packages, and provides statistically rigorous estimates of the expected average effect size of training effects, taking into account variability across both items and individuals.

AB - Background: Advances in statistical methods and computing power have led to a renewed interest in addressing the statistical analysis challenges posed by Small-N Designs (SND). Linear mixed-effects modeling (LMEM) is a multiple regression technique that is flexible and suitable for SND and can provide standardized effect sizes and measures of statistical significance. Aims: Our primary goals are to: 1) explain LMEM at the conceptual level, situating it in the context of treatment studies, and 2) provide practical guidance for implementing LMEM in repeated measures SND. Methods & procedures: We illustrate an LMEM analysis, presenting data from a longitudinal training study of five individuals with acquired dysgraphia, analyzing both binomial (accuracy) and continuous (reaction time) repeated measurements. Outcomes & results: The LMEM analysis reveals that both spelling accuracy and reaction time improved and, for accuracy, improved significantly more quickly under a training schedule with distributed, compared to clustered, practice. We present guidance on obtaining and interpreting various effect sizes and measures of statistical significance from LMEM, and include a simulation study comparing two p-value methods for generalized LMEM. Conclusion: We provide a strong case for the application of LMEM to the analysis of training studies as a preferable alternative to visual analysis or other statistical techniques. When applied to a treatment dataset, the evidence supports that the approach holds up under the extreme conditions of small numbers of individuals, with repeated measures training data for both continuous (reaction time) and binomially distributed (accuracy) dependent measures. The approach provides standardized measures of effect sizes that are obtained through readily available and well-supported statistical packages, and provides statistically rigorous estimates of the expected average effect size of training effects, taking into account variability across both items and individuals.

KW - dysgraphia

KW - mixed-effects

KW - Small-N designs

KW - treatment study

KW - tutorial

UR - http://www.scopus.com/inward/record.url?scp=85044268721&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85044268721&partnerID=8YFLogxK

U2 - 10.1080/02687038.2018.1454884

DO - 10.1080/02687038.2018.1454884

M3 - Article

AN - SCOPUS:85044268721

SP - 1

EP - 30

JO - Aphasiology

JF - Aphasiology

SN - 0268-7038

ER -