Estimating properties of the fast and slow adaptive processes during sensorimotor adaptation

Scott T. Albert, Reza Shadmehr

Research output: Contribution to journal › Article

Abstract

Experience of a prediction error recruits multiple motor learning processes, some that learn strongly from error but have weak retention and some that learn weakly from error but exhibit strong retention. These processes are not generally observable but are inferred from their collective influence on behavior. Is there a robust way to uncover the hidden processes? A standard approach is to consider a state space model where the hidden states change following experience of error and then fit the model to the measured data by minimizing the squared error between measurement and model prediction. We found that this least-squares algorithm (LMSE) often yielded unrealistic predictions about the hidden states, possibly because of its neglect of the stochastic nature of error-based learning. We found that behavioral data during adaptation was better explained by a system in which both error-based learning and movement production were stochastic processes. To uncover the hidden states of learning, we developed a generalized expectation maximization (EM) algorithm. In simulation, we found that although LMSE tracked the measured data marginally better than EM, EM was far more accurate in unmasking the time courses and properties of the hidden states of learning. In a power analysis designed to measure the effect of an intervention on sensorimotor learning, EM significantly reduced the number of subjects that were required for effective hypothesis testing. In summary, we developed a new approach for analysis of data in sensorimotor experiments. The new algorithm improved the ability to uncover the multiple processes that contribute to learning from error.

NEW & NOTEWORTHY Motor learning is supported by multiple adaptive processes, each with distinct error sensitivity and forgetting rates. We developed a generalized expectation maximization algorithm that uncovers these hidden processes in the context of modern sensorimotor learning experiments that include error-clamp trials and set breaks. The resulting toolbox may improve the ability to identify the properties of these hidden processes and reduce the number of subjects needed to test the effectiveness of interventions on sensorimotor learning.
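The fast/slow architecture described in the abstract corresponds to the standard two-state state-space model of adaptation. The sketch below simulates that model under a constant perturbation; the parameter values are illustrative assumptions, not values from the paper. Setting the noise terms above zero gives the stochastic variant of the model (noisy learning and noisy movement production) that the abstract argues better explains behavior.

```python
import numpy as np

# Minimal sketch of a two-state model of adaptation (standard form;
# all parameter values here are hypothetical, chosen for illustration).
# Fast process: strong error sensitivity (b_f) but weak retention (a_f).
# Slow process: weak error sensitivity (b_s) but strong retention (a_s).
rng = np.random.default_rng(0)

a_f, b_f = 0.60, 0.30        # fast state: weak retention, learns strongly
a_s, b_s = 0.99, 0.05        # slow state: strong retention, learns weakly
sigma_x, sigma_y = 0.0, 0.0  # set > 0 for stochastic learning / motor noise

n_trials = 200
r = np.ones(n_trials)        # constant perturbation of unit size
x_f = np.zeros(n_trials + 1)
x_s = np.zeros(n_trials + 1)
y = np.zeros(n_trials)

for n in range(n_trials):
    # Motor output is the sum of the two hidden states (plus motor noise).
    y[n] = x_f[n] + x_s[n] + sigma_y * rng.standard_normal()
    e = r[n] - y[n]  # prediction error on this trial
    # Each state decays by its retention factor and learns from the error.
    x_f[n + 1] = a_f * x_f[n] + b_f * e + sigma_x * rng.standard_normal()
    x_s[n + 1] = a_s * x_s[n] + b_s * e + sigma_x * rng.standard_normal()

# Late in adaptation the slow state carries most of the adaptation,
# even though only the summed output y is observable.
```

With the noise terms at zero, the output converges to the deterministic steady state y* = 1 − e*, where e* = 1 / (1 + b_f/(1−a_f) + b_s/(1−a_s)); fitting only y while recovering x_f and x_s is the inference problem the paper's EM algorithm addresses.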

Original language: English (US)
Pages (from-to): 1367-1393
Number of pages: 27
Journal: Journal of Neurophysiology
Volume: 119
Issue number: 4
DOI: 10.1152/jn.00197.2017
State: Published - Apr 1 2018


Keywords

  • Expectation maximization
  • Motor learning
  • Two-state model

ASJC Scopus subject areas

  • Neuroscience(all)
  • Physiology

Cite this

Estimating properties of the fast and slow adaptive processes during sensorimotor adaptation. / Albert, Scott T.; Shadmehr, Reza.

In: Journal of Neurophysiology, Vol. 119, No. 4, 01.04.2018, p. 1367-1393.

Research output: Contribution to journal › Article

@article{4f531c939b0d4b848b5ac72b8c761ef1,
title = "Estimating properties of the fast and slow adaptive processes during sensorimotor adaptation",
abstract = "Experience of a prediction error recruits multiple motor learning processes, some that learn strongly from error but have weak retention and some that learn weakly from error but exhibit strong retention. These processes are not generally observable but are inferred from their collective influence on behavior. Is there a robust way to uncover the hidden processes? A standard approach is to consider a state space model where the hidden states change following experience of error and then fit the model to the measured data by minimizing the squared error between measurement and model prediction. We found that this least-squares algorithm (LMSE) often yielded unrealistic predictions about the hidden states, possibly because of its neglect of the stochastic nature of error-based learning. We found that behavioral data during adaptation was better explained by a system in which both error-based learning and movement production were stochastic processes. To uncover the hidden states of learning, we developed a generalized expectation maximization (EM) algorithm. In simulation, we found that although LMSE tracked the measured data marginally better than EM, EM was far more accurate in unmasking the time courses and properties of the hidden states of learning. In a power analysis designed to measure the effect of an intervention on sensorimotor learning, EM significantly reduced the number of subjects that were required for effective hypothesis testing. In summary, we developed a new approach for analysis of data in sensorimotor experiments. The new algorithm improved the ability to uncover the multiple processes that contribute to learning from error. NEW & NOTEWORTHY Motor learning is supported by multiple adaptive processes, each with distinct error sensitivity and forgetting rates. We developed a generalized expectation maximization algorithm that uncovers these hidden processes in the context of modern sensorimotor learning experiments that include error-clamp trials and set breaks. The resulting toolbox may improve the ability to identify the properties of these hidden processes and reduce the number of subjects needed to test the effectiveness of interventions on sensorimotor learning.",
keywords = "Expectation maximization, Motor learning, Two-state model",
author = "Albert, {Scott T.} and Reza Shadmehr",
year = "2018",
month = "4",
day = "1",
doi = "10.1152/jn.00197.2017",
language = "English (US)",
volume = "119",
pages = "1367--1393",
journal = "Journal of Neurophysiology",
issn = "0022-3077",
publisher = "American Physiological Society",
number = "4",

}

TY - JOUR

T1 - Estimating properties of the fast and slow adaptive processes during sensorimotor adaptation

AU - Albert, Scott T.

AU - Shadmehr, Reza

PY - 2018/4/1

Y1 - 2018/4/1

N2 - Experience of a prediction error recruits multiple motor learning processes, some that learn strongly from error but have weak retention and some that learn weakly from error but exhibit strong retention. These processes are not generally observable but are inferred from their collective influence on behavior. Is there a robust way to uncover the hidden processes? A standard approach is to consider a state space model where the hidden states change following experience of error and then fit the model to the measured data by minimizing the squared error between measurement and model prediction. We found that this least-squares algorithm (LMSE) often yielded unrealistic predictions about the hidden states, possibly because of its neglect of the stochastic nature of error-based learning. We found that behavioral data during adaptation was better explained by a system in which both error-based learning and movement production were stochastic processes. To uncover the hidden states of learning, we developed a generalized expectation maximization (EM) algorithm. In simulation, we found that although LMSE tracked the measured data marginally better than EM, EM was far more accurate in unmasking the time courses and properties of the hidden states of learning. In a power analysis designed to measure the effect of an intervention on sensorimotor learning, EM significantly reduced the number of subjects that were required for effective hypothesis testing. In summary, we developed a new approach for analysis of data in sensorimotor experiments. The new algorithm improved the ability to uncover the multiple processes that contribute to learning from error. NEW & NOTEWORTHY Motor learning is supported by multiple adaptive processes, each with distinct error sensitivity and forgetting rates. We developed a generalized expectation maximization algorithm that uncovers these hidden processes in the context of modern sensorimotor learning experiments that include error-clamp trials and set breaks. The resulting toolbox may improve the ability to identify the properties of these hidden processes and reduce the number of subjects needed to test the effectiveness of interventions on sensorimotor learning.

AB - Experience of a prediction error recruits multiple motor learning processes, some that learn strongly from error but have weak retention and some that learn weakly from error but exhibit strong retention. These processes are not generally observable but are inferred from their collective influence on behavior. Is there a robust way to uncover the hidden processes? A standard approach is to consider a state space model where the hidden states change following experience of error and then fit the model to the measured data by minimizing the squared error between measurement and model prediction. We found that this least-squares algorithm (LMSE) often yielded unrealistic predictions about the hidden states, possibly because of its neglect of the stochastic nature of error-based learning. We found that behavioral data during adaptation was better explained by a system in which both error-based learning and movement production were stochastic processes. To uncover the hidden states of learning, we developed a generalized expectation maximization (EM) algorithm. In simulation, we found that although LMSE tracked the measured data marginally better than EM, EM was far more accurate in unmasking the time courses and properties of the hidden states of learning. In a power analysis designed to measure the effect of an intervention on sensorimotor learning, EM significantly reduced the number of subjects that were required for effective hypothesis testing. In summary, we developed a new approach for analysis of data in sensorimotor experiments. The new algorithm improved the ability to uncover the multiple processes that contribute to learning from error. NEW & NOTEWORTHY Motor learning is supported by multiple adaptive processes, each with distinct error sensitivity and forgetting rates. We developed a generalized expectation maximization algorithm that uncovers these hidden processes in the context of modern sensorimotor learning experiments that include error-clamp trials and set breaks. The resulting toolbox may improve the ability to identify the properties of these hidden processes and reduce the number of subjects needed to test the effectiveness of interventions on sensorimotor learning.

KW - Expectation maximization

KW - Motor learning

KW - Two-state model

UR - http://www.scopus.com/inward/record.url?scp=85045410541&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85045410541&partnerID=8YFLogxK

U2 - 10.1152/jn.00197.2017

DO - 10.1152/jn.00197.2017

M3 - Article

VL - 119

SP - 1367

EP - 1393

JO - Journal of Neurophysiology

JF - Journal of Neurophysiology

SN - 0022-3077

IS - 4

ER -