Stable Representations of Decision Variables for Flexible Behavior

Bilal A. Bari, Cooper D. Grossman, Emily E. Lubin, Adithya E. Rajagopalan, Jianna I. Cressy, Jeremiah Y. Cohen

Research output: Contribution to journal › Article › peer-review


Abstract

Decisions occur in dynamic environments. In the framework of reinforcement learning, the probability of performing an action is influenced by decision variables. Discrepancies between predicted and obtained rewards (reward prediction errors) update these variables, but they are otherwise stable between decisions. Although reward prediction errors have been mapped to midbrain dopamine neurons, it is unclear how the brain represents decision variables themselves. We trained mice on a dynamic foraging task in which they chose between alternatives that delivered reward with changing probabilities. Neurons in the medial prefrontal cortex, including projections to the dorsomedial striatum, maintained persistent firing rate changes over long timescales. These changes stably represented relative action values (to bias choices) and total action values (to bias response times) with slow decay. In contrast, decision variables were weakly represented in the anterolateral motor cortex, a region necessary for generating choices. Thus, we define a stable neural mechanism to drive flexible behavior.

Flexible behavior requires a memory of previous interactions with the environment. The medial prefrontal cortex persistently represents value-based decision variables, bridging the time between choices. These decision variables are sent to the dorsomedial striatum to bias action selection.
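The reinforcement-learning framework summarized above can be illustrated with a minimal sketch of a dynamic foraging agent. This is not the authors' fitted model: it assumes a standard delta-rule update driven by reward prediction errors, a softmax over relative action values to bias choice, slow decay of values between trials, and an illustrative inverse relation between total value and response time. All parameter values (alpha, decay, beta, block length) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (illustrative only, not fitted to the paper's data)
alpha = 0.4       # learning rate applied to reward prediction errors
decay = 0.95      # slow decay of action values between decisions
beta = 5.0        # inverse temperature: sensitivity of choice to relative value
n_trials = 500

# Two alternatives deliver reward with changing probabilities (block reversals)
p_reward = np.array([0.7, 0.1])

q = np.zeros(2)   # action values for the two alternatives
for t in range(n_trials):
    if t > 0 and t % 100 == 0:
        p_reward = p_reward[::-1]          # block switch: probabilities reverse

    # Relative action value biases choice via a softmax (logistic for two options)
    p_choose_right = 1.0 / (1.0 + np.exp(-beta * (q[1] - q[0])))
    choice = int(rng.random() < p_choose_right)

    # Total action value biases response time (illustrative inverse relation)
    response_time = 1.0 / (0.5 + q.sum())

    # Reward prediction error updates the chosen action's value
    reward = float(rng.random() < p_reward[choice])
    rpe = reward - q[choice]
    q[choice] += alpha * rpe

    # Values are otherwise stable, decaying slowly between decisions
    q *= decay
```

In this sketch, `q[1] - q[0]` plays the role of the relative action value that biases choice, and `q.sum()` plays the role of the total action value that modulates response time; the paper's claim is that medial prefrontal cortex neurons persistently represent quantities of this kind between decisions.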

Original language: English (US)
Pages (from-to): 922-933.e7
Journal: Neuron
Volume: 103
Issue number: 5
DOIs
State: Published - Sep 4 2019

ASJC Scopus subject areas

  • General Neuroscience
