As we move through the world, the brain combines information from multiple sources to allow us to perceive our self-motion. The vestibular system detects and encodes the motion of the head in space. In addition, extra-vestibular cues such as retinal-image motion (optic flow), proprioception, and motor efference signals provide valuable motion information. Here I focus on the coding strategies used by the brain to create neural representations of self-motion. I review recent studies comparing the detection thresholds of single vestibular afferent and central neurons with those of neuronal populations. I then consider recent advances in understanding the brain's strategy for combining information from the vestibular sensors with extra-vestibular cues to estimate self-motion. These studies emphasize the need to consider not only the rules by which multiple inputs are combined, but also how differences in behavioral context determine what constitutes the optimal computation.