

Jeremy C. Borniger and Luis de Lecea

The hypocretins (also known as orexins) are selectively expressed in a subset of lateral hypothalamic neurons. Since the reports of their discovery in 1998, they have been intensely investigated in relation to their role in sleep/wake transitions, feeding, reward, drug abuse, and motivated behavior. This research has cemented their role as a subcortical relay optimized to tune arousal in response to various salient stimuli. This article reviews their discovery, physiological modulation, circuitry, and integrative functionality contributing to vigilance state transitions and stability. Specific emphasis is placed on humoral and neural inputs regulating hypocretin (Hcrt) neuron function and on new evidence for an autoimmune basis of the sleep disorder narcolepsy. Future directions for this field involve dissecting the heterogeneity of this neural population using single-cell transcriptomics, optogenetics, and chemogenetics, as well as monitoring population and single-cell activity. Computational models of the hypocretin network, using the “flip-flop” or “integrator neuron” frameworks, provide a fundamental understanding of how this neural population influences brain-wide activity and behavior.
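The “flip-flop” framework mentioned above can be illustrated with a toy rate model: two mutually inhibitory populations (wake- and sleep-promoting) form a bistable switch, and hypocretin is modeled as an extra excitatory drive that stabilizes the wake-dominant state. This is a minimal illustrative sketch, not a model from the article; the `flip_flop` function and all parameter values are hypothetical.

```python
import numpy as np

def flip_flop(hcrt_drive=0.0, noise=0.15, steps=5000, dt=0.01, seed=0):
    """Toy sleep/wake flip-flop: two mutually inhibitory rate units.
    `hcrt_drive` is an illustrative stand-in for hypocretin input that
    stabilizes the wake-promoting side. Returns the fraction of time
    the wake unit dominates."""
    rng = np.random.default_rng(seed)
    sig = lambda x: 1.0 / (1.0 + np.exp(-x))
    wake, sleep = 0.5, 0.5
    wake_steps = 0
    for _ in range(steps):
        # Each unit is excited by a fixed bias and inhibited by the other;
        # small noise perturbs the dynamics.
        dw = -wake + sig(2.0 - 4.0 * sleep + hcrt_drive) + noise * rng.standard_normal()
        ds = -sleep + sig(2.0 - 4.0 * wake) + noise * rng.standard_normal()
        wake += dt * dw
        sleep += dt * ds
        wake_steps += wake > sleep
    return wake_steps / steps
```

Increasing `hcrt_drive` biases the bistable switch toward the wake attractor, loosely mirroring the idea that hypocretin input stabilizes wakefulness and that its loss (as in narcolepsy) destabilizes the switch.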


Kenway Louie and Paul W. Glimcher

A core question in systems and computational neuroscience is how the brain represents information. Identifying principles of information coding in neural circuits is critical to understanding brain organization and function in sensory, motor, and cognitive neuroscience, and provides a conceptual bridge between the underlying biophysical mechanisms and the ultimate behavioral goals of the organism. Central to this framework is the question of computation: what are the relevant representations of input and output, and what algorithms govern the input-output transformation? Remarkably, evidence suggests that certain canonical computations exist across different circuits, brain regions, and species. Such computations are implemented by different biophysical and network mechanisms, indicating that the unifying target of conservation is the algorithmic form of information processing rather than the specific biological implementation. A prime candidate for a canonical computation is divisive normalization, which scales the activity of a given neuron by the activity of a larger neuronal pool. This nonlinear transformation introduces an intrinsic contextual modulation into information coding, such that the selective response of a neuron to features of the input is scaled by other input characteristics. This contextual modulation allows the normalization model to capture a wide array of neural and behavioral phenomena not captured by simpler linear models of information processing. The generality and flexibility of the normalization model arise from the normalization pool, which allows different inputs to directly drive and suppress a given neuron, effectively separating the information that drives excitation from the information that provides contextual modulation.
Originally proposed to describe responses in early visual cortex, normalization has been widely documented across brain regions, hierarchical levels, and modalities of sensory processing; furthermore, recent work shows that normalization extends to cognitive processes such as attention, multisensory integration, and decision making. This ubiquity reinforces the canonical nature of the normalization computation and highlights the importance of an algorithmic framework in linking biological mechanism and behavior.
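The divisive computation described above is commonly written as R_i = γ·d_i / (σ + Σ_j d_j): each neuron's driving input d_i is divided by the pooled input of the population plus a semisaturation constant σ. The sketch below, with illustrative parameter values and a hypothetical `divisive_normalization` helper, shows how the same driving input yields a smaller response when the surrounding pool is more active:

```python
import numpy as np

def divisive_normalization(drives, sigma=1.0, gamma=50.0):
    """Divisive normalization with exponent 1:
    R_i = gamma * d_i / (sigma + sum_j d_j).
    Each response is scaled by the pooled population activity, so identical
    input drive is suppressed in a more active context. Parameter values
    here are illustrative only."""
    drives = np.asarray(drives, dtype=float)
    return gamma * drives / (sigma + drives.sum())

# The same 10-unit drive to neuron 0, alone vs. embedded in an active pool:
alone = divisive_normalization([10.0, 0.0, 0.0])[0]
in_context = divisive_normalization([10.0, 20.0, 30.0])[0]
```

The contextual suppression (`in_context < alone`) is exactly the nonlinearity that lets the model capture phenomena, such as surround suppression or value-dependent choice effects, that linear models miss.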


As we go about our everyday activities, our brain computes accurate estimates of both our motion relative to the world and our orientation relative to gravity. Essential to this computation is the information provided by the vestibular system: it detects the rotational velocity and linear acceleration of our heads relative to space, making a fundamental contribution to our perception of self-motion and spatial orientation. Additionally, in everyday life, our perception of self-motion depends on the integration of vestibular and nonvestibular cues, including visual and proprioceptive information. Furthermore, the integration of motor-related information is also required for perceptual stability, so that the brain can distinguish whether the experienced sensory inflow resulted from active self-motion through the world or was instead externally generated. To date, understanding how the brain encodes and integrates sensory cues with motor signals for the perception of self-motion during natural behaviors remains a major goal in neuroscience. Recent experiments have (i) provided new insights into the neural code used to represent sensory information in vestibular pathways, (ii) established that vestibular pathways are inherently multimodal at the earliest stages of processing, and (iii) revealed that self-motion information processing is adjusted to meet the needs of specific tasks. This article reviews our current understanding of how the brain integrates sensory information and motor-related signals to encode self-motion and ensure perceptual stability during everyday activities.
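The vestibular-visual cue integration described above is often formalized as reliability-weighted (inverse-variance) averaging, the standard Bayesian cue-combination model. The sketch below is a textbook illustration under Gaussian-noise assumptions, not a model taken from this article; the `fuse_cues` helper and its inputs are hypothetical.

```python
def fuse_cues(est_vest, var_vest, est_vis, var_vis):
    """Inverse-variance (reliability-weighted) fusion of two noisy
    self-motion estimates, e.g. vestibular and visual heading cues.
    Each cue is weighted by its reliability (1/variance); the fused
    estimate is always at least as reliable as the better cue."""
    w_vest = (1.0 / var_vest) / (1.0 / var_vest + 1.0 / var_vis)
    w_vis = 1.0 - w_vest
    fused = w_vest * est_vest + w_vis * est_vis
    fused_var = 1.0 / (1.0 / var_vest + 1.0 / var_vis)
    return fused, fused_var
```

For example, fusing a vestibular heading estimate of 10 deg (variance 1) with a visual estimate of 20 deg (variance 4) pulls the answer toward the more reliable vestibular cue, and the fused variance drops below either cue's variance, capturing why combined vestibular-visual estimates are more precise than unimodal ones.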