1-2 of 2 Results for:

  • Keywords: neurophysiology
  • Computational Neuroscience

Article

Normalization Principles in Computational Neuroscience  

Kenway Louie and Paul W. Glimcher

A core question in systems and computational neuroscience is how the brain represents information. Identifying principles of information coding in neural circuits is critical to understanding brain organization and function in sensory, motor, and cognitive neuroscience; such principles provide a conceptual bridge between the underlying biophysical mechanisms and the ultimate behavioral goals of the organism. Central to this framework is the question of computation: what are the relevant representations of input and output, and what algorithms govern the input-output transformation? Remarkably, evidence suggests that certain canonical computations exist across different circuits, brain regions, and species. Such computations are implemented by different biophysical and network mechanisms, indicating that the unifying target of conservation is the algorithmic form of information processing rather than the specific biological implementation. A prime candidate for a canonical computation is divisive normalization, which scales the activity of a given neuron by the activity of a larger neuronal pool. This nonlinear transformation introduces an intrinsic contextual modulation into information coding, such that the selective response of a neuron to features of the input is scaled by other input characteristics. This contextual modulation allows the normalization model to capture a wide array of neural and behavioral phenomena that simpler linear models of information processing cannot. The generality and flexibility of the normalization model arise from the normalization pool, which allows different inputs to drive and suppress a given neuron, effectively separating the information that drives excitation from the information that provides contextual modulation. Originally proposed to describe responses in early visual cortex, normalization has been documented widely across brain regions, hierarchical levels, and sensory modalities; furthermore, recent work shows that normalization extends to cognitive processes such as attention, multisensory integration, and decision making. This ubiquity reinforces the canonical nature of the normalization computation and highlights the importance of an algorithmic framework in linking biological mechanism and behavior.
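To make the computation described in this abstract concrete, the sketch below implements the standard form of the divisive normalization equation, R_i = γ D_i^n / (σ^n + Σ_j D_j^n), in which each neuron's driving input D_i is scaled by the summed activity of a normalization pool. The function name, parameter defaults, uniform-pool assumption, and toy inputs are illustrative choices, not material from the article.

```python
import numpy as np

def divisive_normalization(drive, pool_weights=None, gamma=1.0, sigma=1.0, n=2.0):
    """Scale each neuron's driving input by the activity of a normalization pool.

    drive        : non-negative driving inputs, one per neuron
    pool_weights : optional matrix w[i, j] weighting neuron j's contribution to
                   neuron i's normalization pool (defaults to one uniform pool)
    gamma, sigma, n : gain, semi-saturation constant, and exponent
    """
    drive = np.asarray(drive, dtype=float) ** n
    if pool_weights is None:
        pool = drive.sum()                    # every neuron normalized by the whole pool
    else:
        pool = np.asarray(pool_weights) @ drive
    return gamma * drive / (sigma ** n + pool)

# Contextual modulation: the same "preferred" input evokes a weaker response
# when the rest of the normalization pool is more active.
alone   = divisive_normalization([10.0, 0.0, 0.0])[0]
crowded = divisive_normalization([10.0, 8.0, 8.0])[0]
print(alone, crowded)   # the first response exceeds the second
```

The toy comparison at the end illustrates the contextual suppression the abstract refers to: the neuron's response to an identical input is reduced when other inputs drive the pool.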

Article

Visual Shape and Object Perception  

Anitha Pasupathy, Yasmine El-Shamayleh, and Dina V. Popovkina

Humans and other primates rely on vision. Our visual system endows us with the ability to perceive, recognize, and manipulate objects, to avoid obstacles and dangers, to choose foods appropriate for consumption, to read text, and to interpret facial expressions in social interactions. To support these visual functions, the primate brain captures a high-resolution image of the world in the retina and, through a series of intricate operations in the cerebral cortex, transforms this representation into a percept that reflects the physical characteristics of objects and surfaces in the environment. To construct a reliable and informative percept, the visual system discounts the influence of extraneous factors such as illumination, occlusions, and viewing conditions. This perceptual “invariance” can be thought of as the brain’s solution to an inverse inference problem in which the physical factors that gave rise to the retinal image are estimated. Although perception and recognition seem fast and effortless, they pose a challenging computational problem that engages a substantial proportion of the primate brain.
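The “inverse inference” framing in this abstract can be illustrated with a toy model. The sketch below is an illustrative construction, not drawn from the article: it assumes that retinal luminance is the product of surface reflectance and scene illumination, then recovers a posterior over reflectance by marginalizing over an assumed prior on illumination, a minimal instance of discounting an extraneous factor such as lighting.

```python
import numpy as np

# Assumed generative model (for illustration only):
#   luminance = reflectance * illumination + sensor noise
# Inverse problem: given one luminance measurement, estimate the surface
# reflectance while discounting the unknown illumination.

def posterior_over_reflectance(luminance, noise_sd=0.05):
    reflectance = np.linspace(0.01, 1.0, 200)      # candidate surface reflectances
    illumination = np.linspace(0.1, 2.0, 200)      # candidate illumination levels
    R, I = np.meshgrid(reflectance, illumination, indexing="ij")

    # Likelihood of the observed luminance under each (reflectance, illumination) pair
    likelihood = np.exp(-0.5 * ((luminance - R * I) / noise_sd) ** 2)

    # Broad prior over illumination; flat prior over reflectance
    prior_illumination = np.exp(-0.5 * ((I - 1.0) / 0.5) ** 2)

    joint = likelihood * prior_illumination
    posterior = joint.sum(axis=1)                  # marginalize out illumination
    return reflectance, posterior / posterior.sum()

refl, post = posterior_over_reflectance(luminance=0.4)
print("most probable reflectance:", refl[np.argmax(post)])
```

Marginalizing over illumination is one simple way to capture, in miniature, how an estimate of a physical surface property can be made approximately invariant to viewing conditions.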