Kathleen E. Cullen
As we go about our everyday activities, our brain computes accurate estimates of both our motion relative to the world and our orientation relative to gravity. Essential to this computation is the information provided by the vestibular system; it detects the rotational velocity and linear acceleration of our heads relative to space, making a fundamental contribution to our perception of self-motion and spatial orientation. Additionally, in everyday life, our perception of self-motion depends on the integration of both vestibular and nonvestibular cues, including visual and proprioceptive information. Furthermore, the integration of motor-related information is also required for perceptual stability, so that the brain can distinguish whether the experienced sensory inflow was the result of active self-motion through the world or was instead externally generated. To date, understanding how the brain encodes and integrates sensory cues with motor signals for the perception of self-motion during natural behaviors remains a major goal in neuroscience. Recent experiments have (i) provided new insights into the neural code used to represent sensory information in vestibular pathways, (ii) established that vestibular pathways are inherently multimodal at the earliest stages of processing, and (iii) revealed that self-motion information processing is adjusted to meet the needs of specific tasks. Our current understanding of how the brain integrates sensory information and motor-related signals to encode self-motion and ensure perceptual stability during everyday activities is reviewed here.
Justin D. Lieber and Sliman J. Bensmaia
The ability to identify tactile objects depends in part on the perception of their surface microstructure and material properties. Texture perception can, on a first approximation, be described by a number of nameable perceptual axes, such as rough/smooth, hard/soft, sticky/slippery, and warm/cool, which exist within a complex perceptual space. The perception of texture relies on two different neural streams of information: Coarser features, measured in millimeters, are primarily encoded by spatial patterns of activity across one population of tactile nerve fibers, while finer features, down to the micron level, are encoded by finely timed temporal patterns within two other populations of afferents. These two streams of information ascend the somatosensory neuraxis and are eventually combined and further elaborated in the cortex to yield a high-dimensional representation that accounts for our exquisite and stable perception of texture.
Understanding the principles by which sensory systems represent natural stimuli is one of the holy grails of neuroscience. In the auditory system, the study of the coding of natural sounds has particular prominence. Indeed, the relationships between neural responses to the simple stimuli (usually pure tone bursts) often used to characterize auditory neurons and responses to complex sounds, in particular natural sounds, may be far from straightforward. Many different classes of natural sounds have been used to study the auditory system. Sound families that researchers have used to good effect in this endeavor include human speech, species-specific vocalizations, an “acoustic biotope” selected in one way or another, and sets of artificial sounds that mimic important features of natural sounds.
Peripheral and brainstem representations of natural sounds are relatively well understood. The properties of the peripheral auditory system play a dominant role, and further processing occurs mostly within the frequency channels determined by these properties. At the level of the inferior colliculus, the highest brainstem station, representational complexity increases substantially due to the convergence of multiple processing streams. Undoubtedly, the most explored part of the auditory system, in terms of responses to natural sounds, is the primary auditory cortex. In spite of over 50 years of research, there is still no commonly accepted view of the nature of the population code for natural sounds in the auditory cortex. Neurons in the auditory cortex are believed by some to be primarily linear spectro-temporal filters, by others to respond to conjunctions of important sound features, or even to encode perceptual concepts such as “auditory objects.” Whatever the exact mechanism, many studies consistently report a substantial increase in the variability of the response patterns of cortical neurons to natural sounds. The generation of such variability may be the main contribution of the auditory cortex to the coding of natural sounds.
Judith M. Ford, Holly K. Hamilton, and Alison Boos
Auditory verbal hallucinations (AVH), also referred to as “hearing voices,” are vivid perceptions of speech that occur in the absence of any corresponding external stimulus but seem very real to the voice hearer. They are experienced by the majority of people with schizophrenia, less frequently in other psychiatric and neurological conditions, and are relatively rare in the general population. Because antipsychotic medications are not always successful in reducing the severity or frequency of AVH, a better understanding is needed of their neurobiological basis, which may ultimately lead to more precise treatment targets.
What voices say and how the voices sound, or their phenomenology, varies widely within and across groups of people who hear them. In help-seeking populations, such as people with schizophrenia, the voices tend to be threatening and menacing, typically spoken in a non-self voice, often commenting and sometimes commanding the voice hearers to do things they would not otherwise do. In psychotic populations, voices differ from normal inner speech by being unbidden and unintended, co-opting the voice hearer’s attention. In healthy voice-hearing populations, voices are typically neither distressing nor disabling, and are sometimes comforting and reassuring. Regardless of content and valence, voices tend to activate some speech and language areas of the brain. Efforts to silence these brain areas with neurostimulation have had mixed success in reducing the frequency and salience of voices. Progress with this treatment approach would likely benefit from more precise anatomical targets and more precisely dosed neurostimulation.
Neural mechanisms that may underpin the experience of voices are being actively investigated and include mechanisms enabling context-based predictions and distinctions between experiences coming from self and other. Both these mechanisms can be studied in non-human animal “models” and both can provide new anatomical targets for neurostimulation.
Much progress has been made in unraveling the mechanisms that underlie the transition from acute to chronic pain. Traditional beliefs are being replaced by novel, more powerful concepts that consider the mutual interplay of neuronal and non-neuronal cells in the nervous system during the pathogenesis of chronic pain. The new focus is on the role of neuroinflammation for neuroplasticity in nociceptive pathways and for the generation, amplification, and mislocation of pain. The latest insights are reviewed here and provide a basis for understanding the interdependence of chronic pain and its comorbidities. The new concepts will guide the search for future therapies to prevent and reverse chronic pain.
Long-term changes in the properties and functions of nerve cells, including changes in synaptic strength, membrane excitability, and the effects of inhibitory neurotransmitters, can result from a wide variety of conditions. In the nociceptive system, painful stimuli, peripheral inflammation, nerve injuries, the use of or withdrawal from opioids—all can lead to enhanced pain sensitivity, to the generation of pain, and/or to the spread of pain to unaffected sites of the body. Non-neuronal cells, especially microglia and astrocytes, contribute to changes in nociceptive processing. Recent studies revealed not only that glial cells support neuroplasticity but also that their activation can trigger long-term changes in the nociceptive system.
Color perception in macaque monkeys and humans depends on the visually evoked activity in three cone photoreceptors and on neuronal post-processing of cone signals. Neuronal post-processing of cone signals occurs in two stages in the pathway from the retina to the primary visual cortex. The first stage, in P (midget) ganglion cells in the retina, is a single-opponent subtractive comparison of the cone signals. The single-opponent computation is then sent to neurons in the parvocellular layers of the lateral geniculate nucleus (LGN), the main visual nucleus of the thalamus. The second stage of processing of color-related signals is in the primary visual cortex, V1, where multiple comparisons of the single-opponent signals are made. The diversity of neuronal interactions in V1 divides cortical color cells into classes of single-opponent cells and double-opponent cells. Double-opponent cells have visual properties that can be used to explain most of the phenomenology of the perception of surface colors; they respond best to color edges and spatial patterns of color. Single-opponent cells, in the retina, LGN, and V1, respond to color modulation over their receptive fields and respond best when color is modulated over a large area of the visual field.
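The difference between the two cell classes can be caricatured in a few lines of code. This is a minimal illustrative sketch, not a model from the text: the function names and cone-activation values are assumptions chosen only to show why a spatial comparison of opponent signals makes a cell edge-selective.

```python
# Illustrative sketch: a single-opponent cell subtracts cone signals at
# one location (here L - M), while a double-opponent cell compares such
# opponent signals between two adjacent regions of its receptive field,
# making it selective for color edges rather than uniform color.

def single_opponent(l_cone, m_cone):
    """Subtractive comparison of cone signals at one location."""
    return l_cone - m_cone

def double_opponent(center, surround):
    """Spatial comparison of single-opponent signals: (L-M)c - (L-M)s."""
    return single_opponent(*center) - single_opponent(*surround)

# A uniform reddish field drives the single-opponent stage strongly,
# but the double-opponent cell sees the same opponent signal on both
# sides of its receptive field and responds with zero ...
uniform_response = double_opponent(center=(0.8, 0.2), surround=(0.8, 0.2))
# ... while a red/green edge produces a large double-opponent response.
edge_response = double_opponent(center=(0.8, 0.2), surround=(0.2, 0.8))
```

Under these toy numbers, the single-opponent stage responds equally to the uniform field and to either side of the edge; only the spatial comparison distinguishes the two stimuli.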
Mindaugas Mitkus, Simon Potier, Graham R. Martin, Olivier Duriez, and Almut Kelber
Diurnal raptors (birds of the orders Accipitriformes and Falconiformes), renowned for their extraordinarily sharp eyesight, have fascinated humans for centuries. The high visual acuity of some raptor species is made possible by their large eyes, in both relative and absolute terms, and a high density of cone photoreceptors. Some large raptors, such as wedge-tailed eagles and the Old World vultures, have visual acuities twice as high as those of humans and six times as high as those of ostriches—the animals with the largest terrestrial eyes. The raptor retina has rods, double cones, and four spectral types of single cones. The highest density of single cones occurs in one or two specialized retinal regions: the foveae, where, at least in some species, rods and double cones are absent. The deep central fovea allows for the highest acuity in the lateral visual field, which is probably used for detecting prey from a large distance. Pursuit-hunting raptors have a second, shallower, temporal fovea that allows for sharp vision in the frontal field of view. Scavenging carrion eaters lack this temporal fovea, which may indicate different needs in their foraging behavior. Moreover, pursuit-hunting and scavenging raptors also differ in the configuration of their visual fields, with a more extensive field of view in scavengers.
The eyes of diurnal raptors, unlike those of most other birds, are not very sensitive to ultraviolet light, which is strongly absorbed by their cornea and lens. As a result of the low density of rods, and the narrow and densely packed single cones in the central fovea, the visual performance of diurnal raptors drops dramatically as light levels decrease. These and other visual properties underpin prey detection and pursuit and show how these birds’ vision is adapted to make them successful diurnal predators.
Thomas F. Mathejczyk and Mathias F. Wernet
Evolution has produced vast morphological and behavioral diversity amongst insects, including very successful adaptations to a diverse range of ecological niches spanning the invasion of the sky by flying insects, the crawling lifestyle on (or below) the earth, and the (semi-)aquatic life on (or below) the water surface. Developing the ability to extract a maximal amount of useful information from their environment was crucial for ensuring the survival of many insect species. Navigating insects rely heavily on a combination of different visual and non-visual cues to reliably orient under a wide spectrum of environmental conditions while avoiding predators. The pattern of linearly polarized skylight that results from the scattering of sunlight in the atmosphere is one important navigational cue that many insects can detect. Here we summarize progress made toward understanding how different insect species sense polarized light. First, we present behavioral studies with “true” insect navigators (central-place foragers, like honeybees or desert ants), as well as insects that rely on polarized light to improve more “basic” orientation skills (like dung beetles). Second, we provide an overview of the anatomical basis of the polarized-light detection system that these insects use, as well as the underlying neural circuitry. Third, we emphasize the importance of physiological studies (electrophysiology, as well as genetically encoded activity indicators, in Drosophila) for understanding both the structure and function of polarized-light circuitry in the insect brain. We also discuss the importance of an alternative source of polarized light that many insects can detect: linearly polarized light reflected off shiny surfaces like water represents an important environmental factor, yet the anatomy and physiology of the underlying circuits remain incompletely understood.
Mathew H. Evans, Michaela S.E. Loft, Dario Campagner, and Rasmus S. Petersen
Whiskers (vibrissae) are prominent on the snout of many mammals, both terrestrial and aquatic. The defining feature of whiskers is that they are rooted in large follicles with dense sensory innervation, surrounded by doughnut-shaped blood sinuses. Some species, including rats and mice, have elaborate muscular control of their whiskers and explore their environment by making rhythmic back-and-forth “whisking” movements. Whisking movements are purposefully modulated according to specific behavioral goals (“active sensing”). The basic whisking rhythm is controlled by a premotor complex in the intermediate reticular formation.
Primary whisker neurons (PWNs), with cell bodies in the trigeminal ganglion, innervate several classes of mechanoreceptive nerve endings in the whisker follicle. Mechanotransduction involving Piezo2 ion channels establishes the fundamental physical signals that the whiskers communicate to the brain. PWN spikes are triggered by mechanical forces associated with both the whisking motion itself and whisker-object contact. Whisking is associated with inertial and muscle contraction forces that drive PWN activity. Whisker-object contact causes whiskers to bend, and PWN activity is driven primarily by the associated rotatory force (“bending moment”).
Sensory signals from the PWNs are routed to many parts of the hindbrain, midbrain, and forebrain. Parallel ascending pathways transmit information about whisker forces to the sensorimotor cortex. At each brainstem, thalamic, and cortical level of these pathways, there are one or more maps of the whisker array, consisting of cell clusters (“barrels” in the primary somatosensory cortex) whose spatial arrangement precisely mirrors that of the whiskers on the snout. However, the overall architecture of the whisker-responsive regions of the brain is best characterized by multilevel sensory-motor feedback loops. The whisker system’s intriguing biology, in combination with its advantageous properties as a model sensory system, has made it a platform for seminal insights into brain function.
Yeonjoo Yoo and Fabrizio Gabbiani
Computational modeling is essential to understand how the complex dendritic structure and membrane properties of a neuron process input signals to generate output signals. Compartmental models describe how inputs, such as synaptic currents, affect a neuron’s membrane potential and produce outputs, such as action potentials, by converting membrane properties into the components of an electrical circuit. The simplest such model consists of a single compartment with a leakage conductance; it represents a neuron with a spatially uniform membrane potential and a constant conductance summarizing the combined effect of every ion flowing across the neuron’s membrane. The Hodgkin-Huxley model introduces two additional, active channels: the sodium channel and the delayed-rectifier potassium channel, whose associated conductances change depending on the membrane potential and are described by an additional set of three nonlinear differential equations. Since the model’s inception in 1952, many kinds of active channels have been discovered, with a variety of characteristics that can successfully be modeled within the same framework. As the membrane potential varies spatially in a neuron, the next refinement consists of describing a neuron as an electric cable, to account for membrane potential attenuation and signal propagation along dendritic or axonal processes. A discrete version of the cable equation yields compartments with possibly different properties, such as different types of ion channels or spatially varying maximum conductances to model changes in channel densities. Branching neural processes such as dendrites can be modeled with the cable equation by considering the junctions of cables with different radii and electrical properties. Single-neuron computational models are used to investigate a variety of topics and reveal insights that cannot be obtained directly from experimental observation.
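The single-compartment leaky model described above reduces to one ordinary differential equation, C dV/dt = −g_L(V − E_L) + I_ext. A minimal numerical sketch of this simplest compartmental model follows; the function name, the forward-Euler integration scheme, and the generic textbook-style parameter values are illustrative assumptions, not taken from the text.

```python
# Minimal sketch of the simplest compartmental model: a single
# compartment with only a leakage conductance,
#     C dV/dt = -g_L (V - E_L) + I_ext,
# integrated with the forward-Euler method. Units: nF, uS, nA, mV, ms
# (chosen so that nA/nF = mV/ms and the equation is dimensionally
# consistent).

def simulate_leaky_compartment(i_ext_nA, t_end_ms=100.0, dt_ms=0.01,
                               c_nF=1.0, g_leak_uS=0.1, e_leak_mV=-70.0):
    """Return (times, voltages) for a constant injected current."""
    n_steps = int(round(t_end_ms / dt_ms))
    v = e_leak_mV                      # start at the resting potential
    times, voltages = [], []
    for step in range(n_steps):
        # capacitive current balances the leak current plus injection
        dv = (-g_leak_uS * (v - e_leak_mV) + i_ext_nA) / c_nF * dt_ms
        v += dv
        times.append(step * dt_ms)
        voltages.append(v)
    return times, voltages

# With a constant 1 nA input, V relaxes toward E_L + I/g_L = -60 mV
# with time constant tau = C/g_L = 10 ms.
times, voltages = simulate_leaky_compartment(i_ext_nA=1.0)
```

The refinements discussed above extend this same scheme: Hodgkin-Huxley-style channels add voltage-dependent conductance terms (and their gating equations) to the update step, and the discretized cable equation couples several such compartments through axial resistances.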
Studies on action potential initiation and on synaptic integration provide prototypical examples illustrating why computational models are essential. Modeling action potential initiation constrains the localization and density of channels required to reproduce experimental observations, while modeling synaptic integration sheds light on the interaction between the morphological and physiological characteristics of dendrites. Finally, reduced compartmental models demonstrate how a simplified morphological structure supplemented by a small number of ion channel-related variables can provide clear explanations about complex intracellular membrane potential dynamics.
Andrew J. Parker
Humans and some animals can use their two eyes in cooperation to detect and discriminate parts of the visual scene based on depth. Owing to the horizontal separation of the eyes, each eye obtains a slightly different view of the scene in front of the head. These small differences are processed by the nervous system to generate a sense of binocular depth. As humans, we experience an impression of solidity that is fully three-dimensional; this impression is called stereopsis and is what we appreciate when we watch a 3D movie or look into a stereoscopic viewer. While the basic perceptual phenomena of stereoscopic vision have been known for some time, it is mainly within the last 50 years that we have gained an understanding of how the nervous system delivers this sense of depth. This period of research began with the identification of neuronal signals for binocular depth in the primary visual cortex. Building on that finding, subsequent work has traced the signaling pathways for binocular stereoscopic depth forward into extrastriate cortex and further on into cortical areas concerned with sensorimotor integration. Within these pathways, neurons acquire sensitivity to more complex, higher-order aspects of stereoscopic depth. Signals relating to the relative depth of visual features, a form of selectivity not found in the primary visual cortex, can be identified in the extrastriate cortex. Over the same time period, knowledge of the organization of binocular vision in animals that inhabit a wide diversity of ecological niches has substantially increased. The implications of these findings for developmental and adult plasticity of the visual nervous system and for the onset of the clinical condition of amblyopia are explored in this article. Amblyopic vision is associated with a cluster of different visual and oculomotor symptoms, but the loss of high-quality stereoscopic depth performance is one of the consistent clinical features.
Understanding where and how those losses occur in the visual brain is an important goal of current research, for both scientific and clinical reasons.
Jose M. Alonso and Harvey A. Swadlow
The thalamocortical pathway is the main route of sensory information to the cerebral cortex. Vision, touch, hearing, taste, and balance all depend on the integrity of this pathway that connects the thalamic structures receiving sensory input with the cortical areas specialized in each sensory modality. Only the ancient sense of smell is independent of the thalamus, gaining access to cortex through more anterior routes. While the thalamocortical pathway targets different layers of the cerebral cortex, its main stream projects to the middle layers and has axon terminals that are dense, spatially restricted, and highly specific in their connections. The remarkable specificity of these thalamocortical connections allows for a precise reconstruction of the sensory dimensions that need to be most finely sampled, such as spatial acuity in vision and sound frequency in hearing. The thalamic axon terminals also segregate topographically according to their stimulus preferences, providing a simple principle to build cortical sensory maps: neighboring values in sensory space are represented by neighboring points within the cortex.
Thalamocortical processing is not static. It is continuously modulated by the brainstem and by corticothalamic feedback based on the level of attention and alertness, and during sleep or general anesthesia. When alert, visual thalamic responses become stronger, more reliable, more sustained, more effective at sampling fast changes in the scene, and more linearly related to the stimulus. The high firing rates of the alert state make thalamocortical synapses chronically depressed and excitatory synaptic potentials less dependent on temporal history, improving even further the linear relation between stimulus and response. In turn, when alertness wanes, the thalamus reduces its firing rate and starts generating spike bursts that drive large postsynaptic responses and keep the cortex responsive to sudden stimulus changes.
Susan C. P. Renn and Nadia Aubin-Horth
Several species show diversity in reproductive patterns that results from phenotypic plasticity. This reproductive plasticity is found, for example, in mate choice, parental care, reproductive suppression, reproductive tactics, sex roles, and sex reversal. Studying the genome-wide changes in transcription that are associated with these plastic phenotypes will help answer several questions, including which genes are expressed, and where, when an individual is faced with a reproductive choice, and whether males and females have the same brain genomic signature when they express the same behaviors or instead activate sex-specific molecular pathways to produce similar behavioral responses. The comparative approach of studying transcription in a wide array of species allows us to uncover genes, pathways, and biological functions that are repeatedly co-opted (a “genetic toolkit”) as well as those that are unique to a particular system (a “genomic signature”). Additionally, quantifying the transcriptome, a labile trait, in time series has the potential to uncover the causes and consequences of expressing one plastic phenotype or another. There are of course gaps in our knowledge of reproductive plasticity, but no shortage of possibilities for future directions.
Sabine Kastner and Timothy J. Buschman
Natural scenes are cluttered and contain many objects that cannot all be processed simultaneously. Due to this limited processing capacity, neural mechanisms are needed to selectively enhance the information that is most relevant to one’s current behavior and to filter unwanted information. We refer to these mechanisms as “selective attention.” Attention has been studied extensively at the behavioral level in a variety of paradigms, most notably, Treisman’s visual search and Posner’s paradigm. These paradigms have also provided the basis for studies directed at understanding the neural mechanisms underlying attentional selection, both in the form of neuroimaging studies in humans and intracranial electrophysiology in non-human primates. The selection of behaviorally relevant information is mediated by a large-scale network that includes regions in all major lobes as well as subcortical structures. Attending to a visual stimulus modulates processing across the visual processing hierarchy with stronger effects in higher-order areas. Current research is aimed at characterizing the functions of the different network nodes as well as the dynamics of their functional connectivity.
Anitha Pasupathy, Yasmine El-Shamayleh, and Dina V. Popovkina
Humans and other primates rely on vision. Our visual system endows us with the ability to perceive, recognize, and manipulate objects, to avoid obstacles and dangers, to choose foods appropriate for consumption, to read text, and to interpret facial expressions in social interactions. To support these visual functions, the primate brain captures a high-resolution image of the world in the retina and, through a series of intricate operations in the cerebral cortex, transforms this representation into a percept that reflects the physical characteristics of objects and surfaces in the environment. To construct a reliable and informative percept, the visual system discounts the influence of extraneous factors such as illumination, occlusions, and viewing conditions. This perceptual “invariance” can be thought of as the brain’s solution to an inverse inference problem in which the physical factors that gave rise to the retinal image are estimated. While the processes of perception and recognition seem fast and effortless, they pose a challenging computational problem that engages a substantial proportion of the primate brain.