Angel Ariel Caputi
American gymnotiformes and African mormyriformes have evolved an active sensory system that uses a self-generated electric field as a carrier of signals. Objects polarized by the discharge of a specialized electric organ project their images on the skin, where electroreceptors tuned to the time course of the self-generated field transduce local signals carrying information about the impedance, shape, size, and location of objects, as well as electrocommunication messages, and encode them as trains of spikes in primary afferents. This system is articulated with other cutaneous systems (passive electroreception and mechanoception), as well as with proprioception informing about the shape of the fish’s body. Primary afferents project to the electrosensory lobe, where electrosensory signals are compared with expectation signals resulting from the integration of recent past electrosensory input, other sensory input, and, in the case of mormyriformes, electro- and skeleton-motor corollary discharges. This ensemble of signals converges on the apical dendrites of the principal cells, where a working memory of the recent past, and therefore predictable, input is continuously built up and updated as a pattern of synaptic weights. The efferent neurons of the electrosensory lobe also project to the torus and indirectly to other brainstem nuclei that implement automatic electro- and skeleton-motor behaviors. Finally, the torus projects via the preglomerular nucleus to the telencephalon, where cognitive functions, including “electroperception” of shape-, size-, and impedance-related features of objects, recognition of conspecifics, perception-based decisions, learning, and abstraction, are organized.
Cynthia M. Harley and Mark K. Asplen
Annelid worms are simultaneously an interesting and difficult model system for understanding the evolution of animal vision. On the one hand, a wide variety of photoreceptor cells and eye morphologies are exhibited within a single phylum; on the other, annelid phylogenetics has been substantially re-envisioned within the last decade, suggesting the possibility of considerable convergent evolution. This article reviews the comparative anatomy of annelid visual systems within the context of the specific behaviors exhibited by these animals. Each of the major classes of annelid visual systems is examined, including both simple photoreceptor cells (including leech body eyes) and photoreceptive cells with pigment (trochophore larval eyes, ocellar tubes, complex eyes); meanwhile, behaviors examined include differential mobility and feeding strategies, similarities (or differences) in larval versus adult visual behaviors within a species, visual signaling, and depth sensing. Based on our review, several major trends in the comparative morphology and ethology of annelid vision are highlighted: (1) eye complexity tends to increase with mobility and higher-order predatory behavior; (2) although annelid photoreceptors are simple sensors, they can relay complex information through sheer numbers or multimodality; (3) polychaete larval and adult eye morphology can differ strongly in many mobile species, but not in many sedentary species; and (4) annelids exhibiting visual signaling possess even more complex visual systems than expected, suggesting the possibility that complex eyes can be simultaneously well adapted to multiple visual tasks.
Jeffrey R. Holt and Gwenaëlle S.G. Géléoc
The organs of the vertebrate inner ear respond to a variety of mechanical stimuli: semicircular canals are sensitive to angular velocity, the saccule and utricle respond to linear acceleration (including gravity), and the cochlea is sensitive to airborne vibration, or sound. The ontogenetically related lateral line organs, spaced along the sides of aquatic vertebrates, sense water movement. All these organs have a common receptor cell type, the hair cell, named for the bundle of enlarged microvilli protruding from its apical surface. In different organs, specialized accessory structures serve to collect, filter, and then deliver these physical stimuli to the hair bundles. The proximal stimulus for all hair cells is deflection of the mechanosensitive hair bundle. Hair cells convert mechanical information contained within the temporal pattern of hair bundle deflections into electrical signals, which they transmit to the brain for interpretation.
Cynthia F. Moss
Echolocating bats have evolved an active sensing system, which supports 3D perception of objects in the surroundings and permits spatial navigation in complete darkness. Echolocating animals produce high frequency sounds and use the arrival time, intensity, and frequency content of echo returns to determine the distance, direction, and features of objects in the environment. Over 1,000 species of bats echolocate with signals produced in their larynges. They use diverse sonar signal designs, operate in habitats ranging from tropical rain forest to desert, and forage for different foods, including insects, fruit, nectar, small vertebrates, and even blood. Specializations of the mammalian auditory system, coupled with high frequency hearing, enable spatial imaging by echolocation in bats. Specifically, populations of neurons in the bat central nervous system respond selectively to the direction and delay of sonar echoes. In addition, premotor neurons in the bat brain are implicated in the production of sonar calls, along with movement of the head and ears. Audio-motor circuits, within and across brain regions, lay the neural foundation for acoustic orientation by echolocation in bats.
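The ranging computation the abstract describes, converting echo delay to target distance, is simple arithmetic. The sketch below illustrates it; the speed of sound is the standard value for air at about 20 °C, and the example delay is illustrative, not a measured value from the article.

```python
# Sketch: converting a sonar echo's round-trip delay to target range,
# the computation bats perform neurally with delay-tuned neurons.

SPEED_OF_SOUND_AIR = 343.0  # m/s, approximate value at 20 °C

def echo_delay_to_range(delay_s: float, c: float = SPEED_OF_SOUND_AIR) -> float:
    """Target distance given the round-trip delay of an echo, in meters."""
    return c * delay_s / 2.0  # sound travels out and back, so halve the path

# An illustrative 5.83 ms round-trip delay puts the target about 1 m away:
target_range = echo_delay_to_range(0.00583)
print(round(target_range, 2))  # → 1.0
```

The same relation holds for dolphin biosonar with the speed of sound in water (roughly 1,500 m/s) substituted for `c`.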
Age-related hearing loss affects over half of the elderly population, yet it remains poorly understood. Natural aging can cause the input to the brain from the cochlea to be progressively compromised in most individuals, but in many cases the cochlea has relatively normal sensitivity and yet people have an increasingly difficult time processing complex auditory stimuli. The two main deficits are in sound localization and temporal processing, which lead to poor speech perception. Animal models have shown that there are multiple changes in the brainstem, midbrain, and thalamic auditory areas as a function of age, giving rise to an alteration in the excitatory/inhibitory balance of these neurons. This alteration is manifest in the cerebral cortex as higher spontaneous and driven firing rates, as well as broader spatial and temporal tuning. These alterations in cortical responses could underlie the hearing and speech processing deficits that are common in the aged population.
Thad E. Wilson and Kristen Metzler-Wilson
Thermoregulation is a key physiologic homeostatic process and is subdivided into autonomic, behavioral, and adaptive divisions. Autonomic thermoregulation is a neural process related to the sympathetic and parasympathetic nervous systems. Autonomic thermoregulation is controlled at the subcortical level to alter physiologic processes of heat production and loss to maintain internal temperature. Mammalian, including human, autonomic responses to acute heat or cold stresses are dependent on environmental conditions and species genotype and phenotype, but many similarities exist. Responses to an acute heat stress begin with the sensation of heat, leading to central processing of the information and sympathetic responses via end organs, which can include sweat glands, vasculature, and airway and cardiac tissues. Responses to an acute cold stress begin with the sensation of cold, which leads to central processing of the information and sympathetic responses via end organs, which can include skeletal and piloerector muscles, brown adipose tissue, vasculature, and cardiac tissue. These autonomic responses allow homeostasis of internal temperature to be maintained across a wide range of external temperatures for most mammals, including humans. At times, uncompensable thermal challenges occur that can be maintained for only limited periods of time before leading to pathophysiologic states of hyperthermia or hypothermia.
Paul E. Nachtigall
Toothed whales and dolphins, odontocete cetaceans, produce very loud biosonar sounds in order to navigate and to locate and catch their prey of fish and squid. Underwater biosonar was not discovered until after 1950, but the initial experiments demonstrated a unique sensory modality that could find small targets far away and distinguish between objects buried in mud that differed only by the metal from which they were made. Dolphins determine the distance to their prey by evaluating very small time differences between the outgoing signal and the echo return. The type of outgoing signal varies greatly from low frequency, explosively loud sperm whale clicks, to frequency modulated mid-frequency beaked whale sounds, to very high frequency (over 100 kHz) harbor porpoise signals. All appear to be made by specialized pneumatic phonic lips closely connected to sound-projecting fatty melons that focus sound before sending out narrow echolocation sound beams. The frequency range of best hearing is matched to echolocation, with the areas of best hearing of the animals being the areas of principal outgoing signal frequency. The sensation levels of hearing are under the animal’s control, with “automatic gain control” operating to assure the best hearing of the echo returns. Angular localization by bottlenose dolphins is remarkably precise: minimum audible angles for clicks are less than one degree in both the horizontal and vertical directions. This remarkable localization performance has yet to be fully explained, but new hypotheses of gular pathways, shaded receiver models, and internal pinnae may provide some explanations as a theory of auditory localization in the odontocetes develops.
Colin J. Saldanha
Since the early 1980s, evidence suggesting that the vertebrate brain is a rich source of steroid hormones has been decisive and extensive. This evidence includes data from many vertebrate species and describes almost every enzyme necessary for the conversion of cholesterol to androgens and estrogens. In contrast, the behavioral relevance of neurosteroidogenesis is more equivocal and mysterious. Nonetheless, the presence of a limited number of steroidogenic enzymes in the brain of a few species has clearly been linked to reliable behavioral phenotype.
Douglas K. Reilly and Jagan Srinivasan
To survive, animals must properly sense their surrounding environment. The senses that allow animals to detect environmental changes can be categorized as tactile, thermal, aural, or olfactory. Olfaction is one of the most primitive senses, involving the detection of environmental chemical cues. Organisms must sense and discriminate between abiotic and biogenic cues, necessitating a system that can react and respond to changes quickly. The nematode Caenorhabditis elegans offers a unique set of tools for studying the biology of olfactory sensation.
The olfactory system in C. elegans comprises 14 pairs of amphid neurons in the head and two pairs of phasmid neurons in the tail. The male nervous system contains an additional 89 neurons, many of which are exposed to the environment and contribute to olfaction. The cues sensed by these olfactory neurons initiate a multitude of responses, ranging from developmental changes to behavioral responses. Environmental cues might initiate entry into or exit from a long-lived alternative larval developmental stage (dauer), or pheromonal stimuli may attract sexually mature mates, or repel conspecifics in crowded environments. C. elegans are also capable of sensing abiotic stimuli, exhibiting attraction and repulsion to diverse classes of chemicals. Unlike canonical mammalian olfactory neurons, C. elegans chemosensory neurons express more than one receptor per cell. This enables detection of hundreds of chemical structures and concentrations by a chemosensory nervous system with few cells. However, each neuron detects certain classes of olfactory cues and, through its synaptic pathways, elicits similar responses (i.e., aversive behaviors). The functional architecture of this chemosensory system is capable of supporting the development and behavior of nematodes in a manner efficient enough to allow for the genus to have a cosmopolitan distribution.
Yaniv Cohen, Emmanuelle Courtiol, Regina M. Sullivan, and Donald A. Wilson
Odorants, inhaled through the nose or exhaled from the mouth through the nose, bind to receptors on olfactory sensory neurons. Olfactory sensory neurons project in a highly stereotyped fashion into the forebrain to a structure called the olfactory bulb, where odorant-specific spatial patterns of neural activity are evoked. These patterns appear to reflect the molecular features of the inhaled stimulus. The olfactory bulb, in turn, projects to the olfactory cortex, which is composed of multiple sub-units including the anterior olfactory nucleus, the olfactory tubercle, the cortical nucleus of the amygdala, the anterior and posterior piriform cortex, and the lateral entorhinal cortex. Due to differences in olfactory bulb inputs, local circuitry and other factors, each of these cortical sub-regions appears to contribute to different aspects of the overall odor percept. For example, there appears to be some spatial organization of olfactory bulb inputs to the cortical nucleus of the amygdala, and this region may be involved in the expression of innate odor hedonic preferences. In contrast, the olfactory bulb projection to the piriform cortex is highly distributed and not spatially organized, allowing the piriform to function as a combinatorial, associative array, producing the emergence of experience-dependent odor-objects (e.g., strawberry) from the molecular features extracted in the periphery. Thus, the full perceptual experience of an odor requires involvement of a large, highly dynamic cortical network.
Tim C. Kietzmann, Patrick McClure, and Nikolaus Kriegeskorte
The goal of computational neuroscience is to find mechanistic explanations of how the nervous system processes information to give rise to cognitive function and behavior. At the heart of the field are its models, that is, mathematical and computational descriptions of the system being studied, which map sensory stimuli to neural responses and/or neural responses to behavioral responses. These models range from simple to complex. Recently, deep neural networks (DNNs) have come to dominate several domains of artificial intelligence (AI). As the term “neural network” suggests, these models are inspired by biological brains. However, current DNNs neglect many details of biological neural networks. These simplifications contribute to their computational efficiency, enabling them to perform complex feats of intelligence, ranging from perceptual (e.g., visual object and auditory speech recognition) to cognitive tasks (e.g., machine translation), and on to motor control (e.g., playing computer games or controlling a robot arm). In addition to their ability to model complex intelligent behaviors, DNNs excel at predicting neural responses to novel sensory stimuli with accuracies well beyond any other currently available model type. DNNs can have millions of parameters, which are required to capture the domain knowledge needed for successful task performance. Contrary to the intuition that this renders them into impenetrable black boxes, the computational properties of the network units are the result of four directly manipulable elements: input statistics, network structure, functional objective, and learning algorithm. With full access to the activity and connectivity of all units, advanced visualization techniques, and analytic tools to map network representations to neural data, DNNs represent a powerful framework for building task-performing models and will drive substantial insights in computational neuroscience.
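The four manipulable elements named above can be made concrete in a toy network. The sketch below is a minimal illustration, not a model from the article: the dataset, architecture, and hyperparameters are all arbitrary choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# (1) Input statistics: a toy "same-sign" dataset stands in for sensory input.
X = rng.uniform(-1.0, 1.0, size=(256, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)

# (2) Network structure: one hidden layer of 8 tanh units, one sigmoid output.
W1 = rng.normal(0.0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return h, out

# (3) Functional objective: binary cross-entropy, whose output-layer
#     gradient reduces to (out - y).
# (4) Learning algorithm: full-batch gradient descent via manual backprop.
lr = 0.5
for _ in range(3000):
    h, out = forward(X)
    d_out = (out - y) / len(X)           # gradient at the output layer
    d_h = d_out @ W2.T * (1.0 - h**2)    # backpropagate through tanh
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(0)
    W1 -= lr * (X.T @ d_h);  b1 -= lr * d_h.sum(0)

accuracy = ((forward(X)[1] > 0.5) == y).mean()
```

Changing any one of the four elements (swapping the dataset, deepening the network, replacing the objective, or using a different optimizer) changes what the trained units end up computing, which is the point made in the text.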
Quentin Gaudry and Jonathan Schenk
Olfactory systems are tasked with converting the chemical environment into electrical signals that the brain can use to optimize behaviors such as navigating towards resources, finding mates, or avoiding danger. Drosophila melanogaster has long served as a model system for several attributes of olfaction. Such features include sensory coding, development, and the attempt to link sensory perception to behavior. The strength of Drosophila as a model system for neurobiology lies in the myriad of genetic tools made available to the experimentalist and, equally importantly, the small number of cells within the olfactory circuit. Modern techniques have recently made it possible to target nearly all cell types in the antennal lobe to directly monitor their physiological activity or to alter their expression of endogenous proteins or transgenes.
Jon H. Kaas
The neocortex is a part of the forebrain of mammals that is an innovation of mammal-like “reptilian” synapsid ancestors of early mammals. This neocortex emerged from a small region of dorsal cortex that was present in earlier ancestors and is still found in the forebrain of present-day reptiles. Instead of the thick, six-layered structure of mammalian neocortex (five layers of cells and one layer of fibers), the dorsal cortex was characterized by a single layer of pyramidal neurons and a scattering of small, largely inhibitory neurons. In reptiles, the dorsal cortex is dominated by visual inputs, with outputs that relate to behavior and memory. The thicker neocortex of six layers in early mammals was already divided into a number of functionally specialized zones called cortical areas that were predominantly sensory in function, while relating to important aspects of motor behavior via subcortical projections. These early sensorimotor areas became modified in various ways as different branches of the mammalian radiation evolved, and neocortex often increased in size and in the number of cortical areas, likely through specialization within areas that led them to subdivide. At least some areas, perhaps most, subdivided in another way by evolving two or more alternating types of small regions of different functional specializations, now referred to as cortical modules or columns. The specializations within and across cortical areas included those in the sizes of neurons and the extents of their processes, the dendrites and axons, and thus connections with other neurons. As a result, the neocortex of present-day mammals varies greatly within and across phylogenetically related groups (clades), while retaining basic features of organization from early ancestral mammals. In a number of present-day (extant) mammals, brains are relatively small and have little neocortex, with few areas and little structural differentiation, thus resembling early mammals.
Other small mammals with little neocortex have specialized some part via selective enlargement and structural modifications to promote certain sensory abilities. Other mammals have a neocortex that is moderately to greatly expanded, with more cortical areas directly related to sensory processing and cognition and memory. The human brain is extreme in this way by having more neocortex in proportion to the rest of the brain, more cortical neurons, and likely more cortical areas.
Tom Baden, Timm Schubert, Philipp Berens, and Thomas Euler
Visual processing begins in the retina—a thin, multilayered neuronal tissue lining the back of the vertebrate eye. The retina does not merely read out the constant stream of photons impinging on its dense array of photoreceptor cells. Instead, it performs a first, extensive analysis of the visual scene, while constantly adapting its sensitivity range to the input statistics, such as the brightness or contrast distribution. The functional organization of the retina abides by several key organizational principles. These include overlapping and repeating instances of both divergence and convergence, constant and dynamic range-adjustments, and (perhaps most importantly) decomposition of image information into parallel channels. This is often referred to as “parallel processing.” To support this, the retina features a large diversity of neurons organized in functionally overlapping microcircuits that typically sample the retinal surface uniformly in a regular mosaic. Ultimately, each circuit drives spike trains in the retina’s output neurons, the retinal ganglion cells. Their axons form the optic nerve to convey multiple, distinctive, and often already heavily processed views of the world to higher visual centers in the brain.
From an experimental point of view, the retina is a neuroscientist’s dream. While part of the central nervous system, the retina is largely self-contained, and depending on the species, it receives little feedback from downstream stages. This means that the tissue can be disconnected from the rest of the brain and studied in a dish for many hours without losing its functional integrity, all while retaining excellent experimental control over the exclusive natural network input: the visual stimulus. Once removed from the eyecup, the retina can be flattened, so its neurons are easily accessed optically or using visually guided electrodes. Retinal tiling means that function studied at any one place can usually be considered representative for the entire tissue. At the same time, species-dependent specializations offer the opportunity to study circuits adapted to different visual tasks: for example, in the case of our fovea, high-acuity vision. Taken together, today the retina is amongst the best understood complex neuronal tissues of the vertebrate brain.
Tatyana O. Sharpee
Sensory systems exist to provide an organism with information about the state of the environment that can be used to guide future actions and decisions. Remarkably, two conceptually simple yet general theorems from information theory can be used to evaluate the performance of any sensory system. One theorem states that there is a minimal amount of energy that an organism has to spend in order to capture a given amount of information about the environment. The second theorem states that the maximum rate with which the organism can acquire resources from the environment, relative to its competitors, is limited by the information this organism collects about the environment, also relative to its competitors.
These two theorems provide a scaffold for formulating and testing general principles of sensory coding but leave unanswered many important practical questions of implementation in neural circuits. These implementation questions have guided thinking in entire subfields of sensory neuroscience, and include: What features in the sensory environment should be measured? Given that we make decisions on a variety of time scales, how should one solve trade-offs between making simpler measurements to guide minimal decisions and building more elaborate sensory systems that must overcome multiple delays between sensation and action? Once we agree on the types of features that are important to represent, how should they be represented? How should resources be allocated between different stages of processing, and where is the impact of noise most damaging? Finally, one should consider trade-offs between implementing a fixed strategy vs. an adaptive scheme that readjusts resources based on current needs. Where adaptation is considered, under what conditions does it become optimal to switch strategies? Research over the past 60 years has provided answers to almost all of these questions but primarily in early sensory systems. Joining these answers into a comprehensive framework is a challenge that will help us understand who we are and how we can make better use of limited natural resources.
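The two theorems alluded to above can be written in standard textbook form. These specific formulations (the Landauer bound and a Kelly-type growth bound) are one standard reading of the claims, not equations taken from the article itself:

```latex
% Energetic cost of information: erasing (or reliably acquiring) one bit
% dissipates at least k_B T \ln 2 of energy (the Landauer bound):
E \;\ge\; k_B T \ln 2 \quad \text{per bit}

% Kelly-type bound: the gain \Delta W in the long-run growth rate of
% resources afforded by side information Y about the environment X is
% limited by their mutual information (logarithms base 2):
\Delta W \;\le\; I(X;Y)
```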
Gerald H. Jacobs
Color is a central feature of human perceptual experience where it functions as a critical component in the detection, identification, evaluation, placement, and appreciation of objects in the visual world. Its role is significantly enhanced by the fact that humans evolved a dimension of color vision beyond that available to most other mammals. Many fellow primates followed a similar path and in recent years the basic mechanisms that support color vision—the opsin genes, photopigments, cone signals, and central processing—have been the subjects of hundreds of investigations. Because of the tight linkage between opsin gene structure and the spectral sensitivity of cone photopigments, it is possible to trace pathways along which color vision may have evolved in primates. In turn, such information allows the development of hypotheses about the nature of color vision and its utility in nonhuman primates. These hypotheses are being critically evaluated in field studies where primates solve visual problems in the presence of the full panoply of photic cues. The intent of this research is to determine which aspects of these cues are critically linked to color vision and how their presence facilitates, impedes, or fails to influence the solutions. These investigations are challenging undertakings and the emerging literature is replete with contradictory conclusions. But steady progress is being made and it appears that (a) some of the original ideas about there being a restricted number of tasks for which color vision might be optimally utilized by nonhuman primates (e.g., fruit harvest) were too simplistic and (b) depending on circumstances that can include both features of proximate visual stimuli (spectral cues, luminance cues, size cues, motion cues, overall light levels) and situational variables (social cues, developmental status, species-specific traits), the utilization of color vision by nonhuman primates is apt to be complex and varied.
Navigation is the ability of animals to move through their environment in a planned manner. Different from directed but reflex-driven movements, it involves the comparison of the animal’s current heading with its intended heading (i.e., the goal direction). When the two angles do not match, a compensatory steering movement must be initiated. This basic scenario can be described as an elementary navigational decision. Many elementary decisions chained together in specific ways form a coherent navigational strategy. With respect to navigational goals, there are four main forms of navigation: explorative navigation (exploring the environment for food, mates, shelter, etc.); homing (returning to a nest); straight-line orientation (getting away from a central place in a straight line); and long-distance migration (seasonal long-range movements to a location such as an overwintering place). The homing behavior of ants and bees has been examined in the most detail. These insects use several strategies to return to their nest after foraging, including path integration, route following, and, potentially, even the exploitation of internal maps. Independent of the strategy used, insects can use global sensory information (e.g., skylight cues), local cues (e.g., visual panorama), and idiothetic (i.e., internal, self-generated) cues to obtain information about their current and intended headings.
How are these processes controlled by the insect brain? While many unanswered questions remain, much progress has been made in recent years in understanding the neural basis of insect navigation. Neural pathways encoding polarized light information (a global navigational cue) target a brain region called the central complex, which is also involved in movement control and steering. Being thus placed at the interface of sensory information processing and motor control, this region has received much attention recently and emerged as the navigational “heart” of the insect brain. It houses an ordered array of head-direction cells that use a wide range of sensory information to encode the current heading of the animal. At the same time, it receives information about the movement speed of the animal and thus is suited to compute the home vector for path integration. With the help of neurons following highly stereotypical projection patterns, the central complex theoretically can perform the comparison of current and intended heading that underlies most navigation processes. Examining the detailed neural circuits responsible for head-direction coding, intended heading representation, and steering initiation in this brain area will likely lead to a solid understanding of the neural basis of insect navigation in the years to come.
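The home-vector computation at the heart of path integration, mentioned above, amounts to summing self-motion vectors over the outbound journey. The sketch below illustrates the geometry; the headings and distances are illustrative stand-ins for the cues an insect would derive from skylight compass and odometry.

```python
import math

def integrate_path(steps):
    """Path integration: accumulate (heading_radians, distance) steps and
    return the home vector as (bearing_radians, length), i.e., the direction
    and distance from the current position back to the starting point."""
    x = y = 0.0
    for heading, dist in steps:
        x += dist * math.cos(heading)
        y += dist * math.sin(heading)
    # The home vector points opposite the net outbound displacement.
    return math.atan2(-y, -x), math.hypot(x, y)

# Illustrative outbound path: 3 m due east, then 4 m due north.
home_bearing, home_distance = integrate_path([(0.0, 3.0), (math.pi / 2, 4.0)])
# home_distance is 5 m (a 3-4-5 triangle); home_bearing points south-west.
```

In the insect brain, the running sum is thought to be carried by central-complex neurons combining head-direction signals with speed information, rather than by explicit Cartesian coordinates, but the computation being performed is equivalent.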
Synaptic connections in the brain can change their strength in response to patterned activity. This ability of synapses is defined as synaptic plasticity. Long-lasting forms of synaptic plasticity, long-term potentiation (LTP) and long-term depression (LTD), are thought to mediate the storage of information about stimuli or features of stimuli in a neural circuit. Since its discovery in the early 1970s, synaptic plasticity has become a central subject of neuroscience, and many studies have centered on understanding its mechanisms, as well as its functional implications.
Many mammals, including humans, rely primarily on vision to sense the environment. While a large proportion of the brain is devoted to vision in highly visual animals, there are not enough neurons in the visual system to support a neuron-per-object look-up table. Instead, visual animals evolved ways to rapidly and dynamically encode an enormous diversity of visual information using minimal numbers of neurons (merely hundreds of millions of neurons and billions of connections!). In the mammalian visual system, a visual image is essentially broken down into simple elements that are reconstructed through a series of processing stages, most of which occur beneath consciousness. Importantly, visual information processing is not simply a serial progression along the hierarchy of visual brain structures (e.g., retina to visual thalamus to primary visual cortex to secondary visual cortex, etc.). Instead, connections within and between visual brain structures exist in all possible directions: feedforward, feedback, and lateral. Additionally, many mammalian visual systems are organized into parallel channels, presumably to enable efficient processing of information about different and important features in the visual environment (e.g., color, motion). The overall operations of the mammalian visual system are to: (1) combine unique groups of feature detectors in order to generate object representations and (2) integrate visual sensory information with cognitive and contextual information from the rest of the brain. Together, these operations enable individuals to perceive, plan, and act within their environment.
Tyler S. Manning and Kenneth H. Britten
The ability to see motion is critical to survival in a dynamic world. Decades of physiological research have established that motion perception is a distinct sub-modality of vision supported by a network of specialized structures in the nervous system. These structures are arranged hierarchically according to the spatial scale of the calculations they perform, with more local operations preceding those that are more global. The different operations serve distinct purposes, from the interception of small moving objects to the calculation of self-motion from image motion spanning the entire visual field. Each cortical area in the hierarchy has an independent representation of visual motion. These representations, together with computational accounts of their roles, provide clues to the functions of each area. Comparisons between neural activity in these areas and psychophysical performance can identify which representations are sufficient to support motion perception. Experimental manipulation of this activity can also define which areas are necessary for motion-dependent behaviors like self-motion guidance.