Navigation is the ability of animals to move through their environment in a planned manner. Unlike directed but reflex-driven movements, it involves the comparison of the animal’s current heading with its intended heading (i.e., the goal direction). When the two angles do not match, a compensatory steering movement must be initiated. This basic scenario can be described as an elementary navigational decision. Many elementary decisions chained together in specific ways form a coherent navigational strategy. With respect to navigational goals, there are four main forms of navigation: explorative navigation (exploring the environment for food, mates, shelter, etc.); homing (returning to a nest); straight-line orientation (getting away from a central place in a straight line); and long-distance migration (seasonal long-range movements to a location such as an overwintering place). The homing behavior of ants and bees has been examined in the most detail. These insects use several strategies to return to their nest after foraging, including path integration, route following, and, potentially, even the exploitation of internal maps. Independent of the strategy used, insects can use global sensory information (e.g., skylight cues), local cues (e.g., visual panorama), and idiothetic (i.e., internal, self-generated) cues to obtain information about their current and intended headings.
How are these processes controlled by the insect brain? While many unanswered questions remain, much progress has been made in recent years in understanding the neural basis of insect navigation. Neural pathways encoding polarized light information (a global navigational cue) target a brain region called the central complex, which is also involved in movement control and steering. Being thus placed at the interface of sensory information processing and motor control, this region has received much attention recently and emerged as the navigational “heart” of the insect brain. It houses an ordered array of head-direction cells that use a wide range of sensory information to encode the current heading of the animal. At the same time, it receives information about the movement speed of the animal and thus is suited to compute the home vector for path integration. With the help of neurons following highly stereotypical projection patterns, the central complex theoretically can perform the comparison of current and intended heading that underlies most navigation processes. Examining the detailed neural circuits responsible for head-direction coding, intended heading representation, and steering initiation in this brain area will likely lead to a solid understanding of the neural basis of insect navigation in the years to come.
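The two computations described above, the elementary navigational decision (compare current and intended headings, then steer to reduce the mismatch) and path integration (speed-weighted accumulation of headings into a home vector), can be sketched in a few lines. This is a didactic abstraction, not a circuit model; all function names and the proportional steering gain are our own illustrative choices:

```python
import math

def signed_angle_diff(goal, heading):
    """Smallest signed difference between two angles (radians), in (-pi, pi]."""
    return (goal - heading + math.pi) % (2 * math.pi) - math.pi

def steering_command(goal, heading, gain=1.0):
    """Elementary navigational decision: turn toward the goal direction.

    Positive output = turn counterclockwise; zero when headings match.
    """
    return gain * signed_angle_diff(goal, heading)

def home_vector(steps):
    """Path integration: accumulate (heading, speed) samples over an outbound path.

    Returns (direction, distance) of the vector pointing back to the start.
    """
    x = sum(speed * math.cos(h) for h, speed in steps)
    y = sum(speed * math.sin(h) for h, speed in steps)
    return math.atan2(-y, -x), math.hypot(x, y)
```

For example, an outbound path of 3 units east followed by 4 units north yields a home vector of length 5 pointing back toward the southwest.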
Synaptic connections in the brain can change their strength in response to patterned activity. This ability of synapses is defined as synaptic plasticity. Long-lasting forms of synaptic plasticity, long-term potentiation (LTP) and long-term depression (LTD), are thought to mediate the storage of information about stimuli or features of stimuli in a neural circuit. Since its discovery in the early 1970s, synaptic plasticity has become a central subject of neuroscience, and many studies have centered on understanding its mechanisms, as well as its functional implications.
Many mammals, including humans, rely primarily on vision to sense the environment. While a large proportion of the brain is devoted to vision in highly visual animals, there are not enough neurons in the visual system to support a neuron-per-object look-up table. Instead, visual animals evolved ways to rapidly and dynamically encode an enormous diversity of visual information using minimal numbers of neurons (merely hundreds of millions of neurons and billions of connections!). In the mammalian visual system, a visual image is essentially broken down into simple elements and then reconstructed through a series of processing stages, most of which occur beneath consciousness. Importantly, visual information processing is not simply a serial progression along the hierarchy of visual brain structures (e.g., retina to visual thalamus to primary visual cortex to secondary visual cortex, etc.). Instead, connections within and between visual brain structures exist in all possible directions: feedforward, feedback, and lateral. Additionally, many mammalian visual systems are organized into parallel channels, presumably to enable efficient processing of information about different and important features in the visual environment (e.g., color, motion). The overall operations of the mammalian visual system are to: (1) combine unique groups of feature detectors in order to generate object representations and (2) integrate visual sensory information with cognitive and contextual information from the rest of the brain. Together, these operations enable individuals to perceive, plan, and act within their environment.
Tyler S. Manning and Kenneth H. Britten
The ability to see motion is critical to survival in a dynamic world. Decades of physiological research have established that motion perception is a distinct sub-modality of vision supported by a network of specialized structures in the nervous system. These structures are arranged hierarchically according to the spatial scale of the calculations they perform, with more local operations preceding those that are more global. The different operations serve distinct purposes, from the interception of small moving objects to the calculation of self-motion from image motion spanning the entire visual field. Each cortical area in the hierarchy has an independent representation of visual motion. These representations, together with computational accounts of their roles, provide clues to the functions of each area. Comparisons between neural activity in these areas and psychophysical performance can identify which representations are sufficient to support motion perception. Experimental manipulation of this activity can also define which areas are necessary for motion-dependent behaviors like self-motion guidance.
Kathleen E. Cullen
As we go about our everyday activities, our brain computes accurate estimates of both our motion relative to the world and our orientation relative to gravity. Essential to this computation is the information provided by the vestibular system; it detects the rotational velocity and linear acceleration of our heads relative to space, making a fundamental contribution to our perception of self-motion and spatial orientation. Additionally, in everyday life, our perception of self-motion depends on the integration of both vestibular and nonvestibular cues, including visual and proprioceptive information. Furthermore, the integration of motor-related information is also required for perceptual stability, so that the brain can distinguish whether the experienced sensory inflow was a result of active self-motion through the world or whether it was instead externally generated. To date, understanding how the brain encodes and integrates sensory cues with motor signals for the perception of self-motion during natural behaviors remains a major goal in neuroscience. Recent experiments have (i) provided new insights into the neural code used to represent sensory information in vestibular pathways, (ii) established that vestibular pathways are inherently multimodal at the earliest stages of processing, and (iii) revealed that self-motion information processing is adjusted to meet the needs of specific tasks. This article reviews our current understanding of how the brain integrates sensory information and motor-related signals to encode self-motion and ensure perceptual stability during everyday activities.
Asymmetry of bilateral visual and auditory sensors has functional advantages for depth visual perception and localization of auditory signals, respectively. In order to detect the spatial distribution of an odor, bilateral olfactory organs may compare side differences of odor intensity and timing by using a simultaneous sampling mechanism; alternatively, they may use a sequential sampling mechanism to compare spatial and temporal input detected by one or several chemosensors. Extensive research on strategies and mechanisms necessary for odor source localization has been focused mainly on invertebrates. Several recent studies in mammals such as moles, rodents, and humans suggest that there is an evolutionary advantage in using stereo olfaction for successful navigation towards an odor source. Smelling in stereo or a three-dimensional olfactory space may significantly reduce the time to locate an odor source; this quality provides instantaneous information for both foraging and predator avoidance. However, since mammals are capable of finding odor sources and tracking odor trails with one sensor side blocked, they may use an intriguing temporal mechanism to compare odor concentration from sniff to sniff. A particular focus of this article is the difference between insects and mammals in the use of unilateral versus bilateral chemosensors for odor source localization.
Justin D. Lieber and Sliman J. Bensmaia
The ability to identify tactile objects depends in part on the perception of their surface microstructure and material properties. Texture perception can, on a first approximation, be described by a number of nameable perceptual axes, such as rough/smooth, hard/soft, sticky/slippery, and warm/cool, which exist within a complex perceptual space. The perception of texture relies on two different neural streams of information: Coarser features, measured in millimeters, are primarily encoded by spatial patterns of activity across one population of tactile nerve fibers, while finer features, down to the micron level, are encoded by finely timed temporal patterns within two other populations of afferents. These two streams of information ascend the somatosensory neuraxis and are eventually combined and further elaborated in the cortex to yield a high-dimensional representation that accounts for our exquisite and stable perception of texture.
Understanding the principles by which sensory systems represent natural stimuli is one of the holy grails of neuroscience. In the auditory system, the study of the coding of natural sounds has particular prominence. Indeed, the relationships between neural responses to the simple stimuli (usually pure tone bursts) often used to characterize auditory neurons and responses to complex sounds (in particular natural sounds) may be far from straightforward. Many different classes of natural sounds have been used to study the auditory system. Sound families that researchers have used to good effect in this endeavor include human speech, species-specific vocalizations, an “acoustic biotope” selected in one way or another, and sets of artificial sounds that mimic important features of natural sounds.
Peripheral and brainstem representations of natural sounds are relatively well understood. The properties of the peripheral auditory system play a dominant role, and further processing occurs mostly within the frequency channels determined by these properties. At the level of the inferior colliculus, the highest brainstem station, representational complexity increases substantially due to the convergence of multiple processing streams. Undoubtedly, the most explored part of the auditory system, in terms of responses to natural sounds, is the primary auditory cortex. In spite of over 50 years of research, there is still no commonly accepted view of the nature of the population code for natural sounds in the auditory cortex. Neurons in the auditory cortex are believed by some to be primarily linear spectro-temporal filters, by others to respond to conjunctions of important sound features, or even to encode perceptual concepts such as “auditory objects.” Whatever the exact mechanism is, many studies consistently report a substantial increase in the variability of the response patterns of cortical neurons to natural sounds. The generation of such variation may be the main contribution of auditory cortex to the coding of natural sounds.
Judith M. Ford, Holly K. Hamilton, and Alison Boos
Auditory verbal hallucinations (AVH), also referred to as “hearing voices,” are vivid perceptions of speech that occur in the absence of any corresponding external stimulus but seem very real to the voice hearer. They are experienced by the majority of people with schizophrenia, less frequently in other psychiatric and neurological conditions, and are relatively rare in the general population. Because antipsychotic medications are not always successful in reducing the severity or frequency of AVH, a better understanding is needed of their neurobiological basis, which may ultimately lead to more precise treatment targets.
What voices say and how the voices sound, or their phenomenology, varies widely within and across groups of people who hear them. In help-seeking populations, such as people with schizophrenia, the voices tend to be threatening and menacing, typically spoken in a non-self-voice, often commenting and sometimes commanding the voice hearers to do things they would not otherwise do. In psychotic populations, voices differ from normal inner speech by being unbidden and unintended, co-opting the voice hearer’s attention. In healthy voice-hearing populations, voices are typically neither distressing nor disabling, and are sometimes comforting and reassuring. Regardless of content and valence, voices tend to activate some speech and language areas of the brain. Efforts to silence these brain areas with neurostimulation have had mixed success in reducing the frequency and salience of voices. Progress with this treatment approach would likely benefit from more precise anatomical targets and more precisely dosed neurostimulation.
Neural mechanisms that may underpin the experience of voices are being actively investigated and include mechanisms enabling context-based predictions and distinctions between experiences coming from self and other. Both these mechanisms can be studied in non-human animal “models” and both can provide new anatomical targets for neurostimulation.
Much progress has been made in unraveling the mechanisms that underlie the transition from acute to chronic pain. Traditional beliefs are being replaced by novel, more powerful concepts that consider the mutual interplay of neuronal and non-neuronal cells in the nervous system during the pathogenesis of chronic pain. The new focus is on the role of neuroinflammation for neuroplasticity in nociceptive pathways and for the generation, amplification, and mislocation of pain. The latest insights are reviewed here and provide a basis for understanding the interdependence of chronic pain and its comorbidities. The new concepts will guide the search for future therapies to prevent and reverse chronic pain.
Long-term changes in the properties and functions of nerve cells, including changes in synaptic strength, membrane excitability, and the effects of inhibitory neurotransmitters, can result from a wide variety of conditions. In the nociceptive system, painful stimuli, peripheral inflammation, nerve injuries, the use of or withdrawal from opioids—all can lead to enhanced pain sensitivity, to the generation of pain, and/or to the spread of pain to unaffected sites of the body. Non-neuronal cells, especially microglia and astrocytes, contribute to changes in nociceptive processing. Recent studies revealed not only that glial cells support neuroplasticity but also that their activation can trigger long-term changes in the nociceptive system.
Tamar Makin and Plasticity Lab London
Phantom sensations are experienced by almost every person who has lost their hand in adulthood. This mysterious phenomenon spans the full range of bodily sensations, including the sense of touch, temperature, movement, and even the sense of wetness. For a majority of upper-limb amputees, these sensations will also be at times unpleasant, painful, and for some even excruciating to the point of debilitating, causing a serious clinical problem termed phantom limb pain (PLP). Considering that the sensory organs (the receptors in the skin, muscle, or tendon) are physically missing, the origins of phantom sensations and pain must be sought at the level of the nervous system, and the brain in particular. This raises the question of what happens to a fully developed part of the brain that becomes functionally redundant (e.g., the sensorimotor hand area after arm amputation). Relatedly, what happens to the brain representation of a body part that becomes overused (e.g., the intact hand, on which most amputees heavily rely for completing daily tasks)? Classical studies in animals show that the brain territory in primary somatosensory cortex (S1) that was “freed up” due to input loss (hereafter deprivation) becomes activated by other body part representations, those neighboring the deprived cortex.
If neural resources in the deprived hand area get redistributed to facilitate the representation of other body parts following amputation, how does this process relate to persistent phantom sensation arising from the amputated hand? Subsequent work in humans, mostly with noninvasive neuroimaging and brain stimulation techniques, has expanded on the initial observations of cortical remapping in two important ways. First, research with humans allows us to study the perceptual consequences of remapping, particularly with regard to phantom sensations and pain. Second, by considering the various compensatory strategies amputees adopt in order to account for their disability, including overuse of their intact hand and learning to use an artificial limb, use-dependent plasticity can also be studied in amputees, as well as its relationship to deprivation-triggered plasticity. Both of these topics are of great clinical value, as they could inform clinicians how to treat PLP, and how to facilitate rehabilitation and prosthesis usage in particular. Moreover, research in humans provides new insight into the role of remapping and persistent representation in facilitating (or hindering) the realization of emerging technologies for artificial limb devices, with special emphasis on the role of embodiment. Together, this research affords a more comprehensive outlook on the functional consequences of cortical remapping in amputees’ primary sensorimotor cortex.
Color perception in macaque monkeys and humans depends on the visually evoked activity in three cone photoreceptors and on neuronal post-processing of cone signals. Neuronal post-processing of cone signals occurs in two stages in the pathway from retina to the primary visual cortex. The first stage, in P (midget) ganglion cells in the retina, is a single-opponent subtractive comparison of the cone signals. The single-opponent computation is then sent to neurons in the parvocellular layers of the lateral geniculate nucleus (LGN), the main visual nucleus of the thalamus. The second stage of processing of color-related signals is in the primary visual cortex, V1, where multiple comparisons of the single-opponent signals are made. The diversity of neuronal interactions in V1 causes the cortical color cells to be subdivided into classes of single-opponent cells and double-opponent cells. Double-opponent cells have visual properties that can be used to explain most of the phenomenology of color perception of surface colors; they respond best to color edges and spatial patterns of color. Single-opponent cells, in retina, LGN, and V1, respond to color modulation over their receptive fields and respond best to color modulation over a large area in the visual field.
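The two processing stages can be caricatured as linear operations: a single-opponent cell subtracts one cone signal from another, while a double-opponent cell opposes single-opponent signals across space, so it responds to chromatic edges but not to spatially uniform color. The sketch below is a deliberately simplified linear caricature, not a model of real receptive fields; the function names and cone values are our own:

```python
def single_opponent(l_cone, m_cone):
    """Stage 1: subtractive (L - M) comparison of cone signals."""
    return l_cone - m_cone

def double_opponent(center, surround):
    """Stage 2 (one V1 cell class): spatially opposed single-opponent signals.

    center and surround are (L, M) cone activations; the response is large
    at a color edge and zero over a spatially uniform color.
    """
    return single_opponent(*center) - single_opponent(*surround)
```

A uniform reddish field (same cone activations in center and surround) produces no double-opponent response, whereas a red/green edge produces a strong one, capturing why these cells suit the perception of surface colors and color boundaries.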
Mindaugas Mitkus, Simon Potier, Graham R. Martin, Olivier Duriez, and Almut Kelber
Diurnal raptors (birds of the orders Accipitriformes and Falconiformes), renowned for their extraordinarily sharp eyesight, have fascinated humans for centuries. The high visual acuity in some raptor species is possible due to their large eyes, both in relative and absolute terms, and a high density of cone photoreceptors. Some large raptors, such as wedge-tailed eagles and the Old World vultures, have visual acuities twice as high as those of humans and six times as high as those of ostriches—the animals with the largest terrestrial eyes. The raptor retina has rods, double cones, and four spectral types of single cones. The highest density of single cones occurs in one or two specialized retinal regions: the foveae, where, at least in some species, rods and double cones are absent. The deep central fovea allows for the highest acuity in the lateral visual field, which is probably used for detecting prey from a large distance. Pursuit-hunting raptors have a second, shallower, temporal fovea that allows for sharp vision in the frontal field of view. Scavenging carrion eaters do not possess a temporal fovea, which may indicate different needs in foraging behavior. Moreover, pursuit-hunting and scavenging raptors also differ in the configuration of their visual fields, with a more extensive field of view in scavengers.
The eyes of diurnal raptors, unlike those of most other birds, are not very sensitive to ultraviolet light, which is strongly absorbed by their cornea and lens. As a result of the low density of rods, and the narrow and densely packed single cones in the central fovea, the visual performance of diurnal raptors drops dramatically as light levels decrease. These and other visual properties underpin prey detection and pursuit and show how these birds’ vision is adapted to make them successful diurnal predators.
Thomas F. Mathejczyk and Mathias F. Wernet
Evolution has produced vast morphological and behavioral diversity amongst insects, including very successful adaptations to a diverse range of ecological niches spanning the invasion of the sky by flying insects, the crawling lifestyle on (or below) the earth, and the (semi-)aquatic life on (or below) the water surface. Developing the ability to extract a maximal amount of useful information from their environment was crucial for ensuring the survival of many insect species. Navigating insects rely heavily on a combination of different visual and non-visual cues to reliably orient under a wide spectrum of environmental conditions while avoiding predators. The pattern of linearly polarized skylight that results from scattering of sunlight in the atmosphere is one important navigational cue that many insects can detect. Here we summarize progress made toward understanding how different insect species sense polarized light. First, we present behavioral studies with “true” insect navigators (central-place foragers, like honeybees or desert ants), as well as insects that rely on polarized light to improve more “basic” orientation skills (like dung beetles). Second, we provide an overview of the anatomical basis of the polarized light detection system that these insects use, as well as the underlying neural circuitry. Third, we emphasize the importance of physiological studies (electrophysiology, as well as genetically encoded activity indicators, in Drosophila) for understanding both the structure and function of polarized light circuitry in the insect brain. We also discuss the importance of an alternative source of polarized light that can be detected by many insects: linearly polarized light reflected off shiny surfaces like water represents an important environmental factor, yet the anatomy and physiology of the underlying circuits remain incompletely understood.
Mathew H. Evans, Michaela S.E. Loft, Dario Campagner, and Rasmus S. Petersen
Whiskers (vibrissae) are prominent on the snout of many mammals, both terrestrial and aquatic. The defining feature of whiskers is that they are rooted in large follicles with dense sensory innervation, surrounded by doughnut-shaped blood sinuses. Some species, including rats and mice, have elaborate muscular control of their whiskers and explore their environment by making rhythmic back-and-forth “whisking” movements. Whisking movements are purposefully modulated according to specific behavioral goals (“active sensing”). The basic whisking rhythm is controlled by a premotor complex in the intermediate reticular formation.
Primary whisker neurons (PWNs), with cell bodies in the trigeminal ganglion, innervate several classes of mechanoreceptive nerve endings in the whisker follicle. Mechanotransduction involving Piezo2 ion channels establishes the fundamental physical signals that the whiskers communicate to the brain. PWN spikes are triggered by mechanical forces associated with both the whisking motion itself and whisker-object contact. Whisking is associated with inertial and muscle contraction forces that drive PWN activity. Whisker-object contact causes whiskers to bend, and PWN activity is driven primarily by the associated rotatory force (“bending moment”).
Sensory signals from the PWNs are routed to many parts of the hindbrain, midbrain, and forebrain. Parallel ascending pathways transmit information about whisker forces to sensorimotor cortex. At each brainstem, thalamic, and cortical level of these pathways, there are one or more maps of the whisker array, consisting of cell clusters (“barrels” in the primary somatosensory cortex) whose spatial arrangement precisely mirrors that of the whiskers on the snout. However, the overall architecture of the whisker-responsive regions of the brain is best characterized by multilevel sensory-motor feedback loops. The whisker system’s intriguing biology, in combination with its advantageous properties as a model sensory system, has made it a platform for seminal insights into brain function.
Yeonjoo Yoo and Fabrizio Gabbiani
Computational modeling is essential to understand how the complex dendritic structure and membrane properties of a neuron process input signals to generate output signals. Compartmental models describe how inputs, such as synaptic currents, affect a neuron’s membrane potential and produce outputs, such as action potentials, by converting membrane properties into the components of an electrical circuit. The simplest such model consists of a single compartment with a leakage conductance; it represents a neuron with a spatially uniform membrane potential and a constant conductance summarizing the combined effect of every ion flowing across the neuron’s membrane. The Hodgkin-Huxley model introduces two additional active channels: the sodium channel and the delayed rectifier potassium channel, whose associated conductances change depending on the membrane potential and are described by an additional set of three nonlinear differential equations. Since the model’s inception in 1952, many kinds of active channels have been discovered with a variety of characteristics that can successfully be modeled within the same framework. As the membrane potential varies spatially in a neuron, the next refinement consists in describing a neuron as an electric cable to account for membrane potential attenuation and signal propagation along dendritic or axonal processes. A discrete version of the cable equation results in compartments with possibly different properties, such as different types of ion channels or spatially varying maximum conductances to model changes in channel densities. Branching neural processes such as dendrites can be modeled with the cable equation by considering the junctions of cables with different radii and electrical properties. Single-neuron computational models are used to investigate a variety of topics and reveal insights that cannot be evidenced directly by experimental observation.
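The single-compartment leak model reduces to a first-order differential equation, c_m dV/dt = -g_leak (V - E_leak) + I_inj, which can be integrated with a simple forward-Euler scheme. The sketch below uses arbitrary illustrative parameter values (in units consistent with ms and mV); the function and parameter names are our own:

```python
def simulate_membrane(i_inj, dt=0.1, c_m=1.0, g_leak=0.1, e_leak=-70.0):
    """Forward-Euler integration of a single-compartment leak model:

        c_m * dV/dt = -g_leak * (V - e_leak) + I_inj

    i_inj is a sequence of injected-current samples, one per time step dt.
    Returns the membrane potential trace, starting from rest (e_leak).
    """
    v = e_leak
    trace = []
    for i in i_inj:
        dv = (-g_leak * (v - e_leak) + i) / c_m  # membrane equation
        v += dt * dv                             # Euler step
        trace.append(v)
    return trace
```

With a constant injected current I, the potential relaxes with time constant c_m/g_leak toward the steady state E_leak + I/g_leak; adding Hodgkin-Huxley-style voltage-dependent conductances amounts to extra current terms in the same equation, each with its own gating differential equations.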
Studies on action potential initiation and on synaptic integration provide prototypical examples illustrating why computational models are essential. Modeling action potential initiation constrains the localization and density of channels required to reproduce experimental observations, while modeling synaptic integration sheds light on the interaction between the morphological and physiological characteristics of dendrites. Finally, reduced compartmental models demonstrate how a simplified morphological structure supplemented by a small number of ion channel-related variables can provide clear explanations about complex intracellular membrane potential dynamics.
Corinna Darian-Smith and Karen Fisher
Spinal cord injury (SCI) affects well over a million people in the United States alone, and its personal and societal costs are huge. This article provides a current overview of the organization of somatosensory and motor pathways, in the context of hand/paw function in nonhuman primate and rodent models of SCI. Despite decades of basic research and clinical trials, therapeutic options remain limited. This is largely due to the fact that (i) spinal cord structure and function are very complex and still poorly understood, (ii) there are many species differences, which can make translation from the rodent to the primate difficult, and (iii) we are still some way from determining the detailed multilevel pathway responses affecting recovery. There has also been little focus, until recently, on the sensory pathways involved in SCI and recovery, which are so critical to hand function and the recovery process. The potential for recovery in any individual depends on many factors, including the location and size of the injury, the extent of sparing of fiber tracts, and the post-injury inflammatory response. There is also a progression of change over the first weeks and months that must be taken into account when assessing recovery. There are currently no good biomarkers of recovery, and while axon terminal sprouting is frequently used in the experimental setting as an indicator of circuit remodeling and “recovery,” the correlation between sprouting and functional recovery deserves scrutiny.
Andrew J. Parker
Humans and some animals can use their two eyes in cooperation to detect and discriminate parts of the visual scene based on depth. Owing to the horizontal separation of the eyes, each eye obtains a slightly different view of the scene in front of the head. These small differences are processed by the nervous system to generate a sense of binocular depth. As humans, we experience an impression of solidity that is fully three-dimensional; this impression is called stereopsis and is what we appreciate when we watch a 3D movie or look into a stereoscopic viewer. While the basic perceptual phenomena of stereoscopic vision have been known for some time, it is mainly within the last 50 years that we have gained an understanding of how the nervous system delivers this sense of depth. This period of research began with the identification of neuronal signals for binocular depth in the primary visual cortex. Building on that finding, subsequent work has traced the signaling pathways for binocular stereoscopic depth forward into extrastriate cortex and further on into cortical areas concerned with sensorimotor integration. Within these pathways, neurons acquire sensitivity to more complex, higher-order aspects of stereoscopic depth. Signals relating to the relative depth of visual features can be identified in the extrastriate cortex; this form of selectivity is not found in the primary visual cortex. Over the same time period, knowledge of the organization of binocular vision in animals that inhabit a wide diversity of ecological niches has substantially increased. The implications of these findings for developmental and adult plasticity of the visual nervous system and for the onset of the clinical condition of amblyopia are explored in this article. Amblyopic vision is associated with a cluster of different visual and oculomotor symptoms, but the loss of high-quality stereoscopic depth performance is one of the consistent clinical features.
Understanding where and how those losses occur in the visual brain is an important goal of current research, for both scientific and clinical reasons.
Jose M. Alonso and Harvey A. Swadlow
The thalamocortical pathway is the main route of sensory information to the cerebral cortex. Vision, touch, hearing, taste, and balance all depend on the integrity of this pathway that connects the thalamic structures receiving sensory input with the cortical areas specialized in each sensory modality. Only the ancient sense of smell is independent of the thalamus, gaining access to cortex through more anterior routes. While the thalamocortical pathway targets different layers of the cerebral cortex, its main stream projects to the middle layers and has axon terminals that are dense, spatially restricted, and highly specific in their connections. The remarkable specificity of these thalamocortical connections allows for a precise reconstruction of the sensory dimensions that need to be most finely sampled, such as spatial acuity in vision and sound frequency in hearing. The thalamic axon terminals also segregate topographically according to their stimulus preferences, providing a simple principle to build cortical sensory maps: neighboring values in sensory space are represented by neighboring points within the cortex.
Thalamocortical processing is not static. It is continuously modulated by the brain stem and corticothalamic feedback based on the level of attention and alertness, and during sleep or general anesthesia. When alert, visual thalamic responses become stronger, more reliable, more sustained, more effective at sampling fast changes in the scene, and more linearly related to the stimulus. The high firing rates of the alert state make thalamocortical synapses chronically depressed and excitatory synaptic potentials less dependent on temporal history, improving even further the linear relation between stimulus and response. In turn, when alertness wanes, the thalamus reduces its firing rate, and starts generating spike bursts that drive large postsynaptic responses and keep the cortex responsive to sudden stimulus changes.
Susan C. P. Renn and Nadia Aubin-Horth
Several species show diversity in reproductive patterns that result from phenotypic plasticity. This reproductive plasticity is found, for example, in mate choice, parental care, reproduction suppression, reproductive tactics, sex role, and sex reversal. Studying the genome-wide changes in transcription that are associated with these plastic phenotypes will help answer several questions, including those regarding which genes are expressed and where they are expressed when an individual is faced with a reproductive choice, as well as those regarding whether males and females have the same brain genomic signature when they express the same behaviors, or if they activate sex-specific molecular pathways to output similar behavioral responses. The comparative approach of studying transcription in a wide array of species allows us to uncover genes, pathways, and biological functions that are repeatedly co-opted (“genetic toolkit”) as well as those that are unique to a particular system (“genomic signature”). Additionally, quantifying the transcriptome, a labile trait, in time series has the potential to uncover the causes and consequences of expressing one plastic phenotype or another. There are of course gaps in our knowledge of reproductive plasticity, but no shortage of possibilities for future directions.