James S.H. Wong and Catharine H. Rankin
The nematode Caenorhabditis elegans (C. elegans) is a useful organism for studying learning and memory at the molecular, cellular, neural circuitry, and behavioral levels. Its genetic tractability, transparency, connectome, and accessibility for in vivo cellular and molecular analyses are a few of the characteristics that make it such a powerful system for investigating mechanisms of learning and memory. It can learn and remember across many sensory modalities, including mechanosensation, chemosensation, thermosensation, oxygen sensing, and carbon dioxide sensing. C. elegans habituates to mechanosensory stimuli and shows short-, intermediate-, and long-term memory, as well as context conditioning, for mechanosensory habituation. The organism also displays chemotaxis to various chemicals, such as diacetyl and sodium chloride, a behavior associated with several forms of learning, including state-dependent learning, classical conditioning, and aversive learning. C. elegans also shows thermotactic learning, in which it learns to associate a particular temperature with the presence or absence of food. In addition, both oxygen preference and carbon dioxide avoidance in C. elegans can be altered by experience, indicating that the animals retain a memory of the oxygen or carbon dioxide environment in which they were reared.
Many of the genes found to underlie learning and memory in C. elegans are homologous to genes involved in learning and memory in mammals; two examples are crh-1, the C. elegans homolog of the cAMP response element-binding protein (CREB), and glr-1, which encodes an AMPA-type glutamate receptor subunit. Both of these genes are involved in long-term memory for tap habituation, context conditioning in tap habituation, and chemosensory classical conditioning. C. elegans offers the advantage of having a very small nervous system (302 neurons), making it possible to understand what these conserved genes are doing at the level of single identified neurons. As many mechanisms of learning and memory in C. elegans appear to be similar to those in more complex organisms, including humans, research with C. elegans aids our ever-growing understanding of the fundamental mechanisms of learning and memory across the animal kingdom.
Tim C. Kietzmann, Patrick McClure, and Nikolaus Kriegeskorte
The goal of computational neuroscience is to find mechanistic explanations of how the nervous system processes information to give rise to cognitive function and behavior. At the heart of the field are its models, that is, mathematical and computational descriptions of the system being studied, which map sensory stimuli to neural responses and/or neural responses to behavior. These models range from simple to complex. Recently, deep neural networks (DNNs) have come to dominate several domains of artificial intelligence (AI). As the term “neural network” suggests, these models are inspired by biological brains. However, current DNNs neglect many details of biological neural networks. These simplifications contribute to their computational efficiency, enabling them to perform complex feats of intelligence, ranging from perceptual tasks (e.g., visual object and auditory speech recognition) to cognitive tasks (e.g., machine translation), and on to motor control (e.g., playing computer games or controlling a robot arm). In addition to their ability to model complex intelligent behaviors, DNNs excel at predicting neural responses to novel sensory stimuli with accuracies well beyond any other currently available model type. DNNs can have millions of parameters, which are required to capture the domain knowledge needed for successful task performance. Contrary to the intuition that this renders them impenetrable black boxes, the computational properties of the network units are the result of four directly manipulable elements: input statistics, network structure, functional objective, and learning algorithm. With full access to the activity and connectivity of all units, advanced visualization techniques, and analytic tools to map network representations to neural data, DNNs represent a powerful framework for building task-performing models and will drive substantial insights in computational neuroscience.
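The four manipulable elements named above can be made concrete in a minimal sketch. The toy task (XOR), architecture, and hyperparameters below are illustrative assumptions for exposition, not any model from the chapter:

```python
import numpy as np

rng = np.random.default_rng(0)

# (1) Input statistics: a toy stimulus set (the XOR problem).
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

# (2) Network structure: 2 inputs -> 4 tanh hidden units -> 1 sigmoid output.
W1 = rng.normal(0.0, 1.0, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0.0, 1.0, (4, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)                     # hidden "neural responses"
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # "behavioral" output
    return h, out

# (3) Functional objective: mean squared error on the task.
def loss(out):
    return float(np.mean((out - y) ** 2))

# (4) Learning algorithm: gradient descent via backpropagation.
lr = 0.5
for _ in range(5000):
    h, out = forward(X)
    d_out = 2.0 * (out - y) * out * (1.0 - out) / len(X)  # grad at output preactivation
    d_h = (d_out @ W2.T) * (1.0 - h ** 2)                 # backpropagated to hidden layer
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);  b1 -= lr * d_h.sum(axis=0)

final_loss = loss(forward(X)[1])
```

Changing any one of the four elements, while holding the others fixed, yields a different trained model, which is what makes these networks analyzable rather than opaque.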
Kathleen E. Cullen
As we go about our everyday activities, our brain computes accurate estimates of both our motion relative to the world and our orientation relative to gravity. Essential to this computation is the information provided by the vestibular system; it detects the rotational velocity and linear acceleration of our heads relative to space, making a fundamental contribution to our perception of self-motion and spatial orientation. Additionally, in everyday life, our perception of self-motion depends on the integration of both vestibular and nonvestibular cues, including visual and proprioceptive information. Furthermore, the integration of motor-related information is also required for perceptual stability, so that the brain can distinguish whether the experienced sensory inflow was a result of active self-motion through the world or was instead externally generated. To date, understanding how the brain encodes and integrates sensory cues with motor signals for the perception of self-motion during natural behaviors remains a major goal in neuroscience. Recent experiments have (i) provided new insights into the neural code used to represent sensory information in vestibular pathways, (ii) established that vestibular pathways are inherently multimodal at the earliest stages of processing, and (iii) revealed that self-motion information processing is adjusted to meet the needs of specific tasks. Here, our current understanding of how the brain integrates sensory information and motor-related signals to encode self-motion and ensure perceptual stability during everyday activities is reviewed.
Anitha Pasupathy, Yasmine El-Shamayleh, and Dina V. Popovkina
Humans and other primates rely on vision. Our visual system endows us with the ability to perceive, recognize, and manipulate objects, to avoid obstacles and dangers, to choose foods appropriate for consumption, to read text, and to interpret facial expressions in social interactions. To support these visual functions, the primate brain captures a high-resolution image of the world in the retina and, through a series of intricate operations in the cerebral cortex, transforms this representation into a percept that reflects the physical characteristics of objects and surfaces in the environment. To construct a reliable and informative percept, the visual system discounts the influence of extraneous factors such as illumination, occlusions, and viewing conditions. This perceptual “invariance” can be thought of as the brain’s solution to an inverse inference problem in which the physical factors that gave rise to the retinal image are estimated. While the processes of perception and recognition seem fast and effortless, they pose a challenging computational problem that involves a substantial proportion of the primate brain.
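The inverse-inference idea can be illustrated with a deliberately simplified toy example. The numbers and the assumption of an already-accurate illuminant estimate are illustrative only; they are not from the chapter, and real vision must estimate the illuminant itself:

```python
import numpy as np

# A 1-D "scene": two surfaces with different reflectance, lit by a
# smoothly varying illumination gradient.
n = 100
reflectance = np.where(np.arange(n) < n // 2, 0.3, 0.6)  # surface property
illumination = np.linspace(0.5, 1.5, n)                  # extraneous factor
retinal_image = reflectance * illumination               # proximal stimulus

# The raw image is not invariant: the same surface yields different image
# values at different positions under the lighting gradient. Inverse
# inference discounts the illuminant (here, assumed to be estimated
# accurately) to recover the invariant physical property.
estimated_illumination = np.linspace(0.5, 1.5, n)
perceived_reflectance = retinal_image / estimated_illumination
```

The hard computational problem, of course, is that the brain receives only `retinal_image` and must infer both factors from it; this sketch only shows why discounting the illuminant yields an invariant percept.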
Sabine Kastner and Timothy J. Buschman
Natural scenes are cluttered and contain many objects that cannot all be processed simultaneously. Because of this limited processing capacity, neural mechanisms are needed to selectively enhance the information most relevant to one’s current behavior and to filter out unwanted information. We refer to these mechanisms as “selective attention.” Attention has been studied extensively at the behavioral level in a variety of paradigms, most notably Treisman’s visual search task and Posner’s cueing paradigm. These paradigms have also provided the basis for studies directed at understanding the neural mechanisms underlying attentional selection, in the form of both neuroimaging studies in humans and intracranial electrophysiology in non-human primates. The selection of behaviorally relevant information is mediated by a large-scale network that includes regions in all major lobes as well as subcortical structures. Attending to a visual stimulus modulates processing across the visual processing hierarchy, with stronger effects in higher-order areas. Current research aims to characterize the functions of the different network nodes as well as the dynamics of their functional connectivity.