1-11 of 11 Results for:

  • Sensory Systems
  • Cognitive Neuroscience

Article

Richard L. Doty

Decreased ability to smell is common in older persons. Some demonstrable smell loss is present in more than 50% of those 65 to 80 years of age, with up to 10% having no sense of smell at all (anosmia). Over the age of 80, 75% exhibit some loss, with up to 20% being totally anosmic. The causes of these decrements appear multifactorial and likely include altered intranasal airflow patterns, cumulative damage to the olfactory receptor cells from viruses and other environmental insults, decrements in mucosal metabolizing enzymes, closure of the cribriform plate foramina through which olfactory receptor cell axons project to the brain, loss of selectivity of receptor cells to odorants, and altered neurotransmission, including that exacerbated in some age-related neurodegenerative diseases.

Article

Megan A.K. Peters

The human brain processes noisy information to help make adaptive choices under uncertainty. Accompanying these decisions about incoming evidence is a sense of confidence: a feeling about whether a decision is correct. Confidence typically covaries with the accuracy of decisions, in that higher confidence is associated with higher decisional accuracy. In the laboratory, decision confidence is typically measured by asking participants to make judgments about stimuli or information (type 1 judgments) and then to rate their confidence on a rating scale or by engaging in wagering (type 2 judgments). The correspondence between confidence and accuracy can be quantified in a number of ways, some based on probability theory and signal detection theory. But decision confidence does not always reflect only the probability that a decision is correct; confidence can also reflect many other factors, including other estimates of noise, evidence magnitude, nearby decisions, decision time, and motor movements. Confidence is thought to be computed by a number of brain regions, most notably areas in the prefrontal cortex. And, once computed, confidence can be used to drive other processes, such as adjusting learning rates or guiding social interactions.
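As a concrete illustration of one signal-detection-theoretic way to quantify the confidence-accuracy correspondence, the following sketch simulates type 1 decisions and type 2 confidence ratings and computes the area under the type 2 ROC curve. The simulation parameters (the discriminability value, trial count, and 4-point rating scale) are illustrative assumptions, not values or methods taken from the article.

```python
"""Minimal sketch: quantifying the confidence-accuracy correspondence with
the area under the type 2 ROC curve (type 2 AUROC). All simulation
parameters are illustrative assumptions."""
import numpy as np

rng = np.random.default_rng(0)

n_trials = 10_000
d_prime = 1.5  # assumed stimulus discriminability

# Type 1 task: decide which of two stimuli was shown.
stimulus = rng.choice([-1, 1], size=n_trials)
evidence = stimulus * d_prime / 2 + rng.normal(size=n_trials)
choice = np.where(evidence > 0, 1, -1)
correct = choice == stimulus

# Type 2 judgment: confidence derived from the distance of the evidence
# from the decision criterion, coarsened onto a 4-point rating scale.
confidence = np.digitize(np.abs(evidence), bins=[0.5, 1.0, 1.5]) + 1

def type2_auroc(conf, correct):
    """Probability that a random correct trial carries higher confidence
    than a random incorrect trial (ties count as 0.5)."""
    conf_correct = conf[correct]
    conf_error = conf[~correct]
    greater = (conf_correct[:, None] > conf_error[None, :]).mean()
    ties = (conf_correct[:, None] == conf_error[None, :]).mean()
    return greater + 0.5 * ties

print(f"accuracy:     {correct.mean():.3f}")
print(f"type 2 AUROC: {type2_auroc(confidence, correct):.3f}")  # 0.5 = no metacognitive sensitivity
```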

Article

Tim C. Kietzmann, Patrick McClure, and Nikolaus Kriegeskorte

The goal of computational neuroscience is to find mechanistic explanations of how the nervous system processes information to give rise to cognitive function and behavior. At the heart of the field are its models, that is, mathematical and computational descriptions of the system being studied, which map sensory stimuli to neural responses and/or neural to behavioral responses. These models range from simple to complex. Recently, deep neural networks (DNNs) have come to dominate several domains of artificial intelligence (AI). As the term “neural network” suggests, these models are inspired by biological brains. However, current DNNs neglect many details of biological neural networks. These simplifications contribute to their computational efficiency, enabling them to perform complex feats of intelligence, ranging from perceptual (e.g., visual object and auditory speech recognition) to cognitive tasks (e.g., machine translation), and on to motor control (e.g., playing computer games or controlling a robot arm). In addition to their ability to model complex intelligent behaviors, DNNs excel at predicting neural responses to novel sensory stimuli with accuracies well beyond any other currently available model type. DNNs can have millions of parameters, which are required to capture the domain knowledge needed for successful task performance. Contrary to the intuition that this renders them into impenetrable black boxes, the computational properties of the network units are the result of four directly manipulable elements: input statistics, network structure, functional objective, and learning algorithm. With full access to the activity and connectivity of all units, advanced visualization techniques, and analytic tools to map network representations to neural data, DNNs represent a powerful framework for building task-performing models and will drive substantial insights in computational neuroscience.
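To make the four manipulable elements concrete, the following sketch trains a toy two-layer network. The task (an XOR-like classification of 2-D points), the network sizes, and plain gradient descent are illustrative assumptions, not a model discussed in the article; the sketch simply labels where input statistics, network structure, functional objective, and learning algorithm enter.

```python
"""Minimal sketch of the four manipulable elements of a task-performing
network model: input statistics, network structure, functional objective,
and learning algorithm. The toy task and all sizes are assumptions."""
import numpy as np

rng = np.random.default_rng(1)

# 1. Input statistics: the distribution of training stimuli.
X = rng.uniform(-1, 1, size=(2000, 2))
y = ((X[:, 0] > 0) ^ (X[:, 1] > 0)).astype(float)  # XOR-like labels

# 2. Network structure: a two-layer perceptron with a tanh hidden layer.
n_hidden = 16
W1 = rng.normal(scale=0.5, size=(2, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)                 # hidden-unit responses
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))     # output probability
    return h, p.ravel()

# 3. Functional objective: cross-entropy between predictions and labels.
def loss(p, y):
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

# 4. Learning algorithm: full-batch gradient descent via manual backprop.
lr = 1.0
for step in range(3000):
    h, p = forward(X)
    grad_logit = (p - y)[:, None] / len(y)   # d(loss)/d(pre-sigmoid)
    gW2 = h.T @ grad_logit; gb2 = grad_logit.sum(0)
    grad_h = grad_logit @ W2.T * (1 - h**2)  # backprop through tanh
    gW1 = X.T @ grad_h; gb1 = grad_h.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, p = forward(X)
print(f"final loss: {loss(p, y):.3f}, training accuracy: {np.mean((p > 0.5) == y):.3f}")
```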

Article

The brain has limited processing capacity. Attentional selection processes continuously shape humans' perception of the world. Understanding the mechanisms underlying such covert cognitive processes requires the combination of psychophysical and electrophysiological investigation methods. This combination allows researchers to describe how individual neurons and neuronal populations encode attentional function. In addition, direct access to neuronal information through innovative electrophysiological approaches allows covert attention to be tracked in real time. Together, these converging approaches capture a comprehensive view of attentional function.
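As a toy illustration of how covert attention might be read out from neuronal signals, the following sketch decodes the attended side (left vs. right) from simulated population spike counts with a simple nearest-class-mean decoder. The Poisson firing model, neuron count, and decoder are illustrative assumptions, not the recording or analysis methods of the article.

```python
"""Minimal sketch: trial-by-trial decoding of the locus of covert attention
(left vs. right) from simulated population spike counts. All parameters and
the decoder are illustrative assumptions."""
import numpy as np

rng = np.random.default_rng(2)

n_neurons, n_train, n_test = 50, 400, 200
preferred = rng.choice([0, 1], size=n_neurons)   # 0 = prefers attend-left, 1 = attend-right
baseline, gain = 5.0, 3.0                        # spikes per trial window

def simulate(n_trials):
    attended = rng.choice([0, 1], size=n_trials)             # covert locus on each trial
    rate = baseline + gain * (preferred[None, :] == attended[:, None])
    return rng.poisson(rate).astype(float), attended

X_train, y_train = simulate(n_train)
X_test, y_test = simulate(n_test)

# Nearest-class-mean decoder: compare each test trial's population vector
# to the mean "attend left" and "attend right" templates.
templates = np.stack([X_train[y_train == c].mean(axis=0) for c in (0, 1)])
dists = np.linalg.norm(X_test[:, None, :] - templates[None, :, :], axis=2)
predictions = dists.argmin(axis=1)

print(f"decoding accuracy: {np.mean(predictions == y_test):.3f}")
```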

Article

Eliot A. Brenowitz

Animals produce communication signals to attract mates and deter rivals during their breeding season. The coincidence in timing results from the modulation of signaling behavior and neural activity by sex steroid hormones associated with reproduction. Adrenal steroids can influence signaling for aggressive interactions outside the breeding season. Androgenic and estrogenic hormones act on brain circuits that regulate the motivation to produce and respond to signals, the motor production of signals, and the sensory perception of signals. Signal perception, in turn, can stimulate gonadal development.

Article

As we go about our everyday activities, our brain computes accurate estimates of both our motion relative to the world and our orientation relative to gravity. Essential to this computation is the information provided by the vestibular system; it detects the rotational velocity and linear acceleration of our heads relative to space, making a fundamental contribution to our perception of self-motion and spatial orientation. Additionally, in everyday life, our perception of self-motion depends on the integration of both vestibular and nonvestibular cues, including visual and proprioceptive information. Furthermore, the integration of motor-related information is also required for perceptual stability, so that the brain can distinguish whether the experienced sensory inflow was a result of active self-motion through the world or was instead externally generated. To date, understanding how the brain encodes and integrates sensory cues with motor signals for the perception of self-motion during natural behaviors remains a major goal in neuroscience. Recent experiments have (i) provided new insights into the neural code used to represent sensory information in vestibular pathways, (ii) established that vestibular pathways are inherently multimodal at the earliest stages of processing, and (iii) revealed that self-motion information processing is adjusted to meet the needs of specific tasks. Our current level of understanding of how the brain integrates sensory information and motor-related signals to encode self-motion and ensure perceptual stability during everyday activities is reviewed.
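As an illustration of how vestibular and nonvestibular cues might be combined, the following sketch implements reliability-weighted (inverse-variance) integration of a vestibular and a visual estimate of rotational velocity. The noise levels and the Gaussian-cue assumption are illustrative, not results from the work reviewed.

```python
"""Minimal sketch of reliability-weighted integration of a vestibular and a
visual estimate of self-motion (rotational velocity). Under Gaussian
assumptions, the optimal combined estimate weights each cue by its inverse
variance. All numbers are illustrative assumptions."""
import numpy as np

true_velocity = 20.0              # deg/s, the actual head rotation
sigma_vestibular = 2.0            # assumed vestibular noise (deg/s)
sigma_visual = 4.0                # assumed visual (optic flow) noise (deg/s)

rng = np.random.default_rng(3)
n_trials = 10_000
vestibular = true_velocity + rng.normal(0, sigma_vestibular, n_trials)
visual = true_velocity + rng.normal(0, sigma_visual, n_trials)

# Inverse-variance (reliability) weights.
w_vest = sigma_visual**2 / (sigma_vestibular**2 + sigma_visual**2)
w_vis = 1 - w_vest
combined = w_vest * vestibular + w_vis * visual

# The combined estimate should be less variable than either cue alone:
# predicted variance = (1/s_vest^2 + 1/s_vis^2)^(-1).
predicted_var = 1 / (1 / sigma_vestibular**2 + 1 / sigma_visual**2)
print(f"vestibular-only variance: {vestibular.var():6.2f}")
print(f"visual-only variance:     {visual.var():6.2f}")
print(f"combined variance:        {combined.var():6.2f} (predicted {predicted_var:.2f})")
```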

Article

Sequences of actions and experiences are a central part of daily life in many species. Sequences consist of a set of ordered steps with a distinct beginning and end. They are defined by the serial order of and relationships between items, though not necessarily by precise timing intervals. Sequences can be composed of a wide range of elements, including motor actions, perceptual experiences, memories, complex behaviors, or abstract goals. Despite this variation, different types of sequences may share common features in neural coding. Examining the neural responses that support sequences is important not only for understanding sequential behavior in daily life but also for investigating the array of diseases and disorders that affect sequential processes, and the impact of the therapeutics used to treat them. Research into the neural coding of sequences can be organized into the following broad categories: responses to ordinal position, coding of adjacency and inter-item relationships, boundary responses, and gestalt coding (representation of the sequence as a whole). These features of sequence coding have been linked to changes in firing rate patterns and neuronal oscillations across a range of cortical and subcortical brain areas and may be integrated in the lateral prefrontal cortex. Identification of these coding schemes has provided an outline for understanding how sequences are represented at a neural level. Expanding from this work, future research faces fundamental questions about how these coding schemes are linked together to generate the complex range of sequential processes that influence cognition and behavior across animal species.
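As a toy illustration of the first coding scheme listed above, responses to ordinal position, the following sketch simulates neurons tuned to a preferred step in a five-item sequence and recovers that tuning by averaging spike counts at each position. All firing-rate parameters are illustrative assumptions, not data from the studies described.

```python
"""Minimal sketch of ordinal-position coding: simulated neurons fire most at
a preferred step in a sequence, and their tuning is recovered by averaging
spike counts at each ordinal position. All parameters are assumptions."""
import numpy as np

rng = np.random.default_rng(4)

seq_length, n_neurons, n_repeats = 5, 30, 100
preferred_position = rng.integers(0, seq_length, size=n_neurons)

baseline, peak, width = 2.0, 10.0, 0.8
positions = np.tile(np.arange(seq_length), n_repeats)     # presented ordinal positions

# Gaussian tuning around each neuron's preferred ordinal position.
rates = baseline + peak * np.exp(
    -0.5 * ((positions[:, None] - preferred_position[None, :]) / width) ** 2
)
spikes = rng.poisson(rates)                                # presentations x neurons

# Recover tuning curves: mean response at each ordinal position.
tuning = np.array([spikes[positions == p].mean(axis=0) for p in range(seq_length)])
recovered_pref = tuning.argmax(axis=0)

print(f"fraction of preferred positions recovered correctly: "
      f"{np.mean(recovered_pref == preferred_position):.2f}")
```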

Article

Tamar Makin and London Plasticity Lab

Phantom sensations are experienced by almost every person who has lost a hand in adulthood. This mysterious phenomenon spans the full range of bodily sensations, including the sense of touch, temperature, movement, and even the sense of wetness. For a majority of upper-limb amputees, these sensations are at times unpleasant or painful, and for some even excruciating to the point of being debilitating, a serious clinical problem termed phantom limb pain (PLP). Because the sensory organs (the receptors in the skin, muscles, or tendons) are physically missing, the origins of phantom sensations and pain must be sought at the level of the nervous system, and the brain in particular. This raises the question of what happens to a fully developed part of the brain that becomes functionally redundant (e.g., the sensorimotor hand area after arm amputation). Relatedly, what happens to the brain representation of a body part that becomes overused (e.g., the intact hand, on which most amputees heavily rely for completing daily tasks)? Classical studies in animals show that the brain territory in primary somatosensory cortex (S1) that was “freed up” by input loss (hereafter, deprivation) becomes activated by the representations of other body parts, those neighboring the deprived cortex. If neural resources in the deprived hand area are redistributed to facilitate the representation of other body parts following amputation, how does this process relate to the persistent phantom sensations arising from the amputated hand? Subsequent work in humans, mostly with noninvasive neuroimaging and brain stimulation techniques, has expanded on the initial observations of cortical remapping in two important ways. First, research with humans allows us to study the perceptual consequences of remapping, particularly with regard to phantom sensations and pain. Second, by considering the various compensatory strategies amputees adopt to account for their disability, including overuse of the intact hand and learning to use an artificial limb, use-dependent plasticity can also be studied in amputees, as well as its relationship to deprivation-triggered plasticity. Both of these topics are of great clinical value, as they could inform clinicians about how to treat PLP and how to facilitate rehabilitation and prosthesis use in particular. Moreover, research in humans provides new insight into the role of remapping and persistent representation in facilitating (or hindering) emerging technologies for artificial limb devices, with special emphasis on the role of embodiment. Together, this research affords a more comprehensive view of the functional consequences of cortical remapping in amputees’ primary sensorimotor cortex.

Article

Sabine Kastner and Timothy J. Buschman

Natural scenes are cluttered and contain many objects that cannot all be processed simultaneously. Due to this limited processing capacity, neural mechanisms are needed to selectively enhance the information that is most relevant to one’s current behavior and to filter out unwanted information. We refer to these mechanisms as “selective attention.” Attention has been studied extensively at the behavioral level in a variety of paradigms, most notably Treisman’s visual search paradigm and Posner’s spatial cueing paradigm. These paradigms have also provided the basis for studies directed at understanding the neural mechanisms underlying attentional selection, both in the form of neuroimaging studies in humans and intracranial electrophysiology in non-human primates. The selection of behaviorally relevant information is mediated by a large-scale network that includes regions in all major lobes as well as subcortical structures. Attending to a visual stimulus modulates processing across the visual processing hierarchy, with stronger effects in higher-order areas. Current research is aimed at characterizing the functions of the different network nodes as well as the dynamics of their functional connectivity.
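As a toy illustration of how attentional modulation of neural responses is often quantified, the following sketch computes an attentional modulation index, AMI = (attended - unattended) / (attended + unattended), for simulated neurons given progressively larger attentional gains at later processing stages. The area names and gain values are assumptions chosen only to echo the observation that effects grow in higher-order areas, not measurements from the studies described.

```python
"""Minimal sketch: an attentional modulation index computed for simulated
neurons at successive stages of the visual hierarchy. The gains and area
labels are illustrative assumptions."""
import numpy as np

rng = np.random.default_rng(5)

areas = {"V1": 1.05, "V4": 1.20, "IT": 1.40}   # assumed attentional gain per area
n_neurons, n_trials, base_rate = 40, 200, 10.0

for area, gain in areas.items():
    # Trial-averaged firing rates with and without attention directed to the stimulus.
    unattended = rng.poisson(base_rate, size=(n_trials, n_neurons)).mean(axis=0)
    attended = rng.poisson(base_rate * gain, size=(n_trials, n_neurons)).mean(axis=0)
    ami = (attended - unattended) / (attended + unattended)
    print(f"{area}: mean AMI = {ami.mean():+.3f}")
```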

Article

How humans perceive and understand real-world scenes is a long-standing question in neuroscience, cognitive psychology, and artificial intelligence. Initially, it was thought that scenes are constructed and represented by their component objects. An alternative view proposed that scene perception starts with the extraction of global features (e.g., spatial layout), with individual objects processed at later stages. A third framework focuses not only on how the brain represents objects and layout but on how this information is combined to determine the possibilities for (inter)action that the environment offers us. The discovery of scene-selective regions in the human visual system sparked interest in how scenes are represented in the brain. Experiments using functional magnetic resonance imaging show that multiple types of information are encoded in the scene-selective regions, while electroencephalography and magnetoencephalography measurements demonstrate links between the rapid extraction of different scene features and scene perception behavior. Computational models such as deep neural networks offer further insight into how training networks on different scene recognition tasks yields diagnostic features, which can then be tested for their ability to predict brain activity during scene perception. Collectively, these findings suggest that the brain flexibly and rapidly extracts a variety of information from scenes using a distributed network of brain regions.
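As a toy illustration of how model features can be tested against brain activity, the following sketch uses representational similarity analysis (RSA): it builds representational dissimilarity matrices from synthetic model features and synthetic voxel responses to the same scenes and correlates them. RSA is one common approach among several (encoding models are another), and all data here are simulated stand-ins, not measurements from the studies described.

```python
"""Minimal sketch of representational similarity analysis (RSA) relating
model features to voxel responses for the same set of scenes. All data are
synthetic stand-ins."""
import numpy as np

rng = np.random.default_rng(6)

n_scenes, n_features, n_voxels = 40, 128, 300

# Simulated ground-truth scene representation shared by model and brain,
# plus independent noise for each measurement modality.
latent = rng.normal(size=(n_scenes, 20))
model_features = latent @ rng.normal(size=(20, n_features)) + rng.normal(size=(n_scenes, n_features))
voxel_responses = latent @ rng.normal(size=(20, n_voxels)) + 2.0 * rng.normal(size=(n_scenes, n_voxels))

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson r between scene pairs."""
    z = (patterns - patterns.mean(axis=1, keepdims=True)) / patterns.std(axis=1, keepdims=True)
    return 1 - (z @ z.T) / patterns.shape[1]

def upper_triangle(m):
    return m[np.triu_indices_from(m, k=1)]

model_rdm = rdm(model_features)
brain_rdm = rdm(voxel_responses)

# Correlate the two RDMs (upper triangles): do scenes the model represents
# as similar also evoke similar brain activity patterns?
r = np.corrcoef(upper_triangle(model_rdm), upper_triangle(brain_rdm))[0, 1]
print(f"model-brain RDM correlation: {r:.3f}")
```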

Article

Anitha Pasupathy, Yasmine El-Shamayleh, and Dina V. Popovkina

Humans and other primates rely on vision. Our visual system endows us with the ability to perceive, recognize, and manipulate objects, to avoid obstacles and dangers, to choose foods appropriate for consumption, to read text, and to interpret facial expressions in social interactions. To support these visual functions, the primate brain captures a high-resolution image of the world in the retina and, through a series of intricate operations in the cerebral cortex, transforms this representation into a percept that reflects the physical characteristics of objects and surfaces in the environment. To construct a reliable and informative percept, the visual system discounts the influence of extraneous factors such as illumination, occlusions, and viewing conditions. This perceptual “invariance” can be thought of as the brain’s solution to an inverse inference problem in which the physical factors that gave rise to the retinal image are estimated. While perception and recognition seem fast and effortless, they pose a challenging computational problem that engages a substantial proportion of the primate brain.
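As a toy illustration of the inverse inference problem described above, the following sketch treats measured luminance as the product of surface reflectance and illumination and discounts the illuminant using a simple "brightest patch is near-white" prior. The prior, the noise model, and the numbers are illustrative assumptions, not a model proposed by the authors.

```python
"""Minimal sketch of an inverse-inference problem in vision: measured
luminance confounds surface reflectance and illumination, so reflectance can
only be recovered with an additional assumption (here, a toy prior that the
brightest patch is near-white). All values are illustrative."""
import numpy as np

rng = np.random.default_rng(7)

# Surface reflectances to be inferred; one patch is near-white (0.95).
true_reflectance = np.append(rng.uniform(0.05, 0.9, size=9), 0.95)
true_illumination = 250.0                        # unknown light level

# What the retina measures: reflectance * illumination, with small sensor noise.
luminance = true_reflectance * true_illumination * (1 + 0.02 * rng.normal(size=10))

# Inverse inference with the toy "brightest patch is near-white" prior.
estimated_illumination = luminance.max() / 0.95
estimated_reflectance = luminance / estimated_illumination

error = np.abs(estimated_reflectance - true_reflectance).mean()
print(f"estimated illumination: {estimated_illumination:.1f} (true {true_illumination})")
print(f"mean reflectance error: {error:.3f}")
```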