Article

Crossmodal Plasticity, Sensory Experience, and Cognition  

Valeria Vinogradova and Velia Cardin

Crossmodal plasticity occurs when sensory regions of the brain adapt to process sensory inputs from different modalities. This is seen in cases of congenital and early deafness and blindness, where, in the absence of their typical inputs, auditory and visual cortices respond to other sensory information. Crossmodal plasticity in deaf and blind individuals impacts several cognitive processes, including working memory, attention, switching, numerical cognition, and language. Crossmodal plasticity in cognitive domains demonstrates that brain function and cognition are shaped by the interplay between structural connectivity, computational capacities, and early sensory experience.

Article

Jean-Martin Charcot (1825–1893)  

Olivier Walusinski

Jean-Martin Charcot (1825–1893), son of a Parisian craftsman, went on to a brilliant university career and worked his way to the top of the hospital hierarchy. Becoming a resident in 1858 at the women’s nursing home and asylum at La Salpêtrière Hospital, he returned there in 1868 as chief physician. Observing more than 2,000 elderly women, he first worked as a geriatrician–internist, leading him to describe thyroid pathology, cruoric pulmonary embolism, and so forth. To deal with the numerous nervous system pathologies, he applied the anatomoclinical method with the addition of microscopy. In less than a decade, his perspicacious clinical eye enabled him to describe Parkinson’s disease, multiple sclerosis, amyotrophic lateral sclerosis, and tabetic arthropathy and to identify medullary localizations, for example. Already aware of functional neurological disorders, at that time referred to as hysteria and still frequent today, Charcot used hypnosis to try to decipher their pathophysiology. His thinking gradually evolved from looking for lesions to recognizing triggering psychological trauma. This prolonged search, misinterpreted for years, opened the way to fine, precise clinical semiology, specific to neurology and psychosomatic medicine. Charcot knew how to surround himself with a cohort of brilliant clinicians, who often became as famous as he was, notably Pierre Marie (1853–1940), Georges Gilles de la Tourette (1857–1904), Joseph Babiński (1857–1932), and Pierre Janet (1859–1947). This cohort and the breadth of Charcot’s innovative work define what is now classically called the “Salpêtrière School.”

Article

Diagnosis and Treatment of Gambling Addiction  

Gemma Mestre-Bach and Marc N. Potenza

Gambling disorder (GD) is a relatively rare psychiatric concern that may carry substantial individual, familial, and societal harms. GD often presents complex challenges, with high prevalence in adolescents and young adults, and it frequently co-occurs with other psychiatric disorders, complicating treatment. GD has multiple biopsychosocial contributions, with genetic, environmental, and psychological factors implicated. Advances in neuroimaging and neurochemistry offer insights into the neurobiology of GD. GD diagnostic criteria have evolved, although identification often remains challenging given shame, stigma, ambivalence regarding treatment, and limited screening. Because many people with GD do not receive treatment, identification (screening and treatment outreach) and therapeutic (behavioral, neuromodulatory, and pharmacological) approaches warrant increased consideration and development.

Article

The Natural Scene Network  

Diane Beck and Dirk B. Walther

Interest in the neural representations of scenes centered first on the idea that the primate visual system evolved in the context of natural scene statistics, but with the advent of functional magnetic resonance imaging, interest turned to scenes as a category of visual representation distinct from that of objects, faces, or bodies. Research comparing such categories revealed a scene network composed of the parahippocampal place area, the medial place area, and the occipital place area. The network has been linked to a variety of functions, including navigation, categorization, and contextual processing. Moreover, much is known about both the visual representations of scenes within the network and its role in, and connections to, the brain’s semantic system. To fully understand the scene network, however, more work is needed both to break it down into its constituent parts and to integrate what is known into a coherent system or systems.

Article

Neural Processing of Speech Using Intracranial Electroencephalography: Sound Representations in the Auditory Cortex  

Liberty S. Hamilton

When people listen to speech and other natural sounds, their brains must take in a noisy acoustic signal and transform it into a robust mapping that eventually helps them communicate and understand the world around them. People hear what was said, who said it, and how they said it, and each of these aspects is encoded in brain activity across different auditory regions. Intracranial recordings in patients with epilepsy, also called electrocorticography or stereoelectroencephalography, have provided a unique window into understanding these processes at a high spatiotemporal resolution. These intracranial recordings are typically performed during clinical treatment for drug-resistant epilepsy or to monitor brain function during neurosurgery. Direct access to recordings of activity in the human brain is a key benefit of this method, but it comes with important caveats. Research using intracranial recordings has uncovered how the brain represents acoustic information, including frequency, spectrotemporal modulations, and pitch, and how that information progresses to more complex representations, including phonological information, relative pitch, and prosody. In addition, intracranial recordings have been used to uncover the roles of attention and context in the top-down modification of perceptual information in the brain. Finally, research has shown both overlapping and distinct brain responses for speech and other natural sounds such as music.
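
A common first step in the kind of intracranial analyses this abstract describes is extracting the high-gamma envelope (often taken as roughly 70–150 Hz), which tracks local neural activity during speech listening. The following is a minimal sketch with SciPy; the sampling rate, band edges, and synthetic data are illustrative assumptions, not details from the article.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def high_gamma_envelope(ecog, fs, band=(70.0, 150.0), order=4):
    """Band-pass filter each channel and take the analytic amplitude.

    ecog : array of shape (n_channels, n_samples)
    fs   : sampling rate in Hz
    band : high-gamma band edges in Hz (a common, but not universal, choice)
    """
    nyq = fs / 2.0
    b, a = butter(order, [band[0] / nyq, band[1] / nyq], btype="bandpass")
    filtered = filtfilt(b, a, ecog, axis=-1)        # zero-phase filtering
    envelope = np.abs(hilbert(filtered, axis=-1))   # analytic amplitude
    return envelope

# Toy usage with synthetic data standing in for real recordings.
fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
ecog = np.random.randn(4, t.size)                  # 4 hypothetical electrodes
hg = high_gamma_envelope(ecog, fs)
print(hg.shape)                                    # (4, 2000)
```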

Article

Understanding How Humans Learn and Adapt to Changing Environments  

Daphne Bavelier and Aaron Cochrane

Compared to other animals or to artificial agents, humans are unique in the extent of their abilities to learn and adapt to changing environments. When focusing on skill learning and model-based approaches, learning can be conceived as a progression of increasing, then decreasing, dimensions of representing knowledge. First, initial learning demands exploration of the learning space and the identification of the relevant dimensions for the novel task at hand. Second, intermediate learning requires a refinement of these relevant dimensions of knowledge and behavior to continue improving performance while increasing efficiency. Such improvements utilize chunking or other forms of dimensionality reduction to diminish task complexity. Finally, late learning ensures automatization of behavior through habit formation and expertise development, thereby reducing the need to effortfully control behavior. While automatization greatly increases efficiency, there is also a trade-off with the ability to generalize, with late learning tending to be highly specific to the learned features and contexts. In each of these phases, a variety of interacting factors are relevant: Declarative instructions, prior knowledge, attentional deployment, and cognitive fitness have unique roles to play. Neural contributions to the processes involved also shift from earlier to later points in learning as effortfulness initially increases and then gives way to automaticity. Interestingly, video games excel at providing uniquely supportive environments to guide the learner through each of these learning stages. This fact makes video games a useful tool both for studying learning, given their engaging nature and dynamic range of complexity, and for engendering learning in domains such as education or cognitive training.

Article

Functional Specialization Across the Visual Cortex  

Erez Freud, Tzvi Ganel, and Galia Avidan

Vision is the most important sensory modality for humans, serving a range of fundamental daily behaviors from recognizing objects, people, places, and actions to navigation and visually guided interactions with objects and other individuals. One of the most prominent accounts of cortical functional specialization posits that the visual cortex is segregated into two pathways. The ventral pathway originates from the early visual cortex in the occipital lobe and projects to the inferior surface of the temporal cortex, and it mediates vision for perception. The dorsal pathway extends from the occipital lobe to the posterior portion of the parietal cortex, and it mediates vision for action. This key characterization of the visual system is supported by classic neuropsychological, behavioral, and neuroimaging evidence. Recent research offers new insights on the developmental trajectory of this dissociation as well as evidence for interactions between the two pathways. Importantly, an emerging hypothesis points to the existence of a third visual pathway located on the lateral surface of the ventral pathway and its potential roles in action recognition and social cognition.

Article

Statistics, Computation, and Coding in the Retina  

Gregory Schwartz

One of the most common ways to describe brain function is in the language of computation. Neuroscientists often speak about what the brain computes and how it performs the computation using biological hardware. Theories of neural computation in most parts of the central nervous system of vertebrates are difficult to test in satisfying ways because often only partial information is available. Computations can be distributed over millions of neurons and vast regions of the brain, and the definitions of the computations themselves are often either abstract or lack a compelling, quantitative, causal link to a specific behavior. Although the vertebrate retina is a highly complex part of the central nervous system comprising approximately 150 different cell types, studying computation in the retina has certain advantages that have enabled the field to lead the way in some disciplines of computational neuroscience. These advantages include advanced knowledge of cell types, the repeating “mosaic” structure of retinal circuits, the ability to control precisely the full input (spatiotemporal patterns of light) while recording the full output (retinal ganglion cell spikes), and quantitative links to certain innate visual behaviors. Through the lens of statistics, many retinal computations can be framed as measurements of properties of probability distributions. The ways evolution has found to make these measurements with biological components are both elegant in their simplicity and powerful in their flexibility, in many cases far exceeding the sophistication of modern human-made digital imaging technology. Fast adaptation to both the mean and the variance of time-varying light distributions allows the retina to encode the enormous dynamic range of natural images within the limited dynamic range of neurons. Signal and noise distributions are estimated and combined in ways approaching theoretical limits. Objects are localized with precision far exceeding individual receptive fields by using a form of triangulation. Predictive information about motion statistics is represented in the population code. These examples and others enable analysis of retinal computation with tools from computer science, engineering, statistics, and information theory, serving as a model for computational neuroscience.
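
The abstract's point about adaptation to the mean and variance of light distributions can be illustrated with a toy gain-control model: normalizing the input by running estimates of its mean and variance keeps the output within a fixed range even as the raw intensities change scale. This is a minimal sketch of that idea only; the update rate and the step stimulus are assumptions for illustration, not a model from the article.

```python
import numpy as np

def adaptive_encode(stimulus, alpha=0.05, eps=1e-6):
    """Normalize a signal by running estimates of its mean and variance,
    a toy analogue of retinal luminance and contrast adaptation.

    stimulus : 1-D array of light intensities over time
    alpha    : update rate of the exponential running estimates
    """
    mean, var = float(stimulus[0]), 1.0
    out = np.empty(len(stimulus))
    for i, s in enumerate(stimulus):
        mean = (1 - alpha) * mean + alpha * s              # track the mean
        var = (1 - alpha) * var + alpha * (s - mean) ** 2  # track the variance
        out[i] = (s - mean) / np.sqrt(var + eps)           # gain-controlled output
    return out

# A step from dim to bright light: raw intensities jump tenfold,
# but the adapted response settles back into the same output range.
t = np.arange(2000)
light = np.where(t < 1000, 10.0, 100.0) + np.random.randn(t.size)
response = adaptive_encode(light)
print(response[:1000].std(), response[1000:].std())  # similar scales after adaptation
```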

Article

Neural Oscillations in Audiovisual Language and Communication  

Linda Drijvers and Sara Mazzini

How do neural oscillations support human audiovisual language and communication? Considering the rhythmic nature of audiovisual language, in which stimuli from different sensory modalities unfold over time, neural oscillations represent an ideal candidate to investigate how audiovisual language is processed in the brain. Modulations of oscillatory phase and power are thought to support audiovisual language and communication in multiple ways. Neural oscillations synchronize by tracking external rhythmic stimuli or by resetting their phase at the presentation of relevant stimuli, resulting in perceptual benefits. In particular, synchronized neural oscillations have been shown to subserve the processing and the integration of auditory speech, visual speech, and hand gestures. Furthermore, synchronized oscillatory modulations have been studied and reported between brains during social interaction, suggesting that their contribution to audiovisual communication goes beyond the processing of single stimuli and applies to natural, face-to-face communication. There are still some outstanding questions that need to be answered to reach a better understanding of the neural processes supporting audiovisual language and communication. In particular, it is not yet entirely clear how the multitude of signals encountered during audiovisual communication are combined into a coherent percept and how this is affected during real-world dyadic interactions. To address these outstanding questions, it is essential to consider language as a multimodal phenomenon, involving the processing of multiple stimuli unfolding at different rhythms over time, and to study language in its natural context: social interaction. Other outstanding questions could be addressed by implementing novel techniques (such as rapid invisible frequency tagging, dual-electroencephalography, or multi-brain stimulation) and analysis methods (e.g., using temporal response functions) to better understand the relationship between oscillatory dynamics and efficient audiovisual communication.
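
The closing sentence mentions temporal response functions (TRFs), which model the neural signal as a lagged linear function of a stimulus feature such as the speech envelope. Below is a minimal ridge-regression sketch of TRF estimation; the lag range, regularization strength, and synthetic data are assumptions for illustration (dedicated toolboxes such as the mTRF toolbox implement fuller versions).

```python
import numpy as np

def estimate_trf(stimulus, response, fs, tmin=0.0, tmax=0.4, ridge=1.0):
    """Fit a temporal response function by ridge regression.

    stimulus : 1-D stimulus feature (e.g., speech envelope), length n
    response : 1-D neural signal, length n
    Returns lag times (s) and the TRF weights over lags tmin..tmax.
    """
    lags = np.arange(int(tmin * fs), int(tmax * fs))
    n = len(stimulus)
    # Design matrix: one column per lagged copy of the stimulus.
    X = np.zeros((n, len(lags)))
    for j, lag in enumerate(lags):
        X[lag:, j] = stimulus[: n - lag]
    w = np.linalg.solve(X.T @ X + ridge * np.eye(len(lags)), X.T @ response)
    return lags / fs, w

# Toy check: a response built from a known kernel is recovered.
fs, n = 100, 5000
rng = np.random.default_rng(0)
stim = rng.standard_normal(n)
kernel = np.exp(-np.arange(20) / 5.0)                  # ground-truth TRF
resp = np.convolve(stim, kernel)[:n] + 0.1 * rng.standard_normal(n)
times, trf = estimate_trf(stim, resp, fs)
print(np.argmax(trf))                                  # peak near lag 0
```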

Article

Visual Perception in the Human Brain: How the Brain Perceives and Understands Real-World Scenes  

Clemens G. Bartnik and Iris I. A. Groen

How humans perceive and understand real-world scenes is a long-standing question in neuroscience, cognitive psychology, and artificial intelligence. Initially, it was thought that scenes are constructed and represented by their component objects. An alternative view proposed that scene perception starts with the extraction of global features (e.g., spatial layout), with individual objects processed only in later stages. A third framework focuses not only on how the brain represents objects and layout but also on how this information is combined to determine the possibilities for (inter)action that the environment offers. The discovery of scene-selective regions in the human visual system sparked interest in how scenes are represented in the brain. Experiments using functional magnetic resonance imaging show that multiple types of information are encoded in the scene-selective regions, while electroencephalography and magnetoencephalography measurements demonstrate links between the rapid extraction of different scene features and scene perception behavior. Computational models such as deep neural networks offer further insight: training networks on different scene recognition tasks yields diagnostic features that can then be tested for their ability to predict activity in human brains when perceiving a scene. Collectively, these findings suggest that the brain flexibly and rapidly extracts a variety of information from scenes using a distributed network of brain regions.
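
The abstract describes testing deep-network features by how well they predict brain activity during scene viewing; in practice this is often done with a linear encoding model fit per voxel. The sketch below shows the general recipe with scikit-learn; the feature matrix and voxel responses are random stand-ins for real DNN activations and fMRI data, and the dimensions are arbitrary assumptions.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Stand-ins: DNN features for 500 scene images (e.g., one layer's
# activations) and responses of 100 voxels to the same images.
n_images, n_features, n_voxels = 500, 256, 100
features = rng.standard_normal((n_images, n_features))
true_weights = rng.standard_normal((n_features, n_voxels)) * 0.1
voxels = features @ true_weights + rng.standard_normal((n_images, n_voxels))

X_tr, X_te, y_tr, y_te = train_test_split(features, voxels, random_state=0)

# One ridge regression per voxel, with the penalty chosen by cross-validation.
model = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(X_tr, y_tr)

# Prediction accuracy per voxel: correlation between held-out measured
# and predicted responses, a common encoding-model score.
pred = model.predict(X_te)
r = [np.corrcoef(y_te[:, v], pred[:, v])[0, 1] for v in range(n_voxels)]
print(f"median held-out correlation: {np.median(r):.2f}")
```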