Article

Neural Processing of Speech Using Intracranial Electroencephalography: Sound Representations in the Auditory Cortex  

Liberty S. Hamilton

When people listen to speech and other natural sounds, their brains must take in a noisy acoustic signal and transform it into a robust neural representation that ultimately helps them communicate and understand the world around them. People hear what was said, who said it, and how it was said, and each of these aspects is encoded in brain activity across different auditory regions. Intracranial recordings in patients with epilepsy, also called electrocorticography or stereoelectroencephalography, have provided a unique window into these processes at high spatiotemporal resolution. These intracranial recordings are typically performed during clinical treatment for drug-resistant epilepsy or to monitor brain function during neurosurgery. Direct access to recordings of human brain activity is a key benefit of this method, but it comes with important caveats. Research using intracranial recordings has uncovered how the brain represents acoustic information, including frequency, spectrotemporal modulations, and pitch, and how that information progresses to more complex representations, including phonological information, relative pitch, and prosody. In addition, intracranial recordings have been used to uncover the role of attention and context in the top-down modification of perceptual information in the brain. Finally, research has shown both overlapping and distinct brain responses to speech and to other natural sounds such as music.

Article

Plasticity of Information Processing in the Auditory System  

Andrew J. King

Information processing in the auditory system shows considerable adaptive plasticity across different timescales. This ranges from very rapid changes in neuronal response properties (on the order of hundreds of milliseconds when the statistics of sounds vary, or seconds to minutes when their behavioral relevance is altered) to more gradual changes that are shaped by experience and learning. Many aspects of auditory processing and perception are sculpted by sensory experience during sensitive or critical periods of development. This developmental plasticity underpins the acquisition of language and musical skills, matches neural representations in the brain to the statistics of the acoustic environment, and enables the neural circuits underlying the ability to localize sound to be calibrated by the acoustic consequences of growth-related changes in the anatomy of the body. Although the length of these critical periods depends on the aspect of auditory processing under consideration, varies across species and brain levels, and may be extended by experience and other factors, it is generally accepted that the potential for plasticity declines with age. Nevertheless, a substantial degree of plasticity is exhibited in adulthood. This is important for the acquisition of new perceptual skills; facilitates improvements in the detection or discrimination of fine differences in sound properties; and enables the brain to compensate for changes in inputs, including those resulting from hearing loss. In contrast to the plasticity that shapes the developing brain, perceptual learning normally requires the sound attribute in question to be behaviorally relevant and is driven by practice or training on specific tasks. Progress has recently been made in identifying the brain circuits involved and the role of neuromodulators in controlling plasticity, and an understanding of plasticity in the central auditory system is playing an increasingly important role in the treatment of hearing disorders.

Article

Neural Oscillations in Audiovisual Language and Communication  

Linda Drijvers and Sara Mazzini

How do neural oscillations support human audiovisual language and communication? Given the rhythmic nature of audiovisual language, in which stimuli from different sensory modalities unfold over time, neural oscillations are an ideal candidate for investigating how audiovisual language is processed in the brain. Modulations of oscillatory phase and power are thought to support audiovisual language and communication in multiple ways. Neural oscillations synchronize by tracking external rhythmic stimuli or by resetting their phase at the presentation of relevant stimuli, resulting in perceptual benefits. In particular, synchronized neural oscillations have been shown to subserve the processing and integration of auditory speech, visual speech, and hand gestures. Furthermore, synchronization of oscillatory activity between brains has been reported during social interaction, suggesting that the contribution of neural oscillations to audiovisual communication goes beyond the processing of single stimuli and extends to natural, face-to-face communication. Several outstanding questions must still be answered to reach a better understanding of the neural processes supporting audiovisual language and communication. In particular, it is not yet clear how the multitude of signals encountered during audiovisual communication is combined into a coherent percept, or how this process is affected during real-world dyadic interactions. To address these questions, it is essential to treat language as a multimodal phenomenon, involving the processing of multiple stimuli unfolding at different rhythms over time, and to study language in its natural context: social interaction. Other outstanding questions could be addressed by implementing novel techniques (such as rapid invisible frequency tagging, dual-electroencephalography, or multi-brain stimulation) and analysis methods (e.g., temporal response functions) to better understand the relationship between oscillatory dynamics and efficient audiovisual communication.
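
The final abstract names temporal response functions (TRFs) as one analysis method for relating rhythmic stimulus features to neural activity. As a minimal, purely illustrative sketch (not drawn from the article itself), a forward TRF can be estimated with ridge regression on a lag-expanded stimulus feature; the speech-envelope stand-in, sampling rate, lag window, and regularization strength below are hypothetical placeholders rather than values from any of the studies described above.

    # Illustrative forward TRF estimation via ridge regression (NumPy only).
    # All parameters are hypothetical placeholders, not taken from the articles above.
    import numpy as np

    def estimate_trf(stimulus, response, fs, tmin=-0.1, tmax=0.4, alpha=1.0):
        """Estimate a forward TRF mapping a 1-D stimulus feature to a neural response.

        stimulus, response: equal-length 1-D arrays (e.g., a speech envelope and one
        EEG/MEG channel) sampled at fs Hz; tmin/tmax give the lag window in seconds;
        alpha is the ridge regularization strength. Returns (lags_in_seconds, weights).
        """
        lags = np.arange(int(round(tmin * fs)), int(round(tmax * fs)) + 1)
        n = len(stimulus)
        # Build a lag-expanded design matrix: one column per shifted copy of the stimulus.
        X = np.zeros((n, len(lags)))
        for j, lag in enumerate(lags):
            if lag >= 0:
                X[lag:, j] = stimulus[:n - lag]
            else:
                X[:n + lag, j] = stimulus[-lag:]
        # Ridge solution: w = (X'X + alpha*I)^(-1) X'y
        w = np.linalg.solve(X.T @ X + alpha * np.eye(len(lags)), X.T @ response)
        return lags / fs, w

    # Synthetic check: a "response" that lags the "stimulus" by about 100 ms.
    rng = np.random.default_rng(0)
    fs = 100                                   # hypothetical 100 Hz sampling rate
    env = rng.standard_normal(10_000)          # stand-in for a speech envelope
    resp = np.roll(env, 10) + 0.5 * rng.standard_normal(10_000)
    lags_s, trf = estimate_trf(env, resp, fs)
    print("Peak TRF lag (s):", lags_s[np.argmax(trf)])   # expected near 0.10 s

On the synthetic data, the estimated TRF peaks near the 100 ms lag built into the simulated response, which is the kind of stimulus-to-brain latency structure that TRF analyses are used to characterize in studies of speech and audiovisual tracking.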