Article

Neural Processing of Speech Using Intracranial Electroencephalography: Sound Representations in the Auditory Cortex  

Liberty S. Hamilton

When people listen to speech and other natural sounds, their brains must take in a noisy acoustic signal and transform it into a robust representation that ultimately helps them communicate and understand the world around them. People hear what was said, who said it, and how they said it, and each of these aspects is encoded in brain activity across different auditory regions. Intracranial recordings in patients with epilepsy, also called electrocorticography or stereoelectroencephalography, have provided a unique window into these processes at high spatiotemporal resolution. These recordings are typically performed during clinical treatment for drug-resistant epilepsy or to monitor brain function during neurosurgery. Direct access to activity in the human brain is the major benefit of this method, but it comes with important caveats. Research using intracranial recordings has uncovered how the brain represents acoustic information, including frequency, spectrotemporal modulations, and pitch, and how that information progresses to more complex representations, including phonological information, relative pitch, and prosody. In addition, intracranial recordings have been used to uncover the influence of attention and context on the top-down modification of perceptual information in the brain. Finally, research has shown both overlapping and distinct brain responses for speech and other natural sounds such as music.
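
As a hedged illustration of the kind of preprocessing widely applied to intracranial recordings in this literature, the sketch below extracts a band-limited amplitude envelope from a synthetic signal via band-pass filtering and the Hilbert transform. The 70-150 Hz "high-gamma" band, the sampling rate, and all names here are illustrative assumptions, not details taken from the article.

```python
# Minimal sketch: band-limited envelope extraction from a synthetic recording.
# Band edges and sampling rate are assumed, not taken from the article.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0                      # assumed sampling rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)  # two seconds of synthetic data
# Synthetic "recording": a 100 Hz carrier with slow amplitude modulation plus noise.
signal = (1 + 0.5 * np.sin(2 * np.pi * 2 * t)) * np.sin(2 * np.pi * 100 * t)
signal += 0.1 * np.random.default_rng(0).standard_normal(t.size)

def band_envelope(x, fs, lo, hi, order=4):
    """Band-pass filter x and return its analytic-amplitude envelope."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, x)      # zero-phase band-pass
    return np.abs(hilbert(filtered))  # instantaneous amplitude

# 70-150 Hz is a common "high-gamma" choice in this literature (an assumption here).
envelope = band_envelope(signal, fs, 70.0, 150.0)
print(envelope.mean())
```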

Article

Neural Population Coding of Natural Sounds in Non-flying Mammals  

Israel Nelken

Understanding the principles by which sensory systems represent natural stimuli is one of the holy grails of neuroscience. In the auditory system, the study of the coding of natural sounds has a particular prominence. Indeed, the relationships between neural responses to the simple stimuli (usually pure tone bursts) often used to characterize auditory neurons and responses to complex sounds, in particular natural sounds, can be far from straightforward. Many different classes of natural sounds have been used to study the auditory system. Sound families that researchers have used to good effect in this endeavor include human speech, species-specific vocalizations, an "acoustic biotope" selected in one way or another, and sets of artificial sounds that mimic important features of natural sounds. Peripheral and brainstem representations of natural sounds are relatively well understood. The properties of the peripheral auditory system play a dominant role, and further processing occurs mostly within the frequency channels determined by these properties. At the level of the inferior colliculus, the highest brainstem station, representational complexity increases substantially due to the convergence of multiple processing streams. Undoubtedly, the most explored part of the auditory system, in terms of responses to natural sounds, is the primary auditory cortex. In spite of over 50 years of research, there is still no commonly accepted view of the nature of the population code for natural sounds in the auditory cortex. Neurons in the auditory cortex are believed by some to be primarily linear spectro-temporal filters, by others to respond to conjunctions of important sound features, or even to encode perceptual concepts such as "auditory objects." Whatever the exact mechanism, many studies consistently report a substantial increase in the variability of the response patterns of cortical neurons to natural sounds. The generation of such variation may be the main contribution of the auditory cortex to the coding of natural sounds.
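
One of the competing views named above, the linear spectro-temporal filter, can be made concrete with a short sketch: the predicted firing rate at each moment is a weighted sum of spectrogram energy over recent time-frequency bins. The array shapes, the random filter, and the function name below are illustrative assumptions, not a model taken from the article.

```python
# Sketch of a linear spectro-temporal filter (STRF-style) response model.
# All shapes, weights, and names are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_freq, n_time, n_lags = 32, 2000, 20  # frequency bins, time bins, history length

spectrogram = rng.random((n_freq, n_time))           # stand-in for a natural sound
strf = rng.standard_normal((n_freq, n_lags)) * 0.1   # the linear filter itself

def linear_strf_response(spec, strf):
    """Predicted rate per time bin: weighted sum over frequencies and time lags."""
    n_freq, n_lags = strf.shape
    n_time = spec.shape[1]
    rate = np.zeros(n_time)
    for lag in range(n_lags):
        # each lag contributes the spectrogram shifted back by `lag` time bins
        rate[lag:] += strf[:, lag] @ spec[:, : n_time - lag]
    return rate

rate = linear_strf_response(spectrogram, strf)
print(rate.shape)  # (2000,)
```

Under this view, fitting the filter from data reduces to a regularized linear regression from lagged spectrogram bins to the measured rate, which is one reason the model remains a common baseline despite the unresolved debate the abstract describes.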

Article

Plasticity of Information Processing in the Auditory System  

Andrew J. King

Information processing in the auditory system shows considerable adaptive plasticity across different timescales. This ranges from very rapid changes in neuronal response properties (on the order of hundreds of milliseconds when the statistics of sounds vary, or seconds to minutes when their behavioral relevance is altered) to more gradual changes that are shaped by experience and learning. Many aspects of auditory processing and perception are sculpted by sensory experience during sensitive or critical periods of development. This developmental plasticity underpins the acquisition of language and musical skills, matches neural representations in the brain to the statistics of the acoustic environment, and enables the neural circuits underlying sound localization to be calibrated by the acoustic consequences of growth-related changes in the anatomy of the body. Although the length of these critical periods depends on the aspect of auditory processing under consideration, varies across species and levels of the brain, and may be extended by experience and other factors, it is generally accepted that the potential for plasticity declines with age. Nevertheless, a substantial degree of plasticity is exhibited in adulthood. This is important for the acquisition of new perceptual skills; facilitates improvements in the detection or discrimination of fine differences in sound properties; and enables the brain to compensate for changes in inputs, including those resulting from hearing loss. In contrast to the plasticity that shapes the developing brain, perceptual learning normally requires the sound attribute in question to be behaviorally relevant and is driven by practice or training on specific tasks. Progress has recently been made in identifying the brain circuits involved and the role of neuromodulators in controlling plasticity, and an understanding of plasticity in the central auditory system is playing an increasingly important role in the treatment of hearing disorders.
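
As a toy illustration of the rapid adaptation to sound statistics mentioned above, the sketch below implements a simple divisive gain control in which the response is normalized by a running estimate of stimulus variance, so the same input drives weaker responses in a louder context. This particular model, its time constant, and the variable names are assumptions made for illustration, not a mechanism described in the article.

```python
# Toy model of rapid adaptation to stimulus statistics via divisive gain control.
# The model form and time constant are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
# Stimulus whose standard deviation doubles halfway through.
stim = np.concatenate([rng.standard_normal(500), 2.0 * rng.standard_normal(500)])

tau = 50.0          # adaptation time constant in samples (assumed)
alpha = 1.0 / tau
var_est = 1.0       # running variance estimate
response = np.empty_like(stim)
for i, s in enumerate(stim):
    var_est = (1 - alpha) * var_est + alpha * s**2  # track local stimulus power
    response[i] = s / np.sqrt(var_est + 1e-6)       # divisively normalized output

# After adaptation, response variance returns toward 1 in both halves,
# even though the input variance quadrupled in the second half.
print(response[:500].var(), response[600:].var())
```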