1–11 of 11 Results for: Neurolinguistics

Article

Susan Edwards and Christos Salis

Aphasia is an acquired language disorder subsequent to brain damage in the left hemisphere. It is characterized by diminished abilities to produce and understand both spoken and written language compared with the speaker’s presumed ability before the cerebral damage. The type and severity of the aphasia depend not only on the location and extent of the cerebral damage but also on the effect the lesion has on connecting areas of the brain. Type and severity of aphasia are diagnosed in comparison with assumed normal adult language. Language changes associated with normal aging are not classed as aphasia. The diagnosis and assessment of aphasia in children, which is unusual, takes account of age norms. The most common cause of aphasia is a cerebrovascular accident (CVA), commonly referred to as a stroke, but brain damage following traumatic head injury, such as that sustained in road accidents or from gunshot wounds, can also cause aphasia. Aphasia following such traumatic events is non-progressive, in contrast to aphasia arising from brain tumor, some types of infection, or language disturbances in progressive conditions such as Alzheimer’s disease, where the language disturbance increases as the disease progresses. The diagnosis of primary progressive aphasia (as opposed to non-progressive aphasia, the main focus of this article) is based on inclusion and exclusion criteria proposed by M. Marsel Mesulam in 2001. The inclusion criteria are difficulty with language that interferes with activities of daily living and aphasia as the most prominent symptom. The exclusion criteria are other non-degenerative disease or medical disorder, psychiatric diagnosis, impairment of episodic memory, visual memory, or visuo-perceptual abilities, and initial behavioral disturbance. Aphasia involves one or more of the building blocks of language (phonemes, morphology, lexis, syntax, and semantics), and the deficits occur in various clusters or patterns across the spectrum. The degree of impairment varies across modalities, with written language often, but not always, more affected than spoken language. In some cases, understanding of language is relatively preserved; in others, both production and understanding are affected. In addition to varying degrees of impairment in spoken and written language, any single component of language, or several, can be affected. At the most severe end of the spectrum, a person with aphasia may be unable to communicate by either speech or writing and may understand virtually nothing, or only very limited social greetings. At the least severe end of the spectrum, the aphasic speaker may experience occasional word-finding difficulties, often involving nouns; but unlike the difficulties in recalling proper nouns seen in normal aging, word-retrieval problems in mild aphasia extend to other word classes. Descriptions of different clusters of language deficits have led to the notion of syndromes. Despite great variation in the condition, the patterns of language deficit associated with damage to different areas of the brain have been influential in understanding language-brain relationships. Increasing sophistication in language assessment and neurological investigation is contributing to a greater, yet still incomplete, understanding of language-brain relationships.

Article

Matthew B. Winn and Peggy B. Nelson

Cochlear implants (CIs) are the most successful sensory implant in history, restoring the sensation of sound to thousands of persons who have severe to profound hearing loss. Implants do not recreate acoustic sound as most of us know it; instead, they convey a rough representation of the temporal envelope of signals. This sparse signal, derived from the envelopes of narrowband frequency filters, is sufficient to enable speech understanding in quiet environments for those who lose hearing as adults, and is enough for most children to develop spoken language skills. The variability between users is huge, however, and is only partially understood. CIs provide acoustic information that is sufficient for the recognition of some aspects of spoken language, especially information that can be conveyed by temporal patterns, such as syllable timing, consonant voicing, and manner of articulation. They are insufficient for conveying pitch cues and for separating speech from noise. There is a great need to improve our understanding of the functional outcomes of CI success beyond measuring percent correct for word and sentence recognition. Moreover, greater understanding of the variability experienced by children, especially children and families from various social and cultural backgrounds, is of paramount importance. Future developments will no doubt expand the use of this remarkable device.
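
The envelope-based processing described above is often illustrated with noise-vocoder simulations. The sketch below is a minimal, illustrative example of that idea, not the processing of any actual device: the channel edges, filter orders, and envelope cutoff are assumptions chosen only for demonstration.

```python
# Minimal noise-vocoder sketch of envelope-based processing (illustrative only).
# Assumes a 1-D float signal sampled well above 12 kHz so the top band edge is valid.
import numpy as np
from scipy.signal import butter, sosfilt, sosfiltfilt

def vocode(signal, fs, band_edges=(200, 500, 1200, 2500, 6000), env_cutoff=50.0):
    """Return a noise-vocoded version of `signal`: per-band envelopes on noise carriers."""
    out = np.zeros_like(signal, dtype=float)
    lp = butter(2, env_cutoff, btype="low", fs=fs, output="sos")      # envelope smoother
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        bp = butter(4, (lo, hi), btype="bandpass", fs=fs, output="sos")
        band = sosfilt(bp, signal)                                    # narrowband analysis filter
        envelope = sosfiltfilt(lp, np.abs(band))                      # rectify + low-pass = temporal envelope
        carrier = sosfilt(bp, np.random.randn(len(signal)))           # band-limited noise carrier
        out += envelope * carrier                                     # re-impose envelope on the carrier
    return out / (np.max(np.abs(out)) + 1e-12)                        # normalize to avoid clipping
```

Simulations of this general kind are commonly used to give normal-hearing listeners a rough impression of how sparse an envelope-only signal is.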

Article

There are two main theoretical traditions in semantics. One is based on realism, where meanings are described as relations between language and the world, often in terms of truth conditions. The other is cognitivistic, where meanings are identified with mental structures. This article presents some of the main ideas and theories within the cognitivist approach. A central tenet of cognitively oriented theories of meaning is that there are close connections between meaning structures and other cognitive processes. In particular, parallels between semantics and visual processes have been studied. As a complement, the theory of embodied cognition focuses on the relation between actions and components of meaning. One of the main methods of representing cognitive meaning structures is to use image schemas and idealized cognitive models. Such schemas focus on spatial relations between various semantic elements. Image schemas are often constructed using Gestalt psychological notions, including those of trajector and landmark, corresponding to figure and ground. In this tradition, metaphors and metonymies are considered to be central meaning-transforming processes. A related approach is force dynamics. Here, the semantic schemas are construed from forces and their relations rather than from spatial relations. Recent extensions involve cognitive representations of actions and events, which then form the basis for a semantics of verbs. A third approach is the theory of conceptual spaces. In this theory, meanings are represented as regions of semantic domains such as space, time, color, weight, size, and shape. For example, strong evidence exists that color words in a large variety of languages correspond to such regions. This approach has been extended to a general account of the semantics of some of the main word classes, including adjectives, verbs, and prepositions. The theory of conceptual spaces shows similarities to the older frame semantics and feature analysis, but it puts more emphasis on geometric structures. A general criticism against cognitive theories of semantics is that they only consider the meaning structures of individuals but neglect the social aspects of semantics, that is, that meanings are shared within a community. Recent theoretical proposals counter this by suggesting that semantics should be seen as a meeting of minds, that is, communicative processes that lead to the alignment of meanings between individuals. On this approach, semantics is seen as a product of communication, constrained by the cognitive mechanisms of the individuals.
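
To make the region-based representation concrete, here is a minimal, hypothetical sketch in which color terms name Voronoi regions around prototype points in a three-dimensional color space. The coordinates and prototype values are invented for illustration and carry no empirical weight.

```python
# Toy conceptual-spaces example: a word's meaning is the region of the color
# domain closest to its prototype (a Voronoi cell). All values are made up.
import numpy as np

# Hypothetical prototypes in an RGB-like 3-D color space.
PROTOTYPES = {
    "red":    np.array([0.9, 0.1, 0.1]),
    "green":  np.array([0.1, 0.8, 0.2]),
    "blue":   np.array([0.1, 0.2, 0.9]),
    "yellow": np.array([0.9, 0.9, 0.1]),
}

def color_term(point):
    """Assign a color term to `point` by nearest prototype (i.e., its Voronoi region)."""
    return min(PROTOTYPES, key=lambda term: np.linalg.norm(point - PROTOTYPES[term]))

print(color_term(np.array([0.8, 0.3, 0.2])))  # -> "red" under these assumed prototypes
```

The choice of nearest-prototype regions is one simple way to obtain the convex regions that the theory emphasizes.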

Article

Research in neurolinguistics examines how language is organized and processed in the human brain. The findings from neurolinguistic studies of language can inform our understanding of the basic ingredients of language and the operations they undergo. In the domain of the lexicon, a major debate concerns whether and to what extent the morpheme serves as a basic unit of linguistic representation, and in turn whether and under what circumstances the processing of morphologically complex words involves operations that identify, activate, and combine morpheme-level representations during lexical processing. Models positing some role for morphemes argue that complex words are processed via morphological decomposition and composition either in the general case (full-decomposition models) or only under certain circumstances (dual-route models), while other models posit no role for morphemes (non-morphological models), arguing instead that complex words are related to their constituents not via morphological identity but via associations among whole-word representations or via similarity in formal and/or semantic features. Two main approaches to investigating the role of morphemes from a neurolinguistic perspective are neuropsychology, in which complex word processing is typically investigated in cases of brain insult or neurodegenerative disease, and brain imaging, which makes it possible to examine the temporal dynamics and neuroanatomy of complex word processing as it occurs in the brain. Neurolinguistic studies on morphology have examined whether the processing of complex words involves brain mechanisms that rapidly segment the input into potential morpheme constituents, how and under what circumstances morpheme representations are accessed from the lexicon, and how morphemes are combined to form complex morphosyntactic and morpho-semantic representations. Findings from this literature broadly converge in suggesting a role for morphemes in complex word processing, although questions remain regarding the precise time course by which morphemes are activated, the extent to which morpheme access is constrained by semantic or form properties, and the brain mechanisms by which morphemes are ultimately combined into complex representations.
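
As a purely illustrative toy (not any published model), the sketch below contrasts the two processing routes mentioned above: whole-word access from a stored lexicon versus decomposition into stem and suffix. The mini-lexicon and suffix list are assumptions made up for the example.

```python
# Toy contrast between whole-word lookup and morphological decomposition.
# The lexicon and suffix inventory are illustrative, not real model parameters.
WHOLE_WORD_LEXICON = {"walked", "walk", "teacher", "teach", "went"}
SUFFIXES = ["ed", "er", "ing", "s"]

def decompose(word):
    """Return (stem, suffix) if the word segments into known morphemes, else None."""
    for suffix in SUFFIXES:
        stem = word[: -len(suffix)]
        if word.endswith(suffix) and stem in WHOLE_WORD_LEXICON:
            return stem, suffix
    return None

def dual_route(word):
    """Whole-word access when a stored form exists; otherwise try decomposition."""
    if word in WHOLE_WORD_LEXICON:
        return ("whole-word", word)
    parsed = decompose(word)
    return ("decomposed", parsed) if parsed else ("unknown", word)

print(dual_route("walked"))    # stored form wins: ('whole-word', 'walked')
print(dual_route("teaching"))  # no stored form: ('decomposed', ('teach', 'ing'))
```

A full-decomposition account would, roughly, always call `decompose` first; a non-morphological account would rely only on whole-word entries and graded form/meaning similarity.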

Article

Mineharu Nakayama

The Japanese psycholinguistics research field is moving rapidly in many different directions, as it includes various sub-fields of linguistics (e.g., phonetics/phonology, syntax, semantics, pragmatics, discourse studies). Naturally, diverse studies have reported intriguing findings that shed light on our language mechanism. This article presents a brief overview of some of the notable early 21st-century studies, mainly from the language acquisition and processing perspectives. The topics are divided into sections on the sound system, the script forms, reading and writing, morpho-syntactic studies, word and sentential meanings, and pragmatics and discourse studies. Studies on special populations are also mentioned. Studies on the Japanese sound system have advanced our understanding of L1 and L2 (first and second language) acquisition and processing. For instance, more evidence is provided that infants form adult-like phonological grammar by 14 months in L1, and a dissociation of prosody from comprehension is reported in L2. Various cognitive factors, as well as the L1, influence the L2 acquisition process. As Japanese language users employ three script forms (hiragana, katakana, and kanji) in a single sentence, orthographic processing research reveals multiple pathways for processing information and the influence of memory. Adult script decoding and lexical processing have been well studied, and research data from special populations further help us to understand our vision-to-language mapping mechanism. Morpho-syntactic and semantic studies include a long debate between the nativist (generative) and statistical learning approaches in L1 acquisition. In particular, inflectional morphology and quantificational scope interaction in L1 acquisition reveal the pros and cons of each approach taken on its own. Investigating processing mechanisms means studying cognitive/perceptual devices. Relative clause processing has been much discussed in Japanese because Japanese has a different word order (SOV) from English (SVO), allows unpronounced pronouns and pre-verbal word permutations, and has no relative clause marking at the verbal ending (i.e., the ending is morphologically the same as a matrix ending). Behavioral and neurolinguistic data increasingly support incremental processing, as in SVO languages, and an expectancy-driven processor in our L1 brain. L2 processing, however, requires more study to uncover its mechanism, as the literature is scarce on both L2 English by Japanese speakers and L2 Japanese by non-Japanese speakers. Pragmatic and discourse processing is also an area that needs to be explored further. Despite the typological difference between English and Japanese, the studies cited here indicate that our acquisition and processing devices seem to adjust locally while maintaining the universal mechanism.

Article

Laurie Beth Feldman and Judith F. Kroll

We summarize findings from across a range of methods, including behavioral measures of overall processing speed and accuracy, electrophysiological indices that tap into the early time course of language processing, and neural measures using structural and functional imaging. We argue that traditional claims about rigid constraints on the ability of late bilinguals to exploit the meaning and form of the morphology and morphosyntax in a second language should be revised, so as to move away from all-or-none command of structures motivated by strict dichotomies among linguistic categories of morphology. We describe how the dynamics of morphological processing in monolingual and bilingual speakers alike are not easily characterized in terms of the potential to decompose words into their constituent morphemes, and how morphosyntactic processing is not easily characterized in terms of categories of structures that are learnable and those that are unlearnable by bilingual and nonnative speakers. Instead, we emphasize the high degree of variability across individuals, and the plasticity within individuals, in the ability to successfully learn and use even subtle aspects of a second language. Further, both of a bilingual’s languages become active even when only one language is engaged, and this parallel activation has consequences that shape both languages; their influence is thus not unidirectional, as was traditionally assumed. We briefly discuss the nature of possible constraints and directions for future research.

Article

Words are the backbone of language activity. An average 20-year-old native speaker of English will have a vocabulary of about 42,000 words. These words are connected with one another within the larger network of lexical knowledge that is termed the mental lexicon. The metaphor of a mental lexicon has played a central role in the development of theories of language and mind and has provided an intellectual meeting ground for psychologists, neurolinguists, and psycholinguists. Research on the mental lexicon has shown that lexical knowledge is not static. New words are acquired throughout the life span, creating very large increases in the richness of connectivity within the lexical system and changing the system as a whole. Because most people in the world speak more than one language, the default mental lexicon may be a multilingual one. Such a mental lexicon differs substantially from the lexicon of an individual language and would lead to the creation of new integrated lexical systems due to the pressure on the system to organize and access lexical knowledge in a homogeneous manner. The mental lexicon contains both word knowledge and morphological knowledge. There is also evidence that it contains multiword strings such as idioms and lexical bundles. This speaks in support of a nonrestrictive “big tent” view of units of representation within the mental lexicon. Changes in research on lexical representations in language processing have emphasized lexical action and the role of learning. Although the metaphor of words as distinct representations within a lexical store has served to advance knowledge, it is more likely that words are best seen as networks of activity that are formed and affected by experience and learning throughout the life span.

Article

The Motor Theory of Speech Perception is a proposed explanation of the fundamental relationship between the way speech is produced and the way it is perceived. Associated primarily with the work of Liberman and colleagues, it posited the active participation of the motor system in the perception of speech. Early versions of the theory contained elements that later proved untenable, such as the expectation that the neural commands to the muscles (as seen in electromyography) would be more invariant than the acoustics. Support drawn from categorical perception (in which discrimination is quite poor within linguistic categories but excellent across boundaries) was called into question by studies showing means of improving within-category discrimination and finding similar results for nonspeech sounds and for animals perceiving speech. Evidence for motor involvement in perceptual processes nonetheless continued to accrue, and related motor theories have been proposed. Neurological and neuroimaging results have yielded a great deal of evidence consistent with variants of the theory, but they highlight the issue that there is no single “motor system,” and so different components appear in different contexts. Assigning the appropriate amount of explanatory weight to the various systems that interact to produce the perception of speech is an ongoing process, but it is clear that some of those systems will reflect the motor control of speech.

Article

Neurolinguistic approaches to morphology encompass the main theories of morphological representation and processing in the human mind, such as full-listing, full-parsing, and hybrid dual-route models, as well as the experimental evidence acquired to support these theories, which draws on different neurolinguistic paradigms (visual and auditory priming, violation, long-lag priming, picture-word interference, etc.) and methods (electroencephalography [EEG]/event-related brain potential [ERP], functional magnetic resonance imaging [fMRI], neuropsychology, and so forth).

Article

Valentina Bambini and Paolo Canal

Neurolinguistics is devoted to the study of the language-brain relationship, using the methodologies of neuropsychology and cognitive neuroscience to investigate how linguistic categories are grounded in the brain. Although the brain infrastructure for language is invariant across cultures, neural networks might operate differently depending on language-specific features. In this respect, neurolinguistic research on the Romance languages, mostly French, Italian, and Spanish, has proved key to advancing the field, especially with reference to how the neural infrastructure for language works in systems more richly inflected than English. Among the most popular domains of investigation are agreement patterns, where studies on Spanish and Italian showed that agreement across features and domains (e.g., number or gender agreement) engages partially different neural substrates. Also, studies measuring the electrophysiological response suggested that agreement processing is a composite mechanism involving different temporal steps. Another domain is the noun-verb distinction, where studies on the Romance languages indicated that the brain is more sensitive to the greater morphosyntactic engagement of verbs compared with nouns than to the grammatical class distinction per se. Concerning language disorders, the Romance languages shed new light on inflectional errors in aphasic speakers and contributed to revising the notion of agrammatism, which is not simply omission of morphemes but might involve incorrect substitution from the inflectional paradigm. Also, research in the Romance domain showed variation in the degree and pattern of reading impairments due to language-specific segmental and suprasegmental features. Despite these important contributions, the Romance family, with its multitude of languages and dialects and a richly documented diachronic evolution, remains an underutilized ‘treasure house’ for neurolinguistic research, with significant room for investigations exploring the brain signatures of language variation in time and space and refining the links between linguistic categories and neurobiological primitives.

Article

Over the past decades, psycholinguistic aspects of word processing have made a considerable impact on views of language theory and language architecture. In the quest for the principles governing the ways human speakers perceive, store, access, and produce words, inflection has provided a challenging realm of scientific inquiry and a battlefield for radically opposing views. It is somewhat ironic that some of the most influential cognitive models of inflection have long been based on evidence from an inflectionally impoverished language like English, where the notions of inflectional regularity, (de)composability, predictability, phonological complexity, and default productivity appear to be mutually implied. An analysis of more “complex” inflection systems, such as those of the Romance languages, shows that this mutual implication is not a universal property of inflection but a contingency of poorly contrastive, nearly isolating inflection systems. Far from presenting minor faults in a solid theoretical edifice, the Romance evidence appears to call into question the division of labor between rules and exceptions, the dichotomy between on-line processing and long-term memory, and the distinction between morphological processes and lexical representations. A dynamic, learning-based view of inflection is more compatible with these data, whereby morphological structure is an emergent property of the ways inflected forms are processed and stored, grounded in universal principles of lexical self-organization and their neuro-functional correlates.