1-10 of 46 Results for: Psycholinguistics

Article

Functional categories carry little or no semantic content by themselves yet contribute crucially to sentence structure. In the generative framework, they are assumed to mark and head functional projections in the basic hierarchical structure underlying each phrase or sentence. Given how tightly they are interwoven with grammar, child language researchers have long been drawn to the development of functional categories. A child must differentiate functional categories from lexical categories and relate each of them to the hidden hierarchical structure of the phrase or sentence. Learning a functional category is no easy task: it requires developing several dimensions of linguistic knowledge, including the lexical realization of the functional category in the ambient language, the specific grammatical function it serves, the abstract underlying structure, and the semantic properties of the associated structure. A central issue in the acquisition of functional categories is whether children have access to them early in language development, and differing accounts have been proposed. According to the maturational view, functional categories are absent from children's initial grammar and mature later. In contrast, the continuity view assumes that children have continuous access to functional categories throughout language development. Cross-linguistic evidence from production and experimental studies has accumulated in support of the continuity hypothesis. Mandarin Chinese has a rich inventory of function words, though it lacks overt inflectional markers. De, aspect markers, ba, and sentence-final particles are among the most commonly used function words and play a fundamental role in Mandarin Chinese sentence structure: they are functional categories that head various functional projections in the hierarchical structure. Acquisition studies show that these function words emerge early in development and that children's use of them is mostly target-like, offering evidence for the continuity view of functional categories as well as insights into child grammar in Mandarin Chinese.

Article

Words are the backbone of language activity. An average 20-year-old native speaker of English will have a vocabulary of about 42,000 words. These words are connected with one another within the larger network of lexical knowledge that is termed the mental lexicon. The metaphor of a mental lexicon has played a central role in the development of theories of language and mind and has provided an intellectual meeting ground for psychologists, neurolinguists, and psycholinguists. Research on the mental lexicon has shown that lexical knowledge is not static. New words are acquired throughout the life span, creating very large increases in the richness of connectivity within the lexical system and changing the system as a whole. Because most people in the world speak more than one language, the default mental lexicon may be a multilingual one. Such a mental lexicon differs substantially from the lexicon of an individual language, and the pressure on the system to organize and access lexical knowledge in a homogeneous manner leads to the creation of new, integrated lexical systems. The mental lexicon contains both word knowledge and morphological knowledge. There is also evidence that it contains multiword strings, such as idioms and lexical bundles, which speaks in support of a nonrestrictive "big tent" view of the units of representation within the mental lexicon. Changes in research on lexical representations in language processing have emphasized lexical action and the role of learning. Although the metaphor of words as distinct representations within a lexical store has served to advance knowledge, words are more likely best seen as networks of activity that are formed and affected by experience and learning throughout the life span.

Article

Over the past decades, psycholinguistic aspects of word processing have made a considerable impact on views of language theory and language architecture. In the quest for the principles governing the ways human speakers perceive, store, access, and produce words, inflection has provided a challenging realm of scientific inquiry, and a battlefield for radically opposing views. It is somewhat ironic that some of the most influential cognitive models of inflection have long been based on evidence from an inflectionally impoverished language like English, where the notions of inflectional regularity, (de)composability, predictability, phonological complexity, and default productivity appear to be mutually implied. An analysis of more "complex" inflection systems such as those of the Romance languages shows that this mutual implication is not a universal property of inflection, but a contingency of poorly contrastive, nearly isolating inflection systems. Far from presenting minor faults in a solid theoretical edifice, Romance evidence appears to call into question the division of labor between rules and exceptions, the dichotomy between on-line processing and long-term memory, and the distinction between morphological processes and lexical representations. A dynamic, learning-based view of inflection is more compatible with these data, whereby morphological structure is an emergent property of the ways inflected forms are processed and stored, grounded in universal principles of lexical self-organization and their neuro-functional correlates.

Article

Yu-Ying Chuang and R. Harald Baayen

Naive discriminative learning (NDL) and linear discriminative learning (LDL) are simple computational algorithms for lexical learning and lexical processing. Both NDL and LDL assume that learning is discriminative, driven by prediction error, and that it is this error that calibrates the association strength between input and output representations. Both words' forms and their meanings are represented by numeric vectors, and mappings between forms and meanings are set up. For comprehension, form vectors predict meaning vectors; for production, meaning vectors map onto form vectors. These mappings can be learned incrementally, approximating how children learn the words of their language. Alternatively, optimal mappings representing the end state of learning can be estimated. The NDL and LDL algorithms are incorporated in a computational theory of the mental lexicon, the 'discriminative lexicon'. The model performs well both in production and comprehension accuracy and in predicting aspects of lexical processing, including morphological processing, across a wide range of experiments. Since, mathematically, NDL and LDL implement multivariate multiple regression, the 'discriminative lexicon' provides a cognitively motivated statistical modeling approach to lexical processing.
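The core idea can be sketched in a few lines of NumPy. This is a toy illustration only, with hypothetical three-word form and meaning vectors, not the published model or its implementation: the least-squares solution gives the end-state mapping, and the error-driven loop shows the incremental alternative.

```python
import numpy as np

# Hypothetical toy lexicon: 3 words, each with a 4-dimensional form vector
# (e.g., sublexical-cue indicators) and a 2-dimensional meaning vector.
C = np.array([[1., 0., 1., 0.],   # form vector, word 1
              [0., 1., 1., 0.],   # word 2
              [1., 1., 0., 1.]])  # word 3
S = np.array([[0.9, 0.1],         # meaning vector, word 1
              [0.2, 0.8],         # word 2
              [0.5, 0.5]])        # word 3

# Comprehension, end state of learning: the optimal form-to-meaning
# mapping F is the least-squares solution of C @ F = S, i.e.,
# multivariate multiple regression.
F, *_ = np.linalg.lstsq(C, S, rcond=None)
S_hat = C @ F                     # predicted meaning vectors

# Production reverses the direction: meaning-to-form G solves S @ G = C.
G, *_ = np.linalg.lstsq(S, C, rcond=None)

# Incremental alternative: error-driven (Widrow-Hoff-style) updates,
# where the prediction error calibrates the association strengths.
F_inc = np.zeros((4, 2))
eta = 0.1                         # learning rate
for _ in range(2000):
    for c, s in zip(C, S):
        error = s - c @ F_inc     # prediction error drives learning
        F_inc += eta * np.outer(c, error)
```

In this consistent toy system the predicted meanings reproduce the targets exactly; in a realistic lexicon, with many more words than form dimensions, the regression mapping is only approximate, and the quality of the approximation is what carries the behavioral predictions.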

Article

Nowadays, computer models of human language are instrumental to millions of people, who use them every day with little if any awareness of their existence and role. Their exponential development has had a huge impact on daily life through practical applications like machine translation or automated dialogue systems. It has also deeply affected the way we think about language as an object of scientific inquiry. Computer modeling of the Romance languages has helped scholars develop new theoretical frameworks and new ways of looking at traditional approaches. In particular, computer modeling of lexical phenomena has had a profound influence on some fundamental issues in human language processing, such as the purported dichotomy between rules and exceptions, or grammar and lexicon, the inherently probabilistic nature of speakers' perception of analogy and word-internal structure, and their ability to generalize to novel items from attested evidence. Although it is probably premature to anticipate and assess the prospects of these models, their current impact on language research can hardly be overestimated. In a few years, data-driven assessment of theoretical models is expected to play an irreplaceable role in setting the pace of progress in all branches of the language sciences, from typological and pragmatic approaches to cognitive and formal ones.

Article

Valentina Bambini and Paolo Canal

Neurolinguistics is devoted to the study of the language-brain relationship, using the methodologies of neuropsychology and cognitive neuroscience to investigate how linguistic categories are grounded in the brain. Although the brain infrastructure for language is invariable across cultures, neural networks might operate differently depending on language-specific features. In this respect, neurolinguistic research on the Romance languages, mostly French, Italian, and Spanish, has proved key to advancing the field, especially with reference to how the neural infrastructure for language works in systems more richly inflected than English. Among the most popular domains of investigation are agreement patterns, where studies on Spanish and Italian showed that agreement across features and domains (e.g., number or gender agreement) engages partially different neural substrates. Studies measuring the electrophysiological response also suggested that agreement processing is a composite mechanism involving different temporal steps. Another domain is the noun-verb distinction, where studies on the Romance languages indicated that the brain is more sensitive to the greater morphosyntactic engagement of verbs compared with nouns than to the grammatical class distinction per se. Concerning language disorders, the Romance languages shed new light on inflectional errors in aphasic speakers and contributed to revising the notion of agrammatism, which involves not simply the omission of morphemes but may also involve incorrect substitutions from the inflectional paradigm. Research in the Romance domain also showed variation in the degree and pattern of reading impairments due to language-specific segmental and suprasegmental features.
Despite these important contributions, the Romance family, with its multitude of languages and dialects and a richly documented diachronic evolution, remains an underutilized 'treasure house' for neurolinguistic research, with significant room for investigations exploring the brain signatures of language variation in time and space and refining the links between linguistic categories and neurobiological primitives.

Article

Amalia Arvaniti

Prosody is an umbrella term used to cover a variety of interconnected and interacting phenomena, namely stress, rhythm, phrasing, and intonation. The phonetic expression of prosody relies on a number of parameters, including duration, amplitude, and fundamental frequency (F0). The same parameters are also used to encode lexical contrasts (such as tone), as well as paralinguistic phenomena (such as anger, boredom, and excitement). Further, the exact function and organization of the phonetic parameters used for prosody differ across languages. These considerations make it imperative to distinguish the linguistic phenomena that make up prosody from their phonetic exponents, and similarly to distinguish between the linguistic and paralinguistic uses of the latter. A comprehensive understanding of prosody relies on the idea that speech is prosodically organized into phrasal constituents, the edges of which are phonetically marked in a number of ways, for example, by articulatory strengthening at the beginning and lengthening at the end. Phrases are also internally organized, either by stress, that is, around syllables that are more salient than others (as in English and Spanish), or by the repetition of a relatively stable tonal pattern over short phrases (as in Korean, Japanese, and French). Both types of organization give rise to rhythm, the perception of speech as consisting of groups exhibiting a similar, repeated pattern. Tonal specification over phrases is also used for intonation purposes, that is, to mark phrasal boundaries and to express information structure and pragmatic meaning. Taken together, the components of prosody help with the organization and planning of speech, while prosodic cues are used by listeners during both language acquisition and speech processing.
Importantly, prosody does not operate independently of segments; rather, it profoundly affects segment realization, making the incorporation of an understanding of prosody into experimental design essential for most phonetic research.

Article

Neurolinguistic approaches to morphology encompass the main theories of morphological representation and processing in the human mind, such as full-listing, full-parsing, and hybrid dual-route models, as well as the experimental evidence acquired to support these theories, which draws on different neurolinguistic paradigms (visual and auditory priming, violation, long-lag priming, picture-word interference, etc.) and methods (electroencephalography [EEG]/event-related brain potentials [ERPs], functional magnetic resonance imaging [fMRI], neuropsychology, and so forth).

Article

Petar Milin and James P. Blevins

Studies of the structure and function of paradigms are as old as the Western grammatical tradition. The central role accorded to paradigms in traditional approaches largely reflects the fact that paradigms exhibit systematic patterns of interdependence that facilitate processes of analogical generalization. The recent resurgence of interest in word-based models of morphological processing and morphological structure more generally has provoked a renewed interest in paradigmatic dimensions of linguistic structure. Current methods for operationalizing paradigmatic relations and determining the behavioral correlates of these relations extend paradigmatic models beyond their traditional boundaries. The integrated perspective that emerges from this work is one in which variation at the level of individual words is not meaningful in isolation, but rather guides the association of words to paradigmatic contexts that play a role in their interpretation.
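One way paradigmatic relations have been operationalized quantitatively, assumed here purely for illustration, is through information-theoretic measures such as the Shannon entropy of a paradigm's frequency distribution, which summarizes how usage is spread across a word's inflected forms. A minimal sketch with hypothetical corpus counts:

```python
import math

def paradigm_entropy(form_counts):
    """Shannon entropy (in bits) of the relative-frequency
    distribution over a word's inflected forms."""
    total = sum(form_counts.values())
    probs = [c / total for c in form_counts.values() if c > 0]
    return -sum(p * math.log2(p) for p in probs)

# Hypothetical corpus counts for the paradigm of English 'walk'
counts = {"walk": 60, "walks": 20, "walked": 15, "walking": 5}
H = paradigm_entropy(counts)
```

On such measures, a paradigm whose forms are used evenly has maximal entropy, while one dominated by a single form has entropy near zero; behavioral correlates are then sought by testing whether such paradigm-level quantities predict, for example, lexical decision latencies for the individual forms.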

Article

Speakers can transfer meanings to each other because they represent them in a perceptible form. Phonology and syntactic structure are two levels of linguistic form, and morphemes are situated in between them. Like phonemes, they have a phonological component, and like syntactic structures, they carry relational information. A distinction can be made between inflectional and lexical morphology. Both are devices in the service of communicative efficiency, highlighting grammatical and semantic relations, respectively. Morphological structure has also been studied in psycholinguistics, especially by researchers interested in the process of visual word recognition. They found that a word is recognized more easily when it belongs to a large morphological family, which suggests that the mental lexicon is structured along morphological lines. The semantic transparency of a word's morphological structure plays an important role. Several findings further suggest that morphology matters at a pre-lexical processing level as well: morphologically complex words appear to be subjected to a process of blind morphological decomposition before lexical access is attempted.