Psycholinguistics is the study of how language is acquired, represented, and used by the human mind; it draws on knowledge about both language and cognitive processes. A central topic of debate in psycholinguistics concerns the balance between storage and processing. This debate is especially evident in research on morphology, the study of word structure, where several theoretical issues have arisen concerning how (or whether) morphology is represented and what function morphology serves in the processing of complex words. Five theoretical approaches have emerged that differ substantially in the emphasis placed on the role of morphemic representations during the processing of morphologically complex words. The first approach minimizes processing by positing that all words, even morphologically complex ones, are stored and recognized as whole units, without the use of morphemic representations. The second approach posits that words are represented and processed in terms of morphemic units. The third approach is a mixture of the first two and posits that a whole-word access route and a decomposition route operate in parallel. A fourth approach posits that both whole-word representations and morphemic representations are used, and that these two types of information interact. A fifth approach proposes that morphology is not explicitly represented but rather emerges from the co-activation of orthographic/phonological representations and semantic representations. These competing approaches have been evaluated with a wide variety of empirical methods examining, for example, morphological priming, the role of constituent and word frequency, and the role of morphemic position. For the most part, the evidence points to the involvement of morphological representations during the processing of complex words; however, the specific way in which these representations are used is not yet fully known.
Christina L. Gagné
Research in neurolinguistics examines how language is organized and processed in the human brain. The findings from neurolinguistic studies on language can inform our understanding of the basic ingredients of language and the operations they undergo. In the domain of the lexicon, a major debate concerns whether and to what extent the morpheme serves as a basic unit of linguistic representation, and in turn whether and under what circumstances the processing of morphologically complex words involves operations that identify, activate, and combine morpheme-level representations during lexical processing. Alternative models positing some role for morphemes argue that complex words are processed via morphological decomposition and composition in the general case (full-decomposition models), or only under certain circumstances (dual-route models), while other models do not posit a role for morphemes (non-morphological models), instead arguing that complex words are related to their constituents not via morphological identity, but either via associations among whole-word representations or via similarity in formal and/or semantic features. Two main approaches to investigating the role of morphemes from a neurolinguistic perspective are neuropsychology, in which complex word processing is typically investigated in cases of brain insult or neurodegenerative disease, and brain imaging, which makes it possible to examine the temporal dynamics and neuroanatomy of complex word processing as it occurs in the brain. Neurolinguistic studies on morphology have examined whether the processing of complex words involves brain mechanisms that rapidly segment the input into potential morpheme constituents, how and under what circumstances morpheme representations are accessed from the lexicon, and how morphemes are combined to form complex morphosyntactic and morpho-semantic representations. 
Findings from this literature broadly converge in suggesting a role for morphemes in complex word processing, although questions remain regarding the precise time course by which morphemes are activated, the extent to which morpheme access is constrained by semantic or form properties, and the brain mechanisms by which morphemes are ultimately combined into complex representations.
Healthy aging is associated with many cognitive, linguistic, and behavioral changes. For example, adults’ reaction times slow on many tasks as they grow older, while their memories appear to fade, especially for apparently basic linguistic information such as other people’s names. These changes have traditionally been thought to reflect declines in the processing power of human minds and brains as they age. However, from the perspective of the information-processing paradigm that dominates the study of mind, the question of whether cognitive processing capacities actually decline across the life span can only be scientifically answered in relation to functional models of the information processes that are presumed to be involved in cognition. Consider, for example, the problem of recalling someone’s name. We are usually reminded of the names of friends on a regular basis, and this makes us good at remembering them. However, as we move through life, we inevitably learn more names. Sometimes we hear these new names only once. As we learn each new name, the average exposure we will have had to any individual name we know is likely to decline, while the number of different names we know is likely to increase. This in turn is likely to make the task of recalling a particular name more complex. One consequence of this is as follows: If Mary can only recall names with 95% accuracy at age 60—when she knows 900 names—does she necessarily have a worse memory than she did at age 16, when she could recall any of only 90 names with 98% accuracy? Answering the question of whether Mary’s memory for names has actually declined (or even improved) will require some form of quantification of Mary’s knowledge of names at any given point in her life and the definition of a quantitative model that predicts expected recall performance for a given amount of name knowledge, as well as an empirical measure of the accuracy of the model across a wide range of circumstances.
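The arithmetic behind Mary’s example can be made concrete. A minimal sketch, using only the figures given in the example above, shows why raw accuracy is a misleading measure on its own: although Mary’s accuracy drops from 98% to 95%, the number of names she can successfully retrieve grows almost tenfold.

```python
# Figures from Mary's example: vocabulary size and recall accuracy at two ages.
names_16, accuracy_16 = 90, 0.98    # age 16: 90 names known
names_60, accuracy_60 = 900, 0.95   # age 60: 900 names known

# Expected number of names retrievable at each age.
recalled_16 = names_16 * accuracy_16   # ~88 names
recalled_60 = names_60 * accuracy_60   # ~855 names

print(f"Age 16: {recalled_16:.0f} of {names_16} names retrievable")
print(f"Age 60: {recalled_60:.0f} of {names_60} names retrievable")
```

Whether this pattern counts as decline or improvement is precisely the modeling question raised in the text: a functional model must predict expected accuracy for a given amount of name knowledge before any comparison across ages is meaningful.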
Until the early 21st century, the study of cognition and aging was dominated by approaches that failed to meet these requirements. Researchers simply established that Mary’s name recall was less accurate at a later age than it was at an earlier one, and took this as evidence that Mary’s memory processes had declined in some significant way. However, as computational approaches to studying cognitive—and especially psycholinguistic—processes and processing became more widespread, a number of matters related to the development of processing across the life span became apparent: First, the complexity involved in establishing whether or not Mary’s name recall did indeed become less accurate with age began to be better understood. Second, when the impact of learning on processing was controlled for, it became apparent that at least some processes showed no signs of decline at all in healthy aging. Third, the degree to which the environment—both in terms of its structure and its susceptibility to change—further complicates our understanding of life-span cognitive performance also began to be better comprehended. These new findings not only promise to change our understanding of healthy cognitive aging, but also seem likely to alter our conceptions of cognition and language themselves.
Throughout the 20th century, structuralist and generative linguists argued that the study of the language system (langue, competence) must be separated from the study of language use (parole, performance). This view of language has been called into question by usage-based linguists, who argue that the structure and organization of a speaker’s linguistic knowledge are the product of language use or performance. On this account, language is seen as a dynamic system of fluid categories and flexible constraints that are constantly restructured and reorganized under the pressure of domain-general cognitive processes that are involved not only in the use of language but also in other cognitive phenomena such as vision and (joint) attention. The general goal of usage-based linguistics is to develop a framework for the analysis of the emergence of linguistic structure and meaning. In order to understand the dynamics of the language system, usage-based linguists study how languages evolve, both in history and in language acquisition. One aspect that plays an important role in this approach is frequency of occurrence. As frequency strengthens the representation of linguistic elements in memory, it facilitates the activation and processing of words, categories, and constructions, which in turn can have long-lasting effects on the development and organization of the linguistic system. A second aspect that has been very prominent in the usage-based study of grammar concerns the relationship between lexical and structural knowledge. Since abstract representations of linguistic structure are derived from language users’ experience with concrete linguistic tokens, grammatical patterns are generally associated with particular lexical expressions.
Birgit Alber and Sabine Arndt-Lappe
Work on the relationship between morphology and metrical structure has mainly addressed three questions: (1) How does morphological constituent structure map onto prosodic constituent structure, i.e., the structure that is responsible for metrical organization? (2) What are the reflexes of morphological relations between complex words and their bases in metrical structure? (3) How variable or categorical are metrical alternations? The focus of the work addressing question 1 has been on establishing prosodic constituency with supporting evidence from morphological constituency. Pertinent prosodic constituents are the prosodic (or phonological) word, the metrical foot, the syllable, and the mora (Selkirk, 1980). For example, the phonological behavior of certain affixes has been used to argue that they are word-internal prosodic words, which means that prosodic words may be recursive structures (e.g., Aronoff & Sridhar, 1987). Similarly, the shape of truncated words has been used as evidence for the shape of the metrical foot (cf., e.g., Alber & Arndt-Lappe, 2012). Question 2 concerns morphologically conditioned metrical alternations. Stress alternations have received particular attention. Affixation processes differ in whether or not they exhibit stress alternations: affixes that trigger stress alternations are commonly referred to as 'stress-shifting' affixes; those that do not are referred to as 'stress-preserving' affixes. The fact that morphological categories differ in their stress behavior has figured prominently in theoretical debates about the phonology-morphology interface, in particular between accounts that assume a stratal architecture with interleaved phonology-morphology modules (such as lexical phonology, esp. Kiparsky, 1982, 1985) and those that assume that morphological categories come with their own phonologies (e.g., Inkelas, Orgun, & Zoll, 1997; Inkelas & Zoll, 2007; Orgun, 1996).
Question 3 looks at metrical variation and its relation to the processing of morphologically complex words. A growing body of recent empirical work shows that some metrical alternations are variable (e.g., Collie, 2008; Dabouis, 2019), meaning that different stress patterns occur within a single morphological category. Theoretical explanations of the phenomenon vary depending on the framework adopted. What unites pertinent research, however, is the finding that the variation is codetermined by measures usually associated with lexical storage: semantic transparency, productivity, and measures of lexical frequency.
Huei-ling Lai and Yao-Ying Lai
Sentential meaning that emerges compositionally is not always a transparent one-to-one mapping from syntactic structure to semantic representation; oftentimes, the meaning is underspecified (morphosyntactically unsupported), not explicitly conveyed via overt linguistic devices, and must be obtained during comprehension. The associated issues are explored by examining linguistic factors that modulate the construal of underspecified iterative meaning in Mandarin Chinese (MC). In this case, the factors include the lexical aspect of verbs, the interval lengths denoted by post-verbal durative adverbials, and the boundary specificity denoted by preverbal versus post-verbal temporal adverbials. The composition of a punctual verb (e.g., jump, poke) with a durative temporal adverbial, as in Zhangsan tiao-le shi fenzhong (Zhangsan jump-LE ten minute, ‘Zhangsan jumped for ten minutes’), engenders an iterative meaning that is morphosyntactically absent yet fully understood by comprehenders. By contrast, the counterpart involving a durative verb (e.g., run, swim), as in Zhangsan pao-le shi fenzhong (Zhangsan run-LE ten minute, ‘Zhangsan ran for ten minutes’), engenders a continuous reading with an identical syntactic structure. Psycholinguistically, processing such underspecified meaning in real time has been shown to require greater effort than processing the transparent counterpart. This phenomenon has been attested cross-linguistically; yet how it is manifested in MC, a tenseless language, remains understudied. In addition, durative temporal adverbials like yizhi/buduandi ‘continuously,’ which appear preverbally in MC, also engender an iterative meaning when composed with a punctual verb, as in Zhangsan yizhi/buduandi tiao (Zhangsan continuously jump, ‘Zhangsan jumped continuously’). Crucially, unlike the post-verbal adverbials, which encode specific boundaries for the denoted intervals, these preverbal adverbials refer to continuous time spans without specific endpoints.
The difference in boundary specificity between the two adverbial types, both being durative, is hypothesized to modulate the processing profiles of aspectual comprehension. Results of an online (timed) questionnaire showed (a) an effect of boundary specificity: sentences with post-verbal adverbials that encode [+specific boundary] were rated lower in the naturalness-rating task and induced longer response times (RTs) in iterativity judgements than those with preverbal adverbials that encode [−specific boundary]; and (b) in composition with post-verbal adverbials that are [+specific boundary], sentences involving durative verbs elicited lower rating scores and longer RTs in iterativity judgements than their counterparts involving punctual verbs. These findings suggest that the comprehension of underspecified iterative meaning is modulated both by cross-linguistically similar parameters and by language-specific systems of temporal reference, with respect to which MC exhibits a typological difference in processing profiles. Overall, the patterns are consistent with the Context-Dependence approach to semantic underspecification: comprehenders compute the ultimate reading (iterative versus continuous) by taking both sentential and extra-sentential information into consideration in a given context.
Yu-Ying Chuang and R. Harald Baayen
Naive discriminative learning (NDL) and linear discriminative learning (LDL) are simple computational algorithms for lexical learning and lexical processing. Both NDL and LDL assume that learning is discriminative, driven by prediction error, and that it is this error that calibrates the association strength between input and output representations. Both words’ forms and their meanings are represented by numeric vectors, and mappings between forms and meanings are set up. For comprehension, form vectors predict meaning vectors. For production, meaning vectors map onto form vectors. These mappings can be learned incrementally, approximating how children learn the words of their language. Alternatively, optimal mappings representing the end state of learning can be estimated. The NDL and LDL algorithms are incorporated in a computational theory of the mental lexicon, the ‘discriminative lexicon’. The model shows good performance both with respect to production and comprehension accuracy, and for predicting aspects of lexical processing, including morphological processing, across a wide range of experiments. Since, mathematically, NDL and LDL implement multivariate multiple regression, the ‘discriminative lexicon’ provides a cognitively motivated statistical modeling approach to lexical processing.
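The core of LDL comprehension and production—linear mappings between form vectors and meaning vectors, estimated as multivariate multiple regression—can be sketched in a few lines. The toy lexicon, cue coding, and semantic dimensions below are illustrative assumptions, not the representations used in published NDL/LDL work; the sketch estimates the optimal end-state mappings rather than learning them incrementally.

```python
import numpy as np

# Toy lexicon of three words (hypothetical cue and meaning codings).
words = ["cat", "cats", "dog"]

# C: form (cue) matrix, one row per word, columns = hypothetical form cues.
C = np.array([
    [1.0, 1.0, 0.0, 0.0],   # cat
    [1.0, 1.0, 1.0, 0.0],   # cats (shares cues with 'cat', adds one)
    [0.0, 0.0, 0.0, 1.0],   # dog
])

# S: semantic matrix, one row per word, columns = hypothetical dimensions
# (roughly: CAT, PLURAL, DOG).
S = np.array([
    [1.0, 0.0, 0.0],   # cat
    [1.0, 1.0, 0.0],   # cats
    [0.0, 0.0, 1.0],   # dog
])

# Comprehension: the optimal form-to-meaning mapping F solves C @ F ≈ S.
# Least-squares estimation here is exactly multivariate multiple regression.
F, *_ = np.linalg.lstsq(C, S, rcond=None)
S_hat = C @ F   # predicted meanings for the training forms

# Production runs the other way: the meaning-to-form mapping G solves S @ G ≈ C.
G, *_ = np.linalg.lstsq(S, C, rcond=None)

# Comprehension counts as accurate when each predicted semantic vector
# lies closest to its own target meaning.
for i, w in enumerate(words):
    dists = np.linalg.norm(S - S_hat[i], axis=1)
    print(w, "->", words[np.argmin(dists)])
```

Incremental, error-driven learning—the route the abstract describes for approximating child word learning—would instead update F after each learning event, with the regression solution above corresponding to the end state of that process.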
Knut Tarald Taraldsen
This article presents different types of generative grammar that can be used as models of natural languages, focusing on a small subset of all the systems that have been devised. The central idea behind generative grammar may be rendered in the words of Richard Montague: “I reject the contention that an important theoretical difference exists between formal and natural languages” (“Universal Grammar,” Theoria, 36, 373–398).