Naive discriminative learning (NDL) and linear discriminative learning (LDL) are simple computational algorithms for lexical learning and lexical processing. Both NDL and LDL assume that learning is discriminative, driven by prediction error, and that it is this error that calibrates the association strength between input and output representations. Words’ forms and their meanings are both represented by numeric vectors, and mappings are set up between forms and meanings. For comprehension, form vectors predict meaning vectors; for production, meaning vectors map onto form vectors. These mappings can be learned incrementally, approximating how children learn the words of their language. Alternatively, optimal mappings representing the end state of learning can be estimated. The NDL and LDL algorithms are incorporated in a computational theory of the mental lexicon, the ‘discriminative lexicon’. The model performs well both in production and comprehension accuracy and in predicting aspects of lexical processing, including morphological processing, across a wide range of experiments. Since NDL and LDL mathematically implement multivariate multiple regression, the ‘discriminative lexicon’ provides a cognitively motivated statistical modeling approach to lexical processing.
Yu-Ying Chuang and R. Harald Baayen
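The two learning regimes described in the abstract above can be illustrated in a few lines of NumPy. This is a minimal sketch with toy random data: the matrix names (`C` for form cues, `S` for semantic vectors) follow common NDL/LDL notation, but the dimensions, values, and learning rate are illustrative assumptions, not the representations used in published work.

```python
import numpy as np

# Toy lexicon: 4 words. In LDL, C would hold form features (e.g. letter
# trigrams) and S distributional semantic vectors; here both are small
# random matrices purely for illustration.
rng = np.random.default_rng(0)
C = rng.normal(size=(4, 6))   # cue (form) matrix: one row per word
S = rng.normal(size=(4, 5))   # semantic (meaning) matrix: one row per word

# End-state comprehension mapping F: solve C @ F ~ S by least squares,
# i.e. multivariate multiple regression.
F, *_ = np.linalg.lstsq(C, S, rcond=None)
S_hat = C @ F                 # predicted meaning vectors

# Incremental alternative (error-driven delta rule): the mapping is
# updated one word token at a time, with the update proportional to
# the prediction error, approximating learning over time.
F_inc = np.zeros((6, 5))
eta = 0.01                    # learning rate (illustrative value)
for _ in range(5000):
    i = rng.integers(4)               # sample a word token
    error = S[i] - C[i] @ F_inc       # prediction error for this token
    F_inc += eta * np.outer(C[i], error)
```

The least-squares solution corresponds to the optimal end state of learning, while the loop converges toward it incrementally; for production, the same machinery is applied with the roles of `C` and `S` reversed.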
Basilio Calderone and Vito Pirrelli
Nowadays, computer models of human language are instrumental to millions of people, who use them every day with little, if any, awareness of their existence and role. Their exponential development has had a huge impact on daily life through practical applications like machine translation or automated dialogue systems. It has also deeply affected the way we think about language as an object of scientific inquiry. Computer modeling of Romance languages has helped scholars develop new theoretical frameworks and new ways of looking at traditional approaches. In particular, computer modeling of lexical phenomena has had a profound influence on some fundamental issues in human language processing, such as the purported dichotomy between rules and exceptions, or grammar and lexicon, the inherently probabilistic nature of speakers’ perception of analogy and word-internal structure, and their ability to generalize to novel items from attested evidence. Although it is probably premature to anticipate and assess the prospects of these models, their current impact on language research can hardly be overestimated. In a few years, data-driven assessment of theoretical models is expected to play an irreplaceable role in driving progress in all branches of the language sciences, from typological and pragmatic approaches to cognitive and formal ones.
Daniel Schmidtke and Victor Kuperman
Lexical representations in an individual mind are not open to direct scrutiny. Thus, when theorizing about mental representations, researchers must rely on observable and measurable outcomes of language processing, that is, the perception, production, storage, access, and retrieval of lexical information. Morphological research pursues these questions using the full arsenal of analytical tools and experimental techniques at the disposal of psycholinguistics. This article outlines the most popular approaches and aims to provide, for each technique, a brief overview of its procedure in experimental practice. Additionally, the article describes the link between the processing effect(s) that the tool can elicit and the representational phenomena on which it may shed light. The article discusses methods of morphological research in the two major human linguistic faculties, production and comprehension, and provides a separate treatment of spoken, written, and sign language.
Claudia Marzi and Vito Pirrelli
Over the past decades, psycholinguistic aspects of word processing have made a considerable impact on views of language theory and language architecture. In the quest for the principles governing the ways human speakers perceive, store, access, and produce words, inflection has provided a challenging realm of scientific inquiry, and a battlefield for radically opposing views. It is somewhat ironic that some of the most influential cognitive models of inflection have long been based on evidence from an inflectionally impoverished language like English, where the notions of inflectional regularity, (de)composability, predictability, phonological complexity, and default productivity appear to be mutually implied. An analysis of more “complex” inflection systems such as those of the Romance languages shows that this mutual implication is not a universal property of inflection, but a contingency of poorly contrastive, nearly isolating inflection systems. Far from presenting minor faults in a solid theoretical edifice, the Romance evidence appears to call into question the division of labor between rules and exceptions, the dichotomy between on-line processing and long-term memory, and the distinction between morphological processes and lexical representations. These data are more compatible with a dynamic, learning-based view of inflection, whereby morphological structure is an emergent property of the ways inflected forms are processed and stored, grounded in universal principles of lexical self-organization and their neuro-functional correlates.