Speakers of most languages comprehend and produce a very large number of morphologically complex words. But how? There is a tension between two facts. First, speakers can comprehend and produce novel words, which they have never experienced and therefore could not have stored in memory. For example, English speakers readily generate the plural form of wug. These novel words often look like they are composed of recognizable parts, such as the plural marker -s. Second, speakers also comprehend and produce many words that cannot be straightforwardly decomposed into parts, such as bought or brunch. Morphology is the paradigm example of a quasi-regular domain, full of only partially productive, exception-ridden patterns, many of which nonetheless appear to be learned and used by speakers and listeners. Quasi-regularity has made morphology a fruitful testing ground for alternative views of how the mind works. Every major approach to the nature of the mind has attempted to tackle morphological processing. These approaches range from symbolic rule-based systems to connectionist networks of simple neuron-like processing units to clouds of richly specified holistic exemplars. They vary in their assumptions about the nature of mental representations, particularly those that make up the long-term memory of language. They also vary in the computations the mind is thought to perform, including those performed by a speaker attempting to produce or comprehend a word. In challenging all major approaches to cognition with its intricate patterns, morphology continues to provide a valuable window onto the nature of the mind.
Computational psycholinguistics has a long history of investigating and modeling morphological phenomena. Several computational models have been developed to deal with the processing and production of morphologically complex forms and with the relation between linguistic morphology and psychological word representations. Historically, most of this work has focused on modeling the production of inflected word forms, leading to the development of models based on connectionist principles as well as other data-driven models such as Memory-Based Language Processing (MBLP), Analogical Modeling of Language (AM), and Minimal Generalization Learning (MGL). In the context of inflectional morphology, these computational approaches have played an important role in the debate between single- and dual-mechanism theories of cognition. Taking a different angle, computational models based on distributional semantics have been proposed to account for several phenomena in morphological processing and composition. Finally, although several computational models of reading have been developed in psycholinguistics, none has satisfactorily addressed the recognition and reading aloud of morphologically complex forms.
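To make the rule-learning flavor of these data-driven models concrete, the following is a toy sketch loosely inspired by the core idea behind Minimal Generalization Learning: extract a structural change (A → B in some context) from each base/inflected pair, then generalize two rules with the same change by keeping the context material they share and collapsing the rest into a variable. This is an illustrative simplification over orthographic strings, not the actual MGL implementation, which operates over phonological feature representations.

```python
def structural_change(base, past):
    """Split a (base, past) pair into left context, change A -> B, right context."""
    i = 0
    while i < min(len(base), len(past)) and base[i] == past[i]:
        i += 1
    j = 0
    while j < min(len(base), len(past)) - i and base[len(base)-1-j] == past[len(past)-1-j]:
        j += 1
    return (base[:i],                 # left context
            base[i:len(base)-j],      # A: material replaced
            past[i:len(past)-j],      # B: replacement
            base[len(base)-j:])       # right context

def common_suffix(a, b):
    k = 0
    while k < min(len(a), len(b)) and a[-1-k] == b[-1-k]:
        k += 1
    return a[len(a)-k:] if k else ""

def common_prefix(a, b):
    k = 0
    while k < min(len(a), len(b)) and a[k] == b[k]:
        k += 1
    return a[:k]

def minimal_generalization(r1, r2):
    """Generalize two rules sharing the same change; 'X' marks collapsed residue."""
    l1, a1, b1, t1 = r1
    l2, a2, b2, t2 = r2
    if (a1, b1) != (a2, b2):
        return None                   # rules with different changes do not combine
    left = common_suffix(l1, l2)
    if left != l1 or left != l2:
        left = "X" + left
    right = common_prefix(t1, t2)
    if right != t1 or right != t2:
        right = right + "X"
    return (left, a1, b1, right)

rules = [structural_change(b, p) for b, p in [("sing", "sang"), ("ring", "rang")]]
gen = minimal_generalization(rules[0], rules[1])
print(gen)  # -> ('X', 'i', 'a', 'ng'), i.e. i -> a / X __ ng
```

From sing→sang and ring→rang the sketch induces the rule "i → a before ng", with the disagreeing left contexts (s vs. r) collapsed into a variable, which is the basic generalization step the full model applies iteratively over a whole lexicon.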
Connectionism is an important theoretical framework for the study of human cognition and behavior. Also known as Parallel Distributed Processing (PDP) or Artificial Neural Networks (ANN), connectionism holds that the learning, representation, and processing of information in the mind are parallel, distributed, and interactive in nature. It argues that human cognition emerges from large networks of interactive processing units operating simultaneously. Inspired by findings from neuroscience and artificial intelligence, connectionism is a powerful computational tool, and it has had a profound impact on many areas of research, including linguistics. Since its inception, many connectionist models have been developed to account for a wide range of important linguistic phenomena observed in monolingual research, such as speech perception, speech production, semantic representation, and early lexical development in children. Recently, the application of connectionism to bilingual research has also gathered momentum. Connectionist models are often precise in the specification of modeling parameters and flexible in the manipulation of relevant variables, and they can therefore provide significant advantages in testing the mechanisms underlying language processes.
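The basic character of such models can be sketched with a minimal single-layer pattern associator, broadly in the spirit of early connectionist accounts of inflection: input units code phonological features of a stem's final segment, output units code the plural allomorph, and connection weights are adjusted by an error-driven (delta-rule-style) learning procedure. The two-feature coding and the class labels below are deliberate simplifications introduced for illustration, not a claim about any published model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Input units: [voiced, sibilant] features of the stem-final segment
# (a simplified, assumed feature coding for illustration only).
X = np.array([[0., 0.],   # e.g. "cat"  -> /s/
              [1., 0.],   # e.g. "dog"  -> /z/
              [0., 1.],   # e.g. "bus"  -> /iz/
              [1., 1.]])  # e.g. "maze" -> /iz/
targets = np.array([0, 1, 2, 2])   # allomorph classes: s, z, iz
Y = np.eye(3)[targets]             # one-hot teacher signal

W = rng.normal(0, 0.1, (2, 3))     # connection weights, learned from data
b = np.zeros(3)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

for _ in range(2000):              # error-driven weight updates
    p = softmax(X @ W + b)         # parallel activation of all output units
    grad = p - Y                   # discrepancy between output and teacher
    W -= 0.5 * X.T @ grad
    b -= 0.5 * grad.sum(axis=0)

# A novel wug-like item with a voiced, non-sibilant final segment:
wug = np.array([[1., 0.]])
print(["s", "z", "iz"][softmax(wug @ W + b).argmax()])  # -> z
```

The point of the sketch is that the mapping is carried entirely by graded connection weights shaped by experience: the trained network generalizes to a novel item because that item's features resemble those of trained items, with no explicit rule stored anywhere.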