Keywords: language processing

Article

The distinction between representations and processes is central to most models in the cognitive science of language. Linguistic theory informs the types of representations assumed, and these representations are taken to be the targets of second language acquisition. Epistemologically, such representations are often characterized as knowledge, or knowledge-that. Techniques such as grammaticality judgment tasks are paradigmatic as we seek to gain insight into what a learner’s grammar looks like. Learners behave as if certain phonological, morphological, or syntactic strings (which may or may not be target-like) were well-formed. It is the task of the researcher to understand the nature of the knowledge that governs those well-formedness beliefs. Traditional accounts of processing, on the other hand, look to the real-time use of language, either in production or perception, and invoke discussions of skill or knowledge-how. A range of experimental psycholinguistic techniques has been used to assess these skills: self-paced reading, eye-tracking, ERPs, priming, lexical decision, AXB discrimination, and the like. Such online measures can show us how we “do” language in activities such as production or comprehension. There has long been a connection between linguistic theory and theories of processing, as evidenced by Berwick and Weinberg’s The Grammatical Basis of Linguistic Performance. The task of the parser is to assign abstract structure to a phonological, morphological, or syntactic string; structure that does not come directly labeled in the acoustic input. Processing studies of phenomena such as garden path sentences have revealed that grammaticality and processability are distinct constructs. In some models, however, the line between grammar and processing is less sharp. Phillips says that “parsing is grammar,” while O’Grady builds an emergentist theory with no grammar, only processing. Bayesian models of acquisition, and indeed of knowledge, assume that the grammars we set up are governed by a principle of entropy, which governs other aspects of human behavior; knowledge and skill are combined. Exemplar models view the processing of input as the storing of all the phonetic detail in the environment rather than of abstract categories; the categories emerge via a process of comparing exemplars. Linguistic theory thus helps us to understand both how input is processed to acquire new L2 representations and how those representations are accessed in real time.
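
To make the exemplar view concrete, the sketch below stores tokens with full phonetic detail and categorizes new tokens by similarity to stored exemplars rather than by abstract categories. The two-dimensional features and vowel labels are invented for illustration, not drawn from any specific exemplar model.

```python
import numpy as np

# Minimal exemplar-model sketch: every incoming token is stored with full
# phonetic detail; a new token is categorized by comparing it to all stored
# exemplars rather than to an abstract category prototype.
# The two features (meant to suggest F1/F2 formant values) are hypothetical.

memory = []  # list of (feature_vector, label) pairs: the exemplar cloud

def store(features, label):
    """Store a token verbatim, with its full phonetic detail."""
    memory.append((np.asarray(features, dtype=float), label))

def categorize(features, k=5):
    """Label a new token by majority vote over the k nearest exemplars."""
    features = np.asarray(features, dtype=float)
    nearest = sorted(memory, key=lambda ex: np.linalg.norm(ex[0] - features))
    labels = [label for _, label in nearest[:k]]
    return max(set(labels), key=labels.count)

# Toy usage: two vowel categories in a 2-D acoustic space (values invented).
rng = np.random.default_rng(0)
for _ in range(50):
    store(rng.normal([300, 2300], 50), "i")   # high front vowel cloud
    store(rng.normal([700, 1200], 50), "a")   # low vowel cloud
print(categorize([320, 2250]))  # -> "i": the category emerges by comparison
```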

Article

Michael Ramscar

Healthy aging is associated with many cognitive, linguistic, and behavioral changes. For example, adults’ reaction times slow on many tasks as they grow older, while their memories appear to fade, especially for apparently basic linguistic information such as other people’s names. These changes have traditionally been thought to reflect declines in the processing power of human minds and brains as they age. However, from the perspective of the information-processing paradigm that dominates the study of mind, the question of whether cognitive processing capacities actually decline across the life span can only be scientifically answered in relation to functional models of the information processes that are presumed to be involved in cognition. Consider, for example, the problem of recalling someone’s name. We are usually reminded of the names of friends on a regular basis, and this makes us good at remembering them. However, as we move through life, we inevitably learn more names. Sometimes we hear these new names only once. As we learn each new name, the average exposure we will have had to any individual name we know is likely to decline, while the number of different names we know is likely to increase. This in turn is likely to make the task of recalling a particular name more complex. One consequence of this is as follows: If Mary can only recall names with 95% accuracy at age 60—when she knows 900 names—does she necessarily have a worse memory than she did at age 16, when she could recall any of only 90 names with 98% accuracy? Answering the question of whether Mary’s memory for names has actually declined (or even improved) will require some form of quantification of Mary’s knowledge of names at any given point in her life, the definition of a quantitative model that predicts expected recall performance for a given amount of name knowledge, and an empirical measure of the accuracy of the model across a wide range of circumstances. Until the early 21st century, the study of cognition and aging was dominated by approaches that failed to meet these requirements. Researchers simply established that Mary’s name recall was less accurate at a later age than it was at an earlier one and took this as evidence that Mary’s memory processes had declined in some significant way. However, as computational approaches to studying cognitive—and especially psycholinguistic—processes and processing became more widespread, a number of matters related to the development of processing across the life span began to become apparent. First, the complexity involved in establishing whether or not Mary’s name recall did indeed become less accurate with age began to be better understood. Second, when the impact of learning on processing was controlled for, it became apparent that at least some processes showed no signs of decline at all in healthy aging. Third, the degree to which the environment—both in terms of its structure and its susceptibility to change—further complicates our understanding of life-span cognitive performance also came into sharper focus. These new findings not only promise to change our understanding of healthy cognitive aging but also seem likely to alter our conceptions of cognition and language themselves.
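
A deliberately crude sketch of the kind of quantitative model this argument calls for: if recalling one name among N known names is treated as resolving log2(N) bits, and errors are (crudely) scored as zero information, Mary’s retrieval at 60 delivers more information per attempt despite its lower raw accuracy. The scoring rule is an assumption for illustration, not Ramscar’s actual model.

```python
import math

def retrieval_bits(accuracy, n_known):
    """Expected information per recall attempt, in bits, under a toy model:
    a correct retrieval of one name among n_known alternatives resolves
    log2(n_known) bits; errors are scored as zero information."""
    return accuracy * math.log2(n_known)

# Mary at 16: 98% accuracy over 90 known names.
# Mary at 60: 95% accuracy over 900 known names.
print(f"{retrieval_bits(0.98, 90):.2f} bits per attempt at 16")   # ~6.36
print(f"{retrieval_bits(0.95, 900):.2f} bits per attempt at 60")  # ~9.32
# Raw accuracy drops, yet the harder 900-name task yields more information
# per attempt, so lower accuracy alone does not establish a declining memory.
```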

Article

A computational learner needs three things: data to learn from, a class of representations to acquire, and a way to get from one to the other. Language acquisition is a very particular learning setting that can be defined in terms of the input (the child’s early linguistic experience) and the output (a grammar capable of generating a language very similar to the input). The input is infamously impoverished. In morphology, the vast majority of potential forms are never attested in the input, and those that are attested follow an extremely skewed frequency distribution. Learners nevertheless manage to acquire most details of their native morphologies after only a few years of input. That said, acquisition is neither instantaneous nor error-free. Children do make mistakes, and they do so in predictable ways that provide insights into their grammars and learning processes. The most elucidating computational model of morphology learning, from the perspective of a linguist, is one that learns morphology like a child does, that is, on child-like input and along a child-like developmental path. This article focuses on clarifying those aspects of morphology acquisition that should go into such a model. Section 1 describes the input, with a focus on child-directed speech corpora and input sparsity. Section 2 discusses representations, with foci on productivity, developmental paths, and formal learnability. Section 3 surveys the range of learning tasks that guide research in computational linguistics and NLP, with special focus on how they relate to the acquisition setting. The conclusion in Section 4 presents a summary of morphology acquisition as a learning problem, with Table 4 highlighting the key takeaways of this article.
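
A minimal simulation of this input sparsity, with invented corpus size, paradigm dimensions, and Zipf exponent: sampling lemma and paradigm-cell combinations under skewed frequencies leaves most possible inflected forms unattested, even in a large sample.

```python
import numpy as np

# Toy illustration of input sparsity (all sizes and the Zipf exponent are
# hypothetical): lemmas and paradigm cells are sampled independently with
# Zipfian frequencies, and we count how many lemma+cell combinations a
# learner never encounters in a finite sample of child-directed speech.

rng = np.random.default_rng(1)
n_lemmas, n_cells, n_tokens = 1000, 10, 100_000

def zipf_probs(n, s=1.0):
    """Probability of rank r proportional to r**-s (Zipf's law)."""
    ranks = np.arange(1, n + 1)
    p = ranks ** -s
    return p / p.sum()

lemmas = rng.choice(n_lemmas, size=n_tokens, p=zipf_probs(n_lemmas))
cells = rng.choice(n_cells, size=n_tokens, p=zipf_probs(n_cells))
attested = set(zip(lemmas.tolist(), cells.tolist()))

print(f"{1 - len(attested) / (n_lemmas * n_cells):.1%} of forms never attested")
# Even with 100,000 tokens, a large share of the possible paradigm is never
# seen, yet the learner must generalize the full system from this skewed sample.
```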

Article

Connectionism is an important theoretical framework for the study of human cognition and behavior. Also known as Parallel Distributed Processing (PDP) or Artificial Neural Networks (ANN), connectionism holds that learning, representation, and processing of information in the mind are parallel, distributed, and interactive in nature. It argues that human cognition emerges as the outcome of large networks of interactive processing units operating simultaneously. Inspired by findings from neuroscience and artificial intelligence, connectionism is a powerful computational tool, and it has had a profound impact on many areas of research, including linguistics. Since the beginning of connectionism, many connectionist models have been developed to account for a wide range of important linguistic phenomena observed in monolingual research, such as speech perception, speech production, semantic representation, and early lexical development in children. Recently, the application of connectionism to bilingual research has also gathered momentum. Connectionist models are often precise in the specification of modeling parameters and flexible in the manipulation of variables relevant to theoretical questions, and can therefore provide significant advantages in testing the mechanisms underlying language processes.
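
As a minimal illustration of these ideas, the sketch below trains a tiny two-layer network on XOR with error-driven weight updates. The task, layer sizes, and learning rate are illustrative choices, not any specific published connectionist model.

```python
import numpy as np

# Minimal connectionist sketch: distributed input patterns are mapped to
# output patterns by layers of simple units operating in parallel, and the
# connection weights are adjusted by error-driven learning (plain
# backpropagation on XOR; sizes and task are purely illustrative).

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 4))   # input -> hidden weights
W2 = rng.normal(0, 1, (4, 1))   # hidden -> output weights
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(X @ W1)          # parallel activation of hidden units
    out = sigmoid(h @ W2)
    err = y - out                # prediction error drives all learning
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 += 0.5 * h.T @ d_out      # distributed weight updates
    W1 += 0.5 * X.T @ d_h

print(out.round(2).ravel())      # approaches [0, 1, 1, 0]
```

The point of the toy is that nothing in the network stores an explicit rule for XOR; the behavior emerges from many small weighted connections adjusted by error, which is the core connectionist claim about linguistic knowledge as well.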

Article

Romanian has features that distinguish it from the other Romance languages. These can be attributed to its geographical location on the periphery of the Romance area and to its having evolved independently of the other Romance languages, in contact with different languages. Until the early decades of the 19th century, loans and calques based on Slav(on)ic, Hungarian, Turkish, and Greek models influenced Romanian in several respects, including its word-formation patterns. Subsequent enrichment by means of numerous loans and calques from French, Italian, and (Neo-)Latin has been an important force in the re-Romanization and modernization of Romanian. In recent decades, English word-formation models have also exercised a strong influence. The wide range of etymological sources and their historical stratification mean that Romanian has a much richer inventory of affixes and allomorphs than other Romance languages. The possibility of combining bases and affixes that entered Romanian from different sources, at different periods, and in different registers has been exploited to create nonce formations with ironic connotations and greater expressivity. Of all the Romance languages, Romanian is arguably the most interesting for the study of the borrowing of affixes and word-formation patterns. The most important characteristics distinguishing Romanian from the other Romance languages are: the limited productivity of the V-N compounding pattern; the formation of compound numerals; the high number of prefixes, suffixes, and their allomorphs; the presence of a complex system of morphophonological alternations in suffixation; the many gender-marking suffixes; and the systematic and prevalent recourse to -re suffixation and to conversion of the supine to form action nouns, and to adjective conversion to form adverbs.

Article

Mineharu Nakayama

The field of Japanese psycholinguistics is moving rapidly in many different directions, as it spans various subfields of linguistics (e.g., phonetics/phonology, syntax, semantics, pragmatics, discourse studies). Naturally, diverse studies have reported intriguing findings that shed light on the human language mechanism. This article presents a brief overview of some notable early 21st-century studies, mainly from the language acquisition and processing perspectives. The topics are divided into sections on the sound system, the script forms, reading and writing, morpho-syntactic studies, word and sentential meanings, and pragmatics and discourse studies. Studies on special populations are also mentioned. Studies on the Japanese sound system have advanced our understanding of L1 and L2 (first and second language) acquisition and processing. For instance, there is now more evidence that infants form adult-like phonological grammar by 14 months in L1, and a dissociation between prosody and comprehension has been reported in L2. Various cognitive factors, as well as the L1, influence the L2 acquisition process. Because Japanese language users employ three script forms (hiragana, katakana, and kanji) in a single sentence, orthographic processing research reveals multiple pathways for processing information as well as the influence of memory. Adult script decoding and lexical processing have been well studied, and research data from special populations further help us to understand the vision-to-language mapping mechanism. Morpho-syntactic and semantic studies include a long debate between the nativist (generative) and statistical learning approaches to L1 acquisition. In particular, findings on inflectional morphology and quantificational scope interaction in L1 acquisition reveal the pros and cons of each approach taken alone. Investigating processing mechanisms means studying cognitive/perceptual devices. Relative clause processing has been much discussed for Japanese because Japanese has a different word order (SOV) from English (SVO), allows unpronounced pronouns and pre-verbal word permutations, and has no relative clause marking at the verbal ending (i.e., it is morphologically identical to the matrix ending). Behavioral and neurolinguistic data increasingly support incremental processing, as in SVO languages, and an expectancy-driven processor in the L1 brain. L2 processing, however, requires more study to uncover its mechanism, as the literature is scarce on both L2 English by Japanese speakers and L2 Japanese by non-Japanese speakers. Pragmatic and discourse processing is also an area that needs to be explored further. Despite the typological difference between English and Japanese, the studies cited here indicate that our acquisition and processing devices seem to adjust locally while maintaining a universal mechanism.

Article

Knut Tarald Taraldsen

This article presents different types of generative grammar that can be used as models of natural languages, focusing on a small subset of all the systems that have been devised. The central idea behind generative grammar may be rendered in the words of Richard Montague: “I reject the contention that an important theoretical difference exists between formal and natural languages” (“Universal Grammar,” Theoria, 36 [1970], 373–398).
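
A toy example of a generative grammar in this formal sense: a finite rule system that generates the strings of a language by recursive rewriting. The small context-free grammar below is invented for illustration only.

```python
import random

# A generative grammar, formally, is a finite rule system that generates a
# (possibly infinite) set of strings. This toy context-free grammar rewrites
# the start symbol S until only terminal words remain; its rules and lexicon
# are invented for illustration.

GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"], ["Det", "N", "PP"]],
    "VP":  [["V", "NP"], ["V", "NP", "PP"]],
    "PP":  [["P", "NP"]],
    "Det": [["the"], ["a"]],
    "N":   [["linguist"], ["grammar"], ["model"]],
    "V":   [["studies"], ["generates"]],
    "P":   [["of"], ["with"]],
}

def generate(symbol="S"):
    """Rewrite a symbol by randomly chosen rules until only words remain."""
    if symbol not in GRAMMAR:          # terminal: a word of the language
        return [symbol]
    expansion = random.choice(GRAMMAR[symbol])
    return [word for child in expansion for word in generate(child)]

print(" ".join(generate()))  # e.g., "the linguist studies a grammar"
```

Because NP can be rewritten via PP back into NP, even this tiny system generates unboundedly many sentences from finite means, which is the property that motivates treating natural languages with the same formal tools as formal ones.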

Article

Neurolinguistic approaches to morphology encompass the main theories of morphological representation and processing in the human mind, such as full-listing, full-parsing, and hybrid dual-route models, together with the experimental evidence acquired to support these theories, which draws on different neurolinguistic paradigms (visual and auditory priming, violation, long-lag priming, picture-word interference, etc.) and methods (electroencephalography [EEG]/event-related brain potential [ERP], functional magnetic resonance imaging [fMRI], neuropsychology, and so forth).
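
A toy sketch of the hybrid dual-route idea named above: a whole-word route (full listing) and a decomposition route (full parsing) race, and the faster route determines recognition. The lexicon, affix list, and latency formulas below are invented for illustration, not fitted to any experiment.

```python
import math

# Hybrid dual-route sketch: a complex word can be recognized either by
# whole-word lookup (full listing) or by decomposition into stem + suffix
# (full parsing); the faster route wins. All frequencies and the latency
# formulas are hypothetical.

WHOLE_WORD_FREQ = {"walked": 120, "government": 300}  # stored full forms
STEM_FREQ = {"walk": 900, "govern": 80}
SUFFIXES = ["ed", "ment"]

def whole_word_latency(word):
    freq = WHOLE_WORD_FREQ.get(word)
    return None if freq is None else 800 - 60 * math.log(freq)

def decomposition_latency(word):
    for suffix in SUFFIXES:
        stem = word[: -len(suffix)]
        if word.endswith(suffix) and stem in STEM_FREQ:
            return 900 - 60 * math.log(STEM_FREQ[stem])  # lookup + parse cost
    return None

for word in ["walked", "government"]:
    routes = {"listing": whole_word_latency(word),
              "parsing": decomposition_latency(word)}
    winner = min((t, r) for r, t in routes.items() if t is not None)
    print(word, "->", winner[1])
# "walked" (high-frequency stem) wins by parsing; "government"
# (high-frequency full form) wins by listing: the frequency-dependent route
# choice is what priming and long-lag priming paradigms are used to probe.
```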

Article

Yu-Ying Chuang and R. Harald Baayen

Naive discriminative learning (NDL) and linear discriminative learning (LDL) are simple computational algorithms for lexical learning and lexical processing. Both NDL and LDL assume that learning is discriminative, driven by prediction error, and that it is this error that calibrates the association strength between input and output representations. Both words’ forms and their meanings are represented by numeric vectors, and mappings between forms and meanings are set up. For comprehension, form vectors predict meaning vectors. For production, meaning vectors map onto form vectors. These mappings can be learned incrementally, approximating how children learn the words of their language. Alternatively, optimal mappings representing the end state of learning can be estimated. The NDL and LDL algorithms are incorporated in a computational theory of the mental lexicon, the ‘discriminative lexicon’. The model performs well with respect to both production and comprehension accuracy, and in predicting aspects of lexical processing, including morphological processing, across a wide range of experiments. Since, mathematically, NDL and LDL implement multivariate multiple regression, the ‘discriminative lexicon’ provides a cognitively motivated statistical modeling approach to lexical processing.
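
A minimal sketch of the linear mappings involved, assuming tiny random form and meaning matrices: the comprehension mapping F and production mapping G are estimated in closed form with the Moore-Penrose pseudoinverse, corresponding to the end state of learning mentioned above. Real models use high-dimensional vectors derived from actual form and meaning representations.

```python
import numpy as np

# LDL-style sketch: form vectors (rows of C) and meaning vectors (rows of S)
# are related by linear mappings estimated by multivariate multiple
# regression. Here the end-state solution is computed with the pseudoinverse;
# the tiny random matrices are invented stand-ins for real representations.

rng = np.random.default_rng(42)
n_words, form_dim, sem_dim = 6, 8, 5
C = rng.normal(size=(n_words, form_dim))  # one form vector per word
S = rng.normal(size=(n_words, sem_dim))   # one meaning vector per word

F = np.linalg.pinv(C) @ S  # comprehension: C @ F approximates S
G = np.linalg.pinv(S) @ C  # production:    S @ G approximates C

S_hat = C @ F
# Evaluate comprehension: is each predicted meaning closest to its target?
correct = sum(
    int(np.argmax(S @ s_hat) == i)          # nearest meaning by dot product
    for i, s_hat in enumerate(S_hat)
)
print(f"comprehension accuracy: {correct}/{n_words}")
```

Replacing the pseudoinverse with incremental error-driven updates (e.g., the Rescorla-Wagner rule for NDL) yields the developmental variant the abstract describes, in which the mappings are learned trial by trial rather than estimated at the end state.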