1–10 of 28 Results for: Cognitive Science

Article

Philip Rubin

Arthur Seymour Abramson (1925–2017) was an American linguist who was prominent in the international experimental phonetics research community. He was best known for his pioneering work, with Leigh Lisker, on voice onset time (VOT), and for his many years spent studying tone and voice quality in languages such as Thai. Born and raised in Jersey City, New Jersey, Abramson served several years in the Army during World War II. Upon his return to civilian life, he attended Columbia University (BA, 1950; PhD, 1960). There he met Franklin Cooper, an adjunct who taught acoustic phonetics while also working for Haskins Laboratories. Abramson started working on a part-time basis at Haskins and remained affiliated with the institution until his death. For his doctoral dissertation (1962), he studied the vowels and tones of the Thai language, which would sit at the heart of his research and travels for the rest of his life. He would expand his investigations to include various languages and dialects, such as Pattani Malay and the Kuai dialect of Suai, a Mon-Khmer language.

Abramson began his collaboration with University of Pennsylvania linguist Leigh Lisker at Haskins Laboratories in the 1960s. Using their VOT technique, a sensitive measure of the articulatory timing between an occlusion in the vocal tract and the beginning of phonation (characterized by the onset of vibration of the vocal folds), they studied the voicing distinctions of various languages. Their long-standing collaboration continued until Lisker’s death in 2006. Abramson and colleagues often made innovative use of state-of-the-art tools and technologies in their work, including transillumination of the larynx in running speech, X-ray movies of speakers of several languages and dialects, electroglottography, and articulatory speech synthesis.
Abramson’s career was also notable for the academic and scientific service roles that he assumed, including membership on the council of the International Phonetic Association (IPA), and as a coordinator of the effort to revise the International Phonetic Alphabet at the IPA’s 1989 Kiel Convention. He was also editor of the journal Language and Speech, and took on leadership roles at the Linguistic Society of America and the Acoustical Society of America. He was the founding Chair of the Linguistics Department at the University of Connecticut, which became a hotbed for research in experimental phonetics in the 1970s and 1980s because of its many affiliations with Haskins Laboratories. He also served for many years as a board member at Haskins, and Secretary of both the Board and the Haskins Corporation, where he was a friend and mentor to many.
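As a concrete illustration (not drawn from the article itself), VOT is simply the signed interval between the release of the occlusion and the onset of vocal-fold vibration. Assuming hypothetical annotation times taken from a waveform, a minimal helper might look like this:

```python
def voice_onset_time_ms(burst_release_s, voicing_onset_s):
    """Voice onset time in milliseconds.

    Positive values: voicing begins after the release (long-lag,
    e.g., aspirated stops). Negative values: prevoicing, i.e.,
    voicing begins before the release, as in Thai prevoiced stops.
    Both landmark times are hypothetical hand annotations in seconds.
    """
    return (voicing_onset_s - burst_release_s) * 1000.0

# Long-lag stop: voicing lags the burst by about 60 ms.
print(round(voice_onset_time_ms(0.100, 0.160)))  # 60
# Prevoiced stop: voicing precedes the burst by about 80 ms.
print(round(voice_onset_time_ms(0.100, 0.020)))  # -80
```

The sign convention is what makes the measure sensitive across languages: voicing categories that English keeps apart with short-lag versus long-lag values, Thai distinguishes three ways, adding a prevoiced (negative-VOT) series.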

Article

Words are the backbone of language activity. An average 20-year-old native speaker of English will have a vocabulary of about 42,000 words. These words are connected with one another within the larger network of lexical knowledge that is termed the mental lexicon. The metaphor of a mental lexicon has played a central role in the development of theories of language and mind and has provided an intellectual meeting ground for psychologists, neurolinguists, and psycholinguists. Research on the mental lexicon has shown that lexical knowledge is not static. New words are acquired throughout the life span, creating very large increases in the richness of connectivity within the lexical system and changing the system as a whole. Because most people in the world speak more than one language, the default mental lexicon may be a multilingual one. Such a mental lexicon differs substantially from the lexicon of an individual language and would lead to the creation of new integrated lexical systems due to the pressure on the system to organize and access lexical knowledge in a homogeneous manner. The mental lexicon contains both word knowledge and morphological knowledge. There is also evidence that it contains multiword strings such as idioms and lexical bundles. This speaks in support of a nonrestrictive “big tent” view of units of representation within the mental lexicon. Changes in research on lexical representations in language processing have emphasized lexical action and the role of learning. Although the metaphor of words as distinct representations within a lexical store has served to advance knowledge, it is more likely that words are best seen as networks of activity that are formed and affected by experience and learning throughout the life span.

Article

Michael Ramscar

Healthy aging is associated with many cognitive, linguistic, and behavioral changes. For example, adults’ reaction times slow on many tasks as they grow older, while their memories appear to fade, especially for apparently basic linguistic information such as other people’s names. These changes have traditionally been thought to reflect declines in the processing power of human minds and brains as they age. However, from the perspective of the information-processing paradigm that dominates the study of mind, the question of whether cognitive processing capacities actually decline across the life span can only be scientifically answered in relation to functional models of the information processes that are presumed to be involved in cognition. Consider, for example, the problem of recalling someone’s name. We are usually reminded of the names of friends on a regular basis, and this makes us good at remembering them. However, as we move through life, we inevitably learn more names. Sometimes we hear these new names only once. As we learn each new name, the average exposure we will have had to any individual name we know is likely to decline, while the number of different names we know is likely to increase. This in turn is likely to make the task of recalling a particular name more complex. One consequence of this is as follows: If Mary can only recall names with 95% accuracy at age 60—when she knows 900 names—does she necessarily have a worse memory than she did at age 16, when she could recall any of only 90 names with 98% accuracy? Answering the question of whether Mary’s memory for names has actually declined (or improved even) will require some form of quantification of Mary’s knowledge of names at any given point in her life and the definition of a quantitative model that predicts expected recall performance for a given amount of name knowledge, as well as an empirical measure of the accuracy of the model across a wide range of circumstances.
Until the early 21st century, the study of cognition and aging was dominated by approaches that failed to meet these requirements. Researchers simply established that Mary’s name recall was less accurate at a later age than it was at an earlier one, and took this as evidence that Mary’s memory processes had declined in some significant way. However, as computational approaches to studying cognitive—and especially psycholinguistic—processes and processing became more widespread, a number of matters related to the development of processing across the life span became apparent: First, the complexity involved in establishing whether or not Mary’s name recall did indeed become less accurate with age began to be better understood. Second, when the impact of learning on processing was controlled for, it became apparent that at least some processes showed no signs of decline at all in healthy aging. Third, the degree to which the environment—both in terms of its structure and its susceptibility to change—further complicates our understanding of life-span cognitive performance also began to be better understood. These new findings not only promise to change our understanding of healthy cognitive aging, but also seem likely to alter our conceptions of cognition and language themselves.
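The reasoning about Mary can be made concrete with a toy retrieval model in the spirit of the Luce choice rule. All numbers and the functional form below are hypothetical, chosen only to illustrate the point: the target name’s retrieval strength grows (sub-linearly) with exposure, but every other known name adds a little competition.

```python
import math

def toy_recall_accuracy(n_names, exposures_per_name, interference=0.01):
    """Toy Luce-choice model (illustrative only, not fitted to data):
    probability of retrieving the target name from among n_names
    stored names. Strength grows with the log of exposure count;
    each other known name contributes a fixed amount of competition."""
    target_strength = math.log1p(exposures_per_name)
    competition = interference * (n_names - 1)
    return target_strength / (target_strength + competition)

# Mary at 16: few names, each encountered often.
young = toy_recall_accuracy(n_names=90, exposures_per_name=200)
# Mary at 60: ten times as many names, each encountered less often.
old = toy_recall_accuracy(n_names=900, exposures_per_name=50)
print(round(young, 2), round(old, 2))
```

Measured accuracy falls even though the retrieval mechanism itself is identical in both cases; only the learning history and the size of the name inventory differ. This is exactly why raw accuracy differences across ages cannot, by themselves, demonstrate a decline in processing capacity.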

Article

Over the past decades, psycholinguistic aspects of word processing have made a considerable impact on views of language theory and language architecture. In the quest for the principles governing the ways human speakers perceive, store, access, and produce words, inflection issues have provided a challenging realm of scientific inquiry, and a battlefield for radically opposing views. It is somewhat ironic that some of the most influential cognitive models of inflection have long been based on evidence from an inflectionally impoverished language like English, where the notions of inflectional regularity, (de)composability, predictability, phonological complexity, and default productivity appear to be mutually implied. An analysis of more “complex” inflection systems such as those of Romance languages shows that this mutual implication is not a universal property of inflection, but a contingency of poorly contrastive, nearly isolating inflection systems. Far from presenting minor faults in a solid theoretical edifice, Romance evidence appears to call into question the division of labor between rules and exceptions, the on-line processing vs. long-term memory dichotomy, and the distinction between morphological processes and lexical representations. A dynamic, learning-based view of inflection is more compatible with these data, whereby morphological structure is an emergent property of the ways inflected forms are processed and stored, grounded in universal principles of lexical self-organization and their neuro-functional correlates.

Article

Cognitive semantics (CS) is an approach to the study of linguistic meaning. It is based on the assumption that the human linguistic capacity is part of our cognitive abilities, and that language in general and meaning in particular can therefore be better understood by taking into account the cognitive mechanisms that control the conceptual and perceptual processing of extra-linguistic reality. Issues central to CS are (a) the notion of prototype and its role in the description of language, (b) the nature of linguistic meaning, and (c) the functioning of different types of semantic relations. The question concerning the nature of meaning is particularly controversial between CS on the one hand and structuralist and generative approaches on the other: is linguistic meaning conceptual, that is, part of our encyclopedic knowledge (as is claimed by CS), or is it autonomous, that is, based on abstract and language-specific features? According to CS, the most important types of semantic relations are metaphor, metonymy, and different kinds of taxonomic relations, which, in turn, can be further broken down into more basic associative relations such as similarity, contiguity, and contrast. These play a central role not only in polysemy and word formation, that is, in the lexicon, but also in the grammar.

Article

Marianne Pouplier

One of the most fundamental problems in research on spoken language is to understand how the categorical, systemic knowledge that speakers have in the form of a phonological grammar maps onto the continuous, high-dimensional physical speech act that transmits the linguistic message. The invariant units of phonological analysis have no invariant analogue in the signal—any given phoneme can manifest itself in many possible variants, depending on context, speech rate, utterance position and the like, and the acoustic cues for a given phoneme are spread out over time across multiple linguistic units. Speakers and listeners are highly knowledgeable about the lawfully structured variation in the signal and they skillfully exploit articulatory and acoustic trading relations when speaking and perceiving. For the scientific description of spoken language understanding this association between abstract, discrete categories and continuous speech dynamics remains a formidable challenge. Articulatory Phonology and the associated Task Dynamic model present one particular proposal on how to step up to this challenge using the mathematics of dynamical systems with the central insight being that spoken language is fundamentally based on the production and perception of linguistically defined patterns of motion. In Articulatory Phonology, primitive units of phonological representation are called gestures. Gestures are defined based on linear second order differential equations, giving them inherent spatial and temporal specifications. Gestures control the vocal tract at a macroscopic level, harnessing the many degrees of freedom in the vocal tract into low-dimensional control units. Phonology, in this model, thus directly governs the spatial and temporal orchestration of vocal tract actions.
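A minimal sketch of such a gesture can make the dynamical claim concrete. Assuming illustrative (not empirically fitted) parameter values and simple semi-implicit Euler integration, a gesture is a critically damped second-order system, m·ẍ + b·ẋ + k·(x − target) = 0, driven toward its target without overshoot:

```python
import math

def simulate_gesture(x0, target, k=100.0, duration=0.4, dt=0.001):
    """Integrate a critically damped second-order system
    (mass m = 1, damping b = 2*sqrt(k)) from rest at x0 toward
    target. Stiffness k and duration are illustrative values;
    returns the sampled trajectory of the tract variable."""
    b = 2.0 * math.sqrt(k)          # critical damping: no oscillation
    x, v = x0, 0.0
    trajectory = [x]
    for _ in range(int(duration / dt)):
        a = -b * v - k * (x - target)   # acceleration from the ODE
        v += a * dt                     # semi-implicit Euler step
        x += v * dt
        trajectory.append(x)
    return trajectory

# E.g., a hypothetical lip-closing gesture: aperture 10 mm -> 0 mm.
traj = simulate_gesture(x0=10.0, target=0.0)
```

Because the system is critically damped, the trajectory approaches the target smoothly and monotonically, which is why such equations yield inherently speech-like, goal-directed movement patterns from a low-dimensional specification (initial state, target, stiffness).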

Article

Tyler Peterson

Broadly defined, mirativity is the linguistic term used to describe utterances by which speakers express their surprise at some unexpected state, event, or activity they experience. As an illustration, imagine the following scenario: rain is an infrequent occurrence in the Arizona desert, and the news forecast predicts another typically long stretch of sunny weather. Wanda and her colleague are planning a hike in the mountains that afternoon. Aware of this prediction, and being familiar with the typical desert climate, they step outside into the pouring rain. This elicits Wanda’s surprise: based on the weather forecast and coupled with her background knowledge, the rain is an unexpected event. As such, Wanda has a number of linguistic options for expressing her surprise to her colleague, for example:

Wow, it’s raining!
It’s raining!
No way, it’s raining?(!)
I can’t believe it’s raining(!)
I see it’s raining(!)
It looks like it’s raining(!)
Look at all this rain(!)

These utterances provide a sample of the diverse lexical and grammatical strategies a speaker of English can deploy in order to express surprise at an unexpected event, including expressive particles such as wow and no way, surprised intonational contours (orthographically represented by the exclamation mark ‘!’), rhetorical questions, expressions of disbelief, and evidential verbs such as look and see. When we look across the world’s languages we find that there is considerable intra- and cross-linguistic diversity in how mirative meanings are linguistically expressed. The examples above show how English lacks specific morphology dedicated to mirativity; however, the focus of this article is on the role morphology plays in the expression of mirative meanings.

Article

Petar Milin and James P. Blevins

Studies of the structure and function of paradigms are as old as the Western grammatical tradition. The central role accorded to paradigms in traditional approaches largely reflects the fact that paradigms exhibit systematic patterns of interdependence that facilitate processes of analogical generalization. The recent resurgence of interest in word-based models of morphological processing and morphological structure more generally has provoked a renewed interest in paradigmatic dimensions of linguistic structure. Current methods for operationalizing paradigmatic relations and determining the behavioral correlates of these relations extend paradigmatic models beyond their traditional boundaries. The integrated perspective that emerges from this work is one in which variation at the level of individual words is not meaningful in isolation, but rather guides the association of words to paradigmatic contexts that play a role in their interpretation.

Article

Daniel Schmidtke and Victor Kuperman

Lexical representations in an individual mind are not given to direct scrutiny. Thus, in their theorizing of mental representations, researchers must rely on observable and measurable outcomes of language processing, that is, perception, production, storage, access, and retrieval of lexical information. Morphological research pursues these questions utilizing the full arsenal of analytical tools and experimental techniques that are at the disposal of psycholinguistics. This article outlines the most popular approaches, and aims to provide, for each technique, a brief overview of its procedure in experimental practice. Additionally, the article describes the link between the processing effect(s) that the tool can elicit and the representational phenomena that it may shed light on. The article discusses methods of morphological research in the two major human linguistic faculties—production and comprehension—and provides a separate treatment of spoken, written, and sign language.

Article

Laurie Beth Feldman and Judith F. Kroll

We summarize findings from across a range of methods, including behavioral measures of overall processing speed and accuracy, electrophysiological indices that tap into the early time course of language processing, and neural measures using structural and functional imaging. We argue that traditional claims about rigid constraints on the ability of late bilinguals to exploit the meaning and form of the morphology and morphosyntax in a second language should be revised so as to move away from all-or-none command of structures motivated by strict dichotomies among linguistic categories of morphology. We describe how the dynamics of morphological processing in neither monolingual nor bilingual speakers are easily characterized in terms of the potential to decompose words into their constituent morphemes, and how morphosyntactic processing is not easily characterized in terms of categories of structures that are learnable and those that are unlearnable by bilingual and nonnative speakers. Instead, we emphasize the high degree of variability across individuals and plasticity within individuals in their ability to successfully learn and use even subtle aspects of a second language. Further, both of the bilingual’s two languages become active when even one language is engaged, and parallel activation has consequences that shape both languages; thus their influence is not unidirectional, as was traditionally assumed. We briefly discuss the nature of possible constraints and directions for future research.