Article
Nikolai Trubetzkoy
Edwin L. Battistella
Nikolai Trubetzkoy (1890–1938) was a Russian émigré scholar who settled in Austria in 1922, serving as Head of Slavic Linguistics at the University of Vienna and participating in the Prague Linguistics Circle. Trubetzkoy wrote nearly 150 works on phonology, prosody, comparative linguistics, linguistic geography, folklore, literature, history, and political theory. His posthumously published Grundzüge der Phonologie (Principles of Phonology) is regarded as one of the key works in the science of phonology. Here Trubetzkoy, influenced by Saussurean insights, elaborated on the linguistic function of speech sounds, the role of oppositions, and markedness. He was also concerned with developing universal laws of phonological patterning, and his work involves the discussion of a wide variety of languages. The Grundzüge became the classic statement of part of Prague School linguistics, which later influenced both European and American linguistics, notably in Chomsky and Halle’s The Sound Pattern of English. Less well-known are Trubetzkoy’s historical and political works on Eurasia and Eurasianism. In Europe and Mankind, Trubetzkoy argued that Russia was not culturally part of Europe but should evolve to form its own political systems based on its geography and common legacy with the peoples of Eurasia.
Article
Speech Perception in Phonetics
Patrice Speeter Beddor
In their conversational interactions with speakers, listeners aim to understand what a speaker is saying, that is, they aim to arrive at the linguistic message, interwoven with social and other information, that the input speech signal conveys. Across the more than 60 years of speech perception research, a foundational issue has been to account for listeners’ ability to achieve stable linguistic percepts corresponding to the speaker’s intended message despite highly variable acoustic signals. Research has especially focused on acoustic variants attributable to the phonetic context in which a given phonological form occurs and on variants attributable to the particular speaker who produced the signal. These context- and speaker-dependent variants reveal the complex—albeit informationally rich—patterns that bombard listeners in their everyday interactions.
How do listeners deal with these variable acoustic patterns? Empirical studies that address this question provide clear evidence that perception is a malleable, dynamic, and active process. Findings show that listeners perceptually factor out, or compensate for, the variation due to context yet also use that same variation in deciding what a speaker has said. Similarly, listeners adjust, or normalize, for the variation introduced by speakers who differ in their anatomical and socio-indexical characteristics, yet listeners also use that socially structured variation to facilitate their linguistic judgments. Investigations of the time course of perception show that these perceptual accommodations occur rapidly, as the acoustic signal unfolds in real time. Thus, listeners closely attend to the phonetic details made available by different contexts and different speakers. The structured, lawful nature of this variation informs perception.
Speech perception changes over time not only in listeners’ moment-by-moment processing, but also across the life span of individuals as they acquire their native language(s), non-native languages, and new dialects and as they encounter other novel speech experiences. These listener-specific experiences contribute to individual differences in perceptual processing. However, even listeners from linguistically homogenous backgrounds differ in their attention to the various acoustic properties that simultaneously convey linguistically and socially meaningful information. The nature and source of listener-specific perceptual strategies serve as an important window on perceptual processing and on how that processing might contribute to sound change.
Theories of speech perception aim to explain how listeners interpret the input acoustic signal as linguistic forms. A theoretical account should specify the principles that underlie accurate, stable, flexible, and dynamic perception as achieved by different listeners in different contexts. Current theories differ in their conception of the nature of the information that listeners recover from the acoustic signal, with one fundamental distinction being whether the recovered information is gestural or auditory. Current approaches also differ in their conception of the nature of phonological representations in relation to speech perception, although there is increasing consensus that these representations are more detailed than the abstract, invariant representations of traditional formal phonology. Ongoing work in this area investigates how both abstract information and detailed acoustic information are stored and retrieved, and how best to integrate these types of information in a single theoretical model.
Article
Children’s Acquisition of Syntactic Knowledge
Rosalind Thornton
Children’s acquisition of language is an amazing feat. Children master the syntax, the sentence structure of their language, through exposure and interaction with caregivers and others but, notably, with no formal tuition. How children come to be in command of the syntax of their language has been a topic of vigorous debate since Chomsky argued against Skinner’s claim that language is ‘verbal behavior.’ Chomsky argued that knowledge of language cannot be learned through experience alone but is guided by a genetic component. This language component, known as ‘Universal Grammar,’ is composed of abstract linguistic knowledge and a computational system that is special to language. The computational mechanisms of Universal Grammar give even young children the capacity to form hierarchical syntactic representations for the sentences they hear and produce. The abstract knowledge of language guides children’s hypotheses as they interact with the language input in their environment, ensuring they progress toward the adult grammar. An alternative school of thought denies the existence of a dedicated language component, arguing that knowledge of syntax is learned entirely through interactions with speakers of the language. Such ‘usage-based’ linguistic theories assume that language learning employs the same learning mechanisms that are used by other cognitive systems. Usage-based accounts of language development view children’s earliest productions as rote-learned phrases that lack internal structure. Knowledge of linguistic structure emerges gradually and in a piecemeal fashion, with frequency playing a large role in the order of emergence for different syntactic structures.
Article
Phonological Templates in Development
Marilyn May Vihman
Child phonological templates are idiosyncratic word production patterns. They can be understood as deriving, through generalization of patterning, from the very first words of the child, which are typically close in form to their adult targets. Templates can generally be identified only some time after a child’s first 20–50 words have been produced but before the child has achieved an expressive lexicon of 200 words. The templates appear to serve as a kind of ‘holding strategy’, a way for children to produce more complex adult word forms while remaining within the articulatory, planning, and memory limitations of the early word period. Templates have been identified in the early words of children acquiring a number of languages, although not all children give clear evidence of using them. Within a given language we see a range of different templatic patterns, but these are nevertheless broadly shaped by the prosodic characteristics of the adult language as well as by the idiosyncratic production preferences of a given child; it is thus possible to begin to outline a typology of child templates. However, the evidence base for most languages remains small, ranging from individual diary studies to rare longitudinal studies of as many as 30 children. Thus templates undeniably play a role in phonological development, but their extent of use or generality remains unclear, their timing for the children who show them is unpredictable, and their period of sway is typically brief—a matter of a few weeks or months at most. Finally, the formal status of child phonological templates and their relationship to adult grammars have so far received relatively little attention, but the closest parallels may lie in active novel word formation and in the lexicalization of commonly occurring expressions, both of which draw, like child templates, on the mnemonic effects of repetition.
Article
Speech Perception and Generalization Across Talkers and Accents
Kodi Weatherholtz and T. Florian Jaeger
The seeming ease with which we usually understand each other belies the complexity of the processes that underlie speech perception. One of the biggest computational challenges is that different talkers realize the same speech categories (e.g., /p/) in physically different ways. We review the mixture of processes that enable robust speech understanding across talkers despite this lack of invariance. These processes range from automatic pre-speech adjustments of the distribution of energy over acoustic frequencies (normalization) to implicit statistical learning of talker-specific properties (adaptation, perceptual recalibration) to the generalization of these patterns across groups of talkers (e.g., gender differences).
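The normalization process described above can be illustrated with a minimal sketch. Lobanov-style z-scoring is one standard method for talker normalization (the function name and formant values here are hypothetical, chosen only for illustration): each talker's acoustic measurements are rescaled by that talker's own mean and standard deviation, so that the "same" vowel categories line up across physically different voices.

```python
from statistics import mean, stdev

def lobanov(formants):
    """Z-score a talker's formant measurements (Hz) against that talker's
    own mean and standard deviation, yielding talker-independent values."""
    m, s = mean(formants), stdev(formants)
    return [(f - m) / s for f in formants]

# Two hypothetical talkers producing the same three vowels: the raw F1
# values differ (a shorter vocal tract yields higher frequencies) ...
talker_a_f1 = [300.0, 500.0, 700.0]
talker_b_f1 = [360.0, 600.0, 840.0]

# ... but after normalization both map onto the same common scale.
print(lobanov(talker_a_f1))  # [-1.0, 0.0, 1.0]
print(lobanov(talker_b_f1))  # [-1.0, 0.0, 1.0]
```

The sketch captures only the "normalization" end of the range described above; adaptation and perceptual recalibration involve incremental statistical learning rather than a fixed rescaling.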
Article
Communicative Repertoires in African Languages
Anne Storch
Even though the concept of multilingualism is well established in linguistics, it is problematic, especially in light of the actual ways in which repertoires are composed and used. The term “multilingualism” bears in itself the notion of several clearly discernable languages and suggests that regardless of the sociolinguistic setting, language ideologies, social history and context, a multilingual individual will be able to separate the various codes that constitute his or her communicative repertoire and use them deliberately, in a reflective way. Such a perspective on language is not helpful in understanding any sociolinguistic setting and linguistic practice that is not a European one and that does not correlate with ideologies and practices of a standardized, national language. This applies to the majority of people living on the planet and to most people who speak African languages. These speakers differ from the ideological concept of the “Western monolingual,” as they employ diverse practices and linguistic features on a daily basis and do so in a very flexible way. Which linguistic features a person uses thereby depends on factors such as socialization, placement, and personal interest, desires and preferences, which are all likely to change several times during a person’s life. Therefore, communicative repertoires are never stable, neither in their composition nor in the ways they are ideologically framed and evaluated. A more productive perspective on the phenomenon of complex communicative repertoires puts the concept of languaging in the center, which refers to communicative practices, dynamically operating between different practices and (multimodal) linguistic features. Individual speakers thereby perceive and evaluate ways of speaking according to the social meaning, emotional investment, and identity-constituting functions they can attribute to them.
The fact that linguistic reflexivity for African speakers might almost always involve the negotiation of the self in a (post)colonial world invites us to consider a critical evaluation, based on approaches such as Southern Theory, of established concepts of “language” and “multilingualism”: languaging is also a postcolonial experience, and this experience often translates into how speakers single out specific ways of speaking as “more prestigious” or “more developed” than others. The inclusion of African metalinguistics and indigenous knowledge is consequently an important task for linguists studying communicative repertoires in Africa or its diaspora.
Article
Korean Phonetics and Phonology
Young-mee Yu Cho
Due to a number of unusual and interesting properties, Korean phonetics and phonology have been generating productive discussion within modern linguistic theories, starting from structuralism, moving to classical generative grammar, and more recently to post-generative frameworks of Autosegmental Theory, Government Phonology, Optimality Theory, and others. In addition, it has been discovered that a description of important issues of phonology cannot be properly made without referring to the interface between phonetics and phonology on the one hand, and phonology and morpho-syntax on the other. Some phonological issues from Standard Korean are still under debate and will likely be of value in helping to elucidate universal phonological properties with regard to phonation contrast, vowel and consonant inventories, consonantal markedness, and the motivation for prosodic organization in the lexicon.
Article
Aphasia from a Neurolinguistic Perspective
Susan Edwards and Christos Salis
Aphasia is an acquired language disorder subsequent to brain damage in the left hemisphere. It is characterized by diminished abilities to produce and understand both spoken and written language compared with the speaker’s presumed ability pre-cerebral damage. The type and severity of the aphasia depend not only on the location and extent of the cerebral damage but also on the effect the lesion has on connecting areas of the brain. Type and severity of aphasia are diagnosed in comparison with assumed normal adult language. Language changes associated with normal aging are not classed as aphasia. The diagnosis and assessment of aphasia in children, which is unusual, takes account of age norms.
The most common cause of aphasia is a cerebrovascular accident (CVA), commonly referred to as a stroke, but brain damage following traumatic head injury, such as from road accidents or gunshot wounds, can also cause aphasia. Aphasia following such traumatic events is non-progressive, in contrast to aphasia arising from brain tumor, some types of infection, or language disturbances in progressive conditions such as Alzheimer’s disease, where the language disturbance increases as the disease progresses.
The diagnosis of primary progressive aphasia (as opposed to non-progressive aphasia, the main focus of this article) is based on inclusion and exclusion criteria proposed by M. Marsel Mesulam in 2001. The inclusion criteria are as follows: difficulty with language that interferes with activities of daily living, with aphasia as the most prominent symptom. The exclusion criteria are as follows: another non-degenerative disease or medical disorder; a psychiatric diagnosis; impairment of episodic memory, visual memory, or visuo-perceptual processing; and, finally, initial behavioral disturbance.
Aphasia involves one or more of the building blocks of language (phonemes, morphology, lexis, syntax, and semantics), and the deficits occur in various clusters or patterns across the spectrum. The degree of impairment varies across modalities, with written language often, but not always, more affected than spoken language. In some cases, understanding of language is relatively preserved; in others, both production and understanding are affected. At the most severe end of the spectrum, a person with aphasia may be unable to communicate by either speech or writing and may be able to understand virtually nothing or only very limited social greetings. At the least severe end of the spectrum, the aphasic speaker may experience occasional word finding difficulties, often involving nouns; but unlike difficulties in recalling proper nouns in normal aging, word retrieval problems in mild aphasia include other word classes.
Descriptions of different clusters of language deficits have led to the notion of syndromes. Despite great variations in the condition, patterns of language deficits associated with different areas of brain damage have been influential in understanding language-brain relationships. Increasing sophistication in language assessment and neurological investigations is contributing to a greater, yet still incomplete, understanding of language-brain relationships.
Article
Dene-Yeniseian
Edward Vajda
Dene-Yeniseian is a proposed genealogical link between the widespread North American language family Na-Dene (Athabaskan, Eyak, Tlingit) and Yeniseian in central Siberia, represented today by the critically endangered Ket and several documented extinct relatives. The Dene-Yeniseian hypothesis is an old idea, but since 2006 new evidence supporting it has been published in the form of shared morphological systems and a modest number of lexical cognates showing interlocking sound correspondences. Recent data from human genetics and folklore studies also increasingly indicate the plausibility of a prehistoric (probably Late Pleistocene) connection between populations in northwestern North America and the traditionally Yeniseian-speaking areas of south-central Siberia. At present, Dene-Yeniseian cannot be accepted as a proven language family. Acceptance awaits the expansion and further critical testing of the purported lexical and morphological correspondences between Yeniseian and Na-Dene, and a clearer picture of their relationship to Old World families such as Sino-Tibetan and Caucasian, as well as the isolate Burushaski (all earlier proposed as relatives of Yeniseian, and sometimes also of Na-Dene).
Article
Generative Grammar
Knut Tarald Taraldsen
This article presents different types of generative grammar that can be used as models of natural languages, focusing on a small subset of all the systems that have been devised. The central idea behind generative grammar may be rendered in the words of Richard Montague: “I reject the contention that an important theoretical difference exists between formal and natural languages” (“Universal Grammar,” Theoria, 36 [1970], 373–398).