21-30 of 92 Results for: Phonetics/Phonology

Article

Adrian P. Simpson and Melanie Weirich

Speech carries a wealth of information about the speaker beyond the verbal message itself, ranging from emotional state (sad, happy, bored, etc.) to illness (e.g., a cold). Central among these features are a speaker’s gender and sexual orientation. In part this is an inevitable product of differences in speakers’ anatomical dimensions; for example, males on average have lower-pitched voices than females because their longer, thicker vocal folds vibrate more slowly. Arguably, however, much more is learned by speakers as they construct their gender or identify with a particular sexual orientation. Differences in speech emerge already in young children, before any marked gender-related anatomical differences develop, underlining the importance of behavioral patterns. Gender, gender identity, and sexual orientation are encoded in speech in a range of phonetic parameters relating both to phonation (the activity of the vocal folds) and to articulation (the dimensions and configuration of the supraglottal cavities), as well as in the use of pitch patterns and differences in voice quality (the way in which the vocal folds vibrate). Differences in the size and configuration of the supraglottal cavities give rise to differences in the size of the acoustic vowel space, as well as to subtle differences in the production of individual sounds, such as the sibilant [s]. Furthermore, significant and systematic gender-specific differences have been found in the average duration of utterances and individual sounds, which in turn bear a complex relationship to the perception of tempo.
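
As a rough, hypothetical illustration of the anatomical point (an idealization, not drawn from the article itself), the fundamental frequency of vocal-fold vibration is often approximated with the ideal string model, under which longer, more massive folds vibrate more slowly:

```latex
% Ideal string approximation of vocal-fold vibration (illustrative assumption):
% F0 falls as effective fold length L increases and rises with
% longitudinal tissue stress \sigma; \rho is tissue density.
F_0 \approx \frac{1}{2L}\sqrt{\frac{\sigma}{\rho}}
```

On this idealization, the longer vocal folds typical of adult males lower F0 directly through the 1/(2L) term, consistent with the lower average pitch described above.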

Article

Marianne Pouplier

One of the most fundamental problems in research on spoken language is to understand how the categorical, systemic knowledge that speakers have in the form of a phonological grammar maps onto the continuous, high-dimensional physical speech act that transmits the linguistic message. The invariant units of phonological analysis have no invariant analogue in the signal: any given phoneme can manifest itself in many possible variants, depending on context, speech rate, utterance position, and the like, and the acoustic cues for a given phoneme are spread out over time across multiple linguistic units. Speakers and listeners are highly knowledgeable about the lawfully structured variation in the signal, and they skillfully exploit articulatory and acoustic trading relations when speaking and perceiving. For the scientific description of spoken language, understanding this association between abstract, discrete categories and continuous speech dynamics remains a formidable challenge. Articulatory Phonology and the associated Task Dynamic model offer one particular proposal for meeting this challenge, using the mathematics of dynamical systems; the central insight is that spoken language is fundamentally based on the production and perception of linguistically defined patterns of motion. In Articulatory Phonology, the primitive units of phonological representation are called gestures. Gestures are defined by linear second-order differential equations, giving them inherent spatial and temporal specifications. Gestures control the vocal tract at a macroscopic level, harnessing the many degrees of freedom of the vocal tract into low-dimensional control units. Phonology, in this model, thus directly governs the spatial and temporal orchestration of vocal tract actions.
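
For concreteness, the second-order equations referred to here are standardly presented as critically damped mass-spring (point-attractor) systems over tract variables; the sketch below uses illustrative symbols rather than any specific published parameterization:

```latex
% Point-attractor dynamics of a single gesture (illustrative notation):
% z is a tract variable (e.g., lip aperture), z_0 the gestural target,
% k the stiffness, b the damping, and m the (unit) mass.
m\ddot{z} + b\dot{z} + k\,(z - z_0) = 0, \qquad b = 2\sqrt{mk}\ \text{(critical damping)}
```

With critical damping, the tract variable z moves smoothly toward its target z_0 without oscillating, and the stiffness k sets the gesture's intrinsic time scale.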

Article

Alexis Michaud and Bonny Sands

Tonogenesis is the development of distinctive tone from earlier non-tonal contrasts. A well-understood case is that of Vietnamese (similar in its essentials to the developments in Chinese and many languages of the Tai-Kadai and Hmong-Mien families), where the loss of final laryngeal consonants led to the creation of three tones, and the tones later multiplied as voicing oppositions on initial consonants waned. This is by no means the only attested diachronic scenario, however. Besides well-known cases of tonogenesis in East Asia, this survey includes discussions of less well-known cases from language families including Athabaskan, Chadic, Khoe, and Niger-Congo. There is tonogenetic potential in various series of phonemes: glottalized versus plain consonants, unvoiced versus voiced, aspirated versus unaspirated, geminate versus simple (and, more generally, tense versus lax), and even vowels, whose intrinsic fundamental frequency can transphonologize to tone. We draw attention to less familiar tonogenetic triggers, such as [+ATR] vowels, aspirates, and morphotonological alternations. The ways in which these common phonetic precursors to tone play out in a given language depend on phonological factors, as well as on other dimensions of a language’s structure and on patterns of language contact, resulting in a great diversity of evolutionary paths in tone systems. In some language families (such as Niger-Congo and Khoe), recent tonal developments are increasingly well understood, but working out the origin of the earliest tonal contrasts (which are likely to date back thousands of years earlier than tonogenesis in Sino-Tibetan languages, for instance) remains a mid- to long-term goal for comparative-historical research.

Article

Martha Tyrone

Sign phonetics is the study of how sign languages are produced and perceived, by native as well as by non-native signers. Most research on sign phonetics has focused on American Sign Language (ASL), but there are many different sign languages around the world, and several of these, including British Sign Language, Taiwan Sign Language, and Sign Language of the Netherlands, have been studied at the level of phonetics. Sign phonetics research can focus on individual lexical signs or on the movements of the nonmanual articulators that accompany those signs. The production and perception of a sign language can be influenced by phrase structure, linguistic register, the signer’s linguistic background, the visual perception mechanism, the anatomy and physiology of the hands and arms, and many other factors. What sets sign phonetics apart from the phonetics of spoken languages is that the two language modalities use different mechanisms of production and perception, which could in turn result in structural differences between modalities. Most studies of sign phonetics have been based on careful analyses of video data. Some studies have collected kinematic limb movement data during signing and carried out quantitative analyses of sign production related to, for example, signing rate, phonetic environment, or phrase position. Similarly, studies of sign perception have recorded participants’ ability to identify and discriminate signs, depending, for example, on slight variations in the signs’ forms or differences in the participants’ language background. Most sign phonetics research is quantitative and lab-based.

Article

Amalia Arvaniti

Prosody is an umbrella term used to cover a variety of interconnected and interacting phenomena, namely stress, rhythm, phrasing, and intonation. The phonetic expression of prosody relies on a number of parameters, including duration, amplitude, and fundamental frequency (F0). The same parameters are also used to encode lexical contrasts (such as tone), as well as paralinguistic phenomena (such as anger, boredom, and excitement). Further, the exact function and organization of the phonetic parameters used for prosody differ across languages. These considerations make it imperative to distinguish the linguistic phenomena that make up prosody from their phonetic exponents, and similarly to distinguish between the linguistic and paralinguistic uses of the latter. A comprehensive understanding of prosody relies on the idea that speech is prosodically organized into phrasal constituents, the edges of which are phonetically marked in a number of ways, for example, by articulatory strengthening at the beginning and lengthening at the end. Phrases are also internally organized either by stress, that is, around syllables that are more salient than others (as in English and Spanish), or by the repetition of a relatively stable tonal pattern over short phrases (as in Korean, Japanese, and French). Both types of organization give rise to rhythm, the perception of speech as consisting of groups with a similar, repetitive pattern. Tonal specification over phrases is also used for intonational purposes, that is, to mark phrasal boundaries and to express information structure and pragmatic meaning. Taken together, the components of prosody help with the organization and planning of speech, while prosodic cues are used by listeners during both language acquisition and speech processing. Importantly, prosody does not operate independently of segments; rather, it profoundly affects segment realization, making the incorporation of an understanding of prosody into experimental design essential for most phonetic research.

Article

Gunnar Hansson

The term consonant harmony refers to a class of systematic sound patterns in which consonants interact in some assimilatory way even though they are not adjacent to each other in the word. Such long-distance assimilation can sometimes hold across a significant stretch of intervening vowels and consonants, as in Samala (Ineseño Chumash) /s-am-net-in-waʃ/ → [ʃamnetiniwaʃ] “they did it to you”, where the alveolar sibilant /s‑/ of the 3.sbj prefix assimilates to the postalveolar sibilant /ʃ/ of the past suffix /‑waʃ/ across several intervening syllables that contain a variety of non-sibilant consonants. While consonant harmony most frequently involves coronal-specific contrasts, as in the Samala case, there are numerous cases of assimilation in other phonological properties, such as laryngeal features, nasality, secondary articulation, and even constriction degree. Not all cases of consonant harmony result in overt alternations like the [s] ∼ [ʃ] alternation in the Samala 3.sbj prefix; sometimes the harmony is merely a phonotactic restriction on the shape of morphemes (roots) within the lexicon. Consonant harmony tends to implicate only some group (natural class) of consonants that already share a number of features, and are hence relatively similar, while ignoring less similar consonants. The distance between the potentially interacting consonants can also play a role; for example, in many cases assimilation is limited to relatively short-distance ‘transvocalic’ contexts (…CVC…), though the interpretation of such locality restrictions remains a matter of debate. Consonants that do not directly participate in the harmony (as triggers or undergoers of assimilation) are typically neutral and transparent, allowing the assimilating property to be propagated across them. However, this is not universally true; in recent years several cases have come to light in which certain segments act as blockers when they intervene between a potential trigger-target pair. The main significance of consonant harmony for linguistic theory lies in its apparently non-local character and the challenges that this poses for theories of phonological representations and processes, as well as for formal models of phonological learning. Along with other types of long-distance dependencies in segmental phonology (e.g., long-distance dissimilation, and vowel harmony systems with one or more transparent vowels), sound patterns of consonant harmony have contributed to the development of many theoretical constructs, such as autosegmental (nonlinear) representations, feature geometry, underspecification, feature spreading, strict locality (vs. ‘gapped’ representations), parametrized visibility, agreement constraints, and surface correspondence relations. The formal analysis of long-distance assimilation (and dissimilation) remains a rich and vibrant area of theoretical research, and the empirical base for such inquiry continues to be expanded. On the one hand, previously undocumented cases (or new, surprising details of known cases) continue to be added to the corpus of attested consonant harmony patterns. On the other hand, artificial phonology learning experiments allow the properties of typologically rare or unattested patterns to be explored in a controlled laboratory setting.
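
To make the long-distance character of such a pattern concrete, the following is a minimal, hypothetical sketch in Python (not an analysis from the article) of regressive sibilant harmony as an unbounded string mapping: every sibilant assimilates to the rightmost sibilant of the word, however much non-sibilant material intervenes. The epenthetic vowel in the attested Samala surface form is deliberately ignored.

```python
# Toy sketch of regressive sibilant harmony (hypothetical rule, for
# illustration only). Every sibilant agrees in anteriority with the
# rightmost sibilant of the word, no matter how many non-sibilant
# segments intervene.

SIBILANTS = {"s": "anterior", "ʃ": "posterior"}
HARMONIZE = {"anterior": {"s": "s", "ʃ": "s"},
             "posterior": {"s": "ʃ", "ʃ": "ʃ"}}

def sibilant_harmony(word: str) -> str:
    # The rightmost sibilant acts as the trigger.
    trigger = next((c for c in reversed(word) if c in SIBILANTS), None)
    if trigger is None:
        return word  # no sibilants, nothing to harmonize
    value = SIBILANTS[trigger]
    # Rewrite every sibilant to agree with the trigger; leave the rest alone.
    return "".join(HARMONIZE[value].get(c, c) for c in word)

# /s-am-net-in-waʃ/: the prefixal /s/ surfaces as [ʃ] under the influence
# of the suffixal /ʃ/ several syllables away (epenthesis not modeled).
print(sibilant_harmony("samnetinwaʃ"))  # -> ʃamnetinwaʃ
```

The point of the sketch is simply that the mapping is defined over the whole word at once, with no bound on the distance between trigger and target, which is what makes such patterns challenging for strictly local theories of assimilation.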

Article

Martin Maiden

Dalmatian is an extinct group of Romance varieties spoken on the eastern Adriatic seaboard, best known from its Vegliote variety, spoken on the island of Krk (also called Veglia). Vegliote is principally represented by the linguistic testimony of its last speaker, Tuone Udaina, who died at the end of the 19th century. By the time Udaina’s Vegliote could be explored by linguists (principally by Matteo Bartoli), he had apparently not actively spoken the language for decades, and his linguistic testimony is imperfect, influenced as it is, for example, by the Venetan dialect that he habitually spoke. Nonetheless, his Vegliote reveals various distinctive and recurrent linguistic traits, notably in the domain of phonology (for example, pervasive and complex patterns of vowel diphthongization) and morphology (notably a general collapse of the inflexional system of tense and mood morphology found across Romance, but also an unusual type of synthetic future form).

Article

Geoffrey K. Pullum

English is both the most studied of the world’s languages and the most widely used. It comes closer than any other language to functioning as a world communication medium and is very widely used for governmental purposes. This situation is the result of a number of historical accidents of different magnitudes. The linguistic properties of the language itself would not have motivated its choice (contra the talk of prescriptive usage writers who stress the clarity and logic that they believe English to have). Divided into multiple dialects, English has a phonological system involving remarkably complex consonant clusters and a large inventory of distinct vowel nuclei; a bad, confusing, and hard-to-learn alphabetic orthography riddled with exceptions, ambiguities, and failures of the spelling to correspond to the pronunciation; a morphology that is rather more complex than is generally appreciated, with seven or eight paradigm patterns and a couple of hundred irregular verbs; a large multilayered lexicon containing roots of several quite distinct historical sources; and a syntax that despite its very widespread SVO (Subject-Verb-Object) basic order in the clause is replete with tricky details. For example, there are crucial restrictions on government of prepositions, many verb-preposition idioms, subtle constraints on the intransitive prepositions known as “particles,” an important distinction between two (or under a better analysis, three) classes of verb that actually have different syntax, and a host of restrictions on the use of its crucial “wh-words.” It is only geopolitical and historical accidents that have given English its enormous importance and prestige in the world, not its inherent suitability for its role.

Article

Irina Monich

Tone is indispensable for understanding many morphological systems of the world. Tonal phenomena may serve the morphological needs of a language in a variety of ways: segmental affixes may be specified for tone just as roots are; affixes may have purely tonal exponents that associate to segmental material provided by other morphemes; affixes may consist of tonal melodies, or “templates”; and tonal processes may apply in a way that is sensitive to morphosyntactic boundaries, delineating word-internal structure. Two behaviors set tonal morphemes apart from other kinds of affixes: their mobility and their ability to apply phrasally (i.e., beyond the limits of the word). Both floating tones and tonal templates can apply to words that are either phonologically grouped with the word containing the tonal morpheme or syntactically dependent on it. Problems generally associated with featural morphology are even more acute with regard to tonal morphology because of the vast diversity of tonal phenomena and the versatility with which the human language faculty puts pitch to use. The ambiguity associated with assigning a proper role to tone in a given morphological system necessitates placing further constraints on our theory of grammar. Perhaps more than any other morphological phenomenon, grammatical tone exposes an inadequacy in our understanding both of the relationship between the phonological and morphological modules of grammar and of the way that phonology may reference morphological information.

Article

This article discusses several important phonological issues concerning subtractive processes in morphology. First, it addresses the scope of subtractive processes with which linguistic theories should be concerned; many subtractive processes fall within the realm of grammatical theory. Next, previous processual and affixal approaches to subtractive morphology and nonconcatenative allomorphy are reviewed. The article then takes up theoretical restrictiveness: proponents of the affixal view often claim that it is more restrictive than the processual view, but their argument is not convincing, and we do not yet know enough to settle questions of restrictiveness. Finally, earlier analyses of subtractive morphology in parallel and serial Optimality Theory are reviewed; the results so far do not permit a conclusive choice between parallelism and serialism. On the whole, too many matters remain unsettled to draw firm conclusions about the nature of subtractive processes in morphology.