61-70 of 109 Results for: Phonetics/Phonology


Bracketing Paradoxes in Morphology  

Heather Newell

Bracketing paradoxes—constructions whose morphosyntactic and morpho-phonological structures appear to be irreconcilably at odds (e.g., unhappier)—are unanimously taken to point to truths about the derivational system that we have not yet grasped. Consider that the prefix un- must be structurally separate in some way from happier both for its own reasons (its [n] surprisingly does not assimilate in Place to a following consonant (e.g., u[n]popular)), and for reasons external to the prefix (the suffix -er must be insensitive to the presence of un-, as the comparative cannot attach to bases of three syllables or longer (e.g., *intelligenter)). But, un- must simultaneously be present in the derivation before -er is merged, so that unhappier can have the proper semantic reading (‘more unhappy’, and not ‘not happier’). Bracketing paradoxes emerged as a problem for generative accounts of both morphosyntax and morphophonology only in the 1970s. With the rise of restrictions on and technology used to describe and represent the behavior of affixes (e.g., the Affix-Ordering Generalization, Lexical Phonology and Morphology, the Prosodic Hierarchy), morphosyntacticians and phonologists were confronted with this type of inconsistent derivation in many unrelated languages.
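The conflict the abstract describes can be made concrete in a short sketch. The syllable counts and names below are stipulated purely for illustration: the comparative's size restriction is satisfied under the phonological bracketing [un [happy -er]] but violated under the semantic bracketing [[un happy] -er], which is nonetheless the one that yields the attested reading.

```python
# Hypothetical illustration of the 'unhappier' paradox.
# Syllable counts are stipulated in a lookup table, not computed.
SYLLABLE_COUNT = {"happy": 2, "unhappy": 3, "intelligent": 4}

def comparative_ok(base):
    """-er attaches only to bases of at most two syllables."""
    return SYLLABLE_COUNT[base] <= 2

# Phonological bracketing [un [happy -er]]: -er attaches to 'happy' -> fine.
# Semantic bracketing [[un happy] -er]: -er attaches to 'unhappy' -> blocked,
# yet only this bracketing yields the reading 'more unhappy'.
print(comparative_ok("happy"))        # True
print(comparative_ok("unhappy"))      # False
print(comparative_ok("intelligent"))  # False: *intelligenter
```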


Phonetics of Vowels  

Christine Ericsdotter Nordgren

Speech sounds are commonly divided into two main categories in human languages: vowels, such as ‘e’, ‘a’, ‘o’, and consonants, such as ‘k’, ‘n’, ‘s’. This division is made on the basis of both phonetic and phonological principles, which is useful from a general linguistic point of view but problematic for detailed description and analysis. The main differences between vowels and consonants are that (1) vowels are sounds produced with an open airway between the larynx and the lips, at least along the midline, whereas consonants are produced with a stricture or closure somewhere along it; and (2) vowels tend to be syllabic in languages, meaning that they embody a sonorous peak in a syllable, whereas only some kinds of consonants tend to be syllabic. There are two main physical components needed to produce a vowel: a sound source, typically a tone produced by vocal fold vibration at the larynx, and a resonator, typically the upper airways. When the tone resonates in the upper airways, it gets a specific quality of sound, perceived and interpreted as a vowel quality, for example, ‘e’ or ‘a’. Which vowel quality is produced is determined by the shape of the inner space of the throat and mouth, the vocal tract shape, created by the speaker’s configuration of the articulators, which include the lips, tongue, jaw, hard and soft palate, pharynx, and larynx. Which vowel is perceived is determined by the auditory and visual input as well as by the listener’s expectations and language experience. Diachronic and synchronic studies on vowel typology show main trends in the vowel inventories in the world’s languages, which can be associated with human phonetic aptitude.
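The source-resonator relationship just described can be illustrated with the simplest resonator model: a uniform tube closed at the glottis and open at the lips, whose resonances fall at odd quarter-wavelength frequencies. The tube length and speed of sound below are illustrative textbook values, not measurements of any speaker.

```python
def tube_resonances(length_m=0.175, c=350.0, n=3):
    """Resonances of a uniform closed-open tube: F_k = (2k - 1) * c / (4L)."""
    return [(2 * k - 1) * c / (4 * length_m) for k in range(1, n + 1)]

# A ~17.5 cm tract yields resonances near 500, 1500, and 2500 Hz,
# close to the formants of a neutral (schwa-like) vowel.
print(tube_resonances())
```

Changing the tube shape away from uniform (by moving the articulators) shifts these resonances, which is what distinguishes one vowel quality from another.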


Phonological Variation and Change in Italian  

Alessandro Vietti

The phonology of Italian is subject to considerable variability at both the segmental and the prosodic level. Changes affect different features of the phonological system, such as the composition of the inventory of phonemes and allophones, the phonotactic patterning of phonemes, and their lexical distribution. On the prosodic level, the variability takes the form of a composite collection of intonational patterns. In fact, the classification of intonational contours in geographical varieties appears fuzzier and less precise than the traditional division into geographical areas based on segmental features. The reasons for the high variability must be traced back, on the one hand, to the rapid and recent standardization and, on the other hand, to the prolonged contact with the Romance dialects of Italy. Variation in Italian phonology can be traced back to two main dimensions: a geographic dimension, accounting for a large proportion of the total variability, and a social dimension that regulates variety-internal variation. The overall picture can be understood as a combination of vertical and horizontal sociolinguistic forces. Horizontal dynamics are responsible for the creation of a pluricentric standard, that is, a multiplicity of models of pronunciation that can be considered geographical versions of the standard. Vertical dynamics bring about the formation of new norms at a local level and, most importantly, generate a continuum of dialects ranging from the (regional) standard to the most local variety. Moving along this vertical continuum from the standard down to the local variety, variability increases, providing a source for the emergence of social and stylistic values.


Autosegmental Phonology  

William R. Leben

Autosegments were introduced by John Goldsmith in his 1976 M.I.T. dissertation to represent tone and other suprasegmental phenomena. Goldsmith’s intuition, embodied in the term he created, was that autosegments constituted an independent, conceptually equal tier of phonological representation, with both tiers realized simultaneously like the separate voices in a musical score. The analysis of suprasegmentals came late to generative phonology, even though it had been tackled in American structuralism with the long components of Harris’s 1944 article, “Simultaneous components in phonology,” and despite being a particular focus of Firthian prosodic analysis. The standard version of generative phonology of the era (Chomsky and Halle’s The Sound Pattern of English) made no special provision for phenomena that had been labeled suprasegmental or prosodic by earlier traditions. An early sign that tones required a separate tier of representation was the phenomenon of tonal stability. In many tone languages, when vowels are lost historically or synchronically, their tones remain. The behavior of contour tones in many languages also falls into place when the contours are broken down into sequences of level tones on an independent level of representation. The autosegmental framework captured this naturally, since a sequence of elements on one tier can be connected to a single element on another. But the single most compelling aspect of the early autosegmental model was a natural account of tone spreading, a very common process that was only awkwardly captured by rules of whatever sort. Goldsmith’s autosegmental solution was the Well-Formedness Condition, requiring, among other things, that every tone on the tonal tier be associated with some segment on the segmental tier, and vice versa. Tones thus spread more or less automatically to segments lacking them.
The Well-Formedness Condition, at the very core of the autosegmental framework, was a rare constraint, posited nearly two decades before Optimality Theory. One-to-many associations and spreading onto adjacent elements are characteristic of tone but not confined to it. Similar behaviors are widespread in long-distance phenomena, including intonation, vowel harmony, and nasal prosodies, as well as more locally with partial or full assimilation across adjacent segments. The early autosegmental notion of tiers of representation that were distinct but conceptually equal soon gave way to a model with one basic tier connected to tiers for particular kinds of articulation, including tone and intonation, nasality, vowel features, and others. This has led to hierarchical representations of phonological features in current models of feature geometry, replacing the unordered distinctive feature matrices of early generative phonology. Autosegmental representations and processes also provide a means of representing non-concatenative morphology, notably the complex interweaving of roots and patterns in Semitic languages. Later work modified many of the key properties of the autosegmental model. Optimality Theory has led to a radical rethinking of autosegmental mapping, delinking, and spreading as they were formulated under the earlier derivational paradigm.
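The mapping the Well-Formedness Condition enforces can be sketched as a small procedure: associate tones to syllables one-to-one from left to right, then spread the final tone onto any leftover syllables. This is an illustrative reconstruction of the standard association conventions, not Goldsmith's own formulation, and the case of leftover tones (which yields contours) is omitted.

```python
def associate(tones, syllables):
    """One-to-one, left-to-right tone association with rightward spreading.

    Every syllable ends up linked to some tone, as the Well-Formedness
    Condition requires; surplus syllables receive the final tone.
    """
    return [(syl, tones[min(i, len(tones) - 1)])
            for i, syl in enumerate(syllables)]

# An HL melody on a four-syllable word: the L spreads to the last syllables.
print(associate(["H", "L"], ["ba", "ba", "ba", "ba"]))
# [('ba', 'H'), ('ba', 'L'), ('ba', 'L'), ('ba', 'L')]
```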


Morphology and Phonotactics  

Maria Gouskova

Phonotactics is the study of restrictions on possible sound sequences in a language. In any language, some phonotactic constraints can be stated without reference to morphology, but many of the more nuanced phonotactic generalizations do make use of morphosyntactic and lexical information. At the most basic level, many languages mark the edges of words in some phonological way. Different phonotactic constraints hold of sounds that belong to the same morpheme as opposed to sounds that are separated by a morpheme boundary. Different phonotactic constraints may apply to morphemes of different types (such as roots versus affixes). There are also correlations between the phonotactic shapes of morphemes and the morphosyntactic and phonological rules they follow, which may in turn correlate with syntactic category, declension class, or etymological origin. Approaches to the interaction between phonotactics and morphology address two questions: (1) how to account for rules that are sensitive to morpheme boundaries and structure, and (2) how to determine the status of phonotactic constraints associated with only some morphemes. Theories differ as to how much morphological information phonology is allowed to access. In some theories of phonology, any reference to the specific identities or subclasses of morphemes would exclude a rule from the domain of phonology proper. These rules are either part of the morphology or are not given the status of a rule at all. Other theories allow the phonological grammar to refer to detailed morphological and lexical information. Depending on the theory, phonotactic differences between morphemes may receive direct explanations or be seen as the residue of historical change and not something that constitutes grammatical knowledge in the speaker’s mind.
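A boundary-sensitive constraint of the kind described can be sketched by representing words as sequences of morphemes rather than flat strings. The banned cluster 'tl' below is invented for illustration and is not a claim about any particular language.

```python
def violates_morpheme_internal_ban(morphemes, banned="tl"):
    """True if the banned sequence occurs inside a single morpheme.

    The same segments separated by a morpheme boundary do not count,
    so stating the constraint requires access to morphological structure.
    """
    return any(banned in m for m in morphemes)

print(violates_morpheme_internal_ban(["atla"]))      # True: *tl in one morpheme
print(violates_morpheme_internal_ban(["at", "la"]))  # False: t+l across a boundary
```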


Morpho-Phonological Processes in Korean  

Jongho Jun

How to properly analyze morpho-phonological processes has been an ongoing issue within generative linguistics. Morpho-phonological processes typically have exceptions, but they are nonetheless often productive. Such productive but exceptionful processes are difficult to analyze, since grammatical rules or constraints are normally invoked in the analysis of a productive pattern, whereas exceptions undermine the validity of those rules and constraints. In addition, the productivity of a morpho-phonological process may be gradient, possibly reflecting the relative frequency of the relevant pattern in the lexicon. Simple lexical listing of exceptions as suppletive forms would not be sufficient to capture such gradient productivity of a process with exceptions. It is then necessary to posit grammatical rules or constraints even for exceptionful processes as long as they are at least in part productive. Moreover, the productivity can be correctly estimated only when the domain of rule application is correctly identified. Consequently, a morpho-phonological process cannot be properly analyzed unless we possess both the correct description of its application conditions and the appropriate stochastic grammatical mechanisms to capture its productivity. The same issues arise in the analysis of morpho-phonological processes in Korean, in particular n-insertion, sai-siot, and vowel harmony. These morpho-phonological processes have many exceptions and variations, which make them look quite irregular and unpredictable. However, they have at least a certain degree of productivity. Moreover, the variable application of each process is still systematic in that various factors (phonological, morphosyntactic, sociolinguistic, and processing-related) contribute to the overall probability of rule application.
Crucially, grammatical rules and constraints, which have been proposed within generative linguistics to analyze categorical and exceptionless phenomena, may form an essential part of the analysis of the morpho-phonological processes in Korean. For an optimal analysis of each of the morpho-phonological processes in Korean, the correct conditions and domains for its application need to be identified first, and its exact productivity can then be measured. Finally, the appropriate stochastic grammatical mechanisms need to be found or developed in order to capture the measured productivity.
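One family of stochastic mechanisms of the kind the abstract calls for can be sketched as a logistic (MaxEnt-style) model: each phonological, morphosyntactic, or sociolinguistic factor carries a weight, and the summed weights determine the probability that the variable rule applies. The factor names and weights below are invented for illustration, not estimates for any Korean process.

```python
import math

def application_probability(factors, weights, bias=0.0):
    """Logistic probability that a variable rule applies to a given input."""
    score = bias + sum(weights.get(f, 0.0) for f in factors)
    return 1.0 / (1.0 + math.exp(-score))

# Invented weights: favoring factors raise the probability, disfavoring
# factors lower it, so the rule stays variable rather than categorical.
weights = {"compound_boundary": 1.2, "native_root": 0.8, "formal_register": -0.5}
p = application_probability(["compound_boundary", "native_root"], weights)
print(round(p, 3))
```

Fitting such weights to corpus frequencies is one way to capture the gradient productivity the abstract describes while still using grammatical constraints.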


Sign Language Phonology  

Diane Brentari, Jordan Fenlon, and Kearsy Cormier

Sign language phonology is the abstract grammatical component where primitive structural units are combined to create an infinite number of meaningful utterances. Although the notion of phonology is traditionally based on sound systems, phonology also includes the equivalent component of the grammar in sign languages, because it is tied to the grammatical organization, and not to particular content. This definition of phonology helps us see that the term covers all phenomena organized by constituents such as the syllable, the phonological word, and the higher-level prosodic units, as well as the structural primitives such as features, timing units, and autosegmental tiers, and it does not matter if the content is vocal or manual. Therefore, the units of sign language phonology and their phonotactics provide opportunities to observe the interaction between phonology and other components of the grammar in a different communication channel, or modality. This comparison allows us to better understand how the modality of a language influences its phonological system.


Zero Morphemes  

Eystein Dahl and Antonio Fábregas

Zero or null morphology refers to morphological units that are devoid of phonological content. Whether such entities should be postulated is one of the most controversial issues in morphological theory, with disagreements about how the concept should be delimited, what would count as an instance of zero morphology inside a particular theory, and whether such objects should be allowed even as mere analytical instruments. With respect to the first problem, given that zero morphology is a hypothesis that comes from certain analyses, delimiting what counts as a zero morpheme is not a trivial matter. The concept must be carefully differentiated from others that intuitively also involve situations where there is no overt morphological marking: cumulative morphology, phonological deletion, etc. As for the second issue, what counts as null can also depend on the specific theories where the proposal is made. In the strict sense, zero morphology involves a complete morphosyntactic representation that is associated with zero phonological content, but there are other notions of zero morphology that differ from the one discussed here, such as absolute absence of morphological expression, in addition to specific theory-internal interpretations of what counts as null. Thus, it is also important to consider the different ways in which something can be morphologically silent. Finally, with respect to the third side of the debate, arguments are made for and against zero morphology, notably from the perspectives of falsifiability, acquisition, and psycholinguistics. Of particular impact is the question of which properties a theory should have in order to block the possibility that zero morphology exists, and conversely the properties that theories that accept zero morphology associate with null morphemes. An important ingredient in this debate has to do with two empirical domains: zero derivation and paradigmatic uniformity. Ultimately, the plausibility of zero morphemes depends on whether theories that postulate them account for these two empirical patterns better than theories that ban zero morphology.



Formants  

Daniel Aalto, Jarmo Malinen, and Martti Vainio

Formant frequencies are the positions of the local maxima of the power spectral envelope of a sound signal. They arise from acoustic resonances of the vocal tract air column, and they provide substantial information about both consonants and vowels. In running speech, formants are crucial in signaling the movements with respect to place of articulation. Formants are normally defined as accumulations of acoustic energy estimated from the spectral envelope of a signal. However, not all such peaks can be related to resonances in the vocal tract, as they can be caused by the acoustic properties of the environment outside the vocal tract, and sometimes resonances are not seen in the spectrum. Such formants are called spurious and latent, respectively. By analogy, spectral maxima of synthesized speech are called formants, although they arise from a digital filter. Conversely, speech processing algorithms can detect formants in natural or synthetic speech by modeling its power spectral envelope using a digital filter. Such detection is most successful for male speech with a low fundamental frequency where many harmonic overtones excite each of the vocal tract resonances that lie at higher frequencies. For the same reason, reliable formant detection from females with high pitch or children’s speech is inherently difficult, and many algorithms fail to faithfully detect the formants corresponding to the lowest vocal tract resonant frequencies.
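The detection procedure described, modeling the power spectral envelope with a digital filter, can be sketched with linear predictive coding (LPC). The "speech" below is synthetic (white noise shaped by known resonances) so that the recovered peaks can be checked; the sampling rate, bandwidths, LPC order, and radius threshold are all illustrative choices, and only NumPy is assumed.

```python
import numpy as np

def resonant_noise(freqs, fs=8000.0, n=4000, bw=80.0, seed=0):
    """White noise filtered through a two-pole resonator per frequency."""
    x = np.random.default_rng(seed).standard_normal(n)
    for f in freqs:
        r = np.exp(-np.pi * bw / fs)                   # pole radius from bandwidth
        a1, a2 = -2 * r * np.cos(2 * np.pi * f / fs), r * r
        y = np.zeros(n)
        for t in range(n):                             # direct-form IIR filter
            y[t] = x[t]
            if t > 0:
                y[t] -= a1 * y[t - 1]
            if t > 1:
                y[t] -= a2 * y[t - 2]
        x = y
    return x

def lpc_formants(x, order=8, fs=8000.0, min_radius=0.9):
    """Estimate formants as angles of strong roots of an LPC polynomial."""
    n = len(x)
    r = np.array([x[: n - k] @ x[k:] for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:])                      # autocorrelation LPC
    poles = np.roots(np.concatenate(([1.0], -a)))
    # Weak poles are discarded: their peaks do not reflect strong resonances.
    return sorted(np.angle(p) * fs / (2 * np.pi)
                  for p in poles if np.angle(p) > 0 and abs(p) > min_radius)

sig = resonant_noise([500.0, 1500.0])
print([round(f) for f in lpc_formants(sig)])           # expect peaks near 500 and 1500 Hz
```

As the abstract notes, such all-pole modeling works best when the excitation covers the resonances densely; with a sparse high-pitched harmonic source instead of noise, the estimates degrade.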


Okinawan Language  

Shinsho Miyara

Within the Ryukyuan branch of the Japonic family of languages, present-day Okinawan retains numerous regional variants that have evolved for over a thousand years in the Ryukyuan Archipelago. Okinawan is one of the six Ryukyuan languages that UNESCO identified as endangered. One theoretically fascinating feature is that there is substantial evidence for establishing a high central phonemic vowel in Okinawan, although there is currently no overt surface [ï]. Moreover, the word-initial glottal stop [ʔ] in Okinawan is more salient than that in Japanese when followed by vowels, supporting the analysis that all Okinawan words are consonant-initial. Except for a few particles, all Okinawan words are composed of two or more morae. Suffixation or vowel lengthening (on nouns, verbs, and adjectives) provides the means for signifying persons as well as things related to human consumption or production. Every finite verb in Okinawan terminates with a mood element. Okinawan exhibits a complex interplay of mood or negative elements and focusing particles. Evidentiality is also realized as an obligatory verbal suffix.