Metrical structure refers to the phonological representations capturing the prominence relationships between syllables, usually manifested phonetically as differences in levels of stress. There is considerable diversity in the range of stress systems found cross-linguistically, although attested patterns represent a small subset of those that are logically possible. Stress systems may be broadly divided into two groups according to whether they are sensitive to the internal structure, or weight, of syllables, with further subdivisions based on the number of stresses per word and the location of those stresses. An ongoing debate in metrical stress theory concerns the role of constituency in characterizing stress patterns. Certain approaches capture stress directly in terms of a metrical grid in which more prominent syllables are associated with a greater number of grid marks than less prominent ones. Others posit the foot as a constituent, though theories differ in the inventory of feet they adopt. Support for foot-based theories of stress comes from segmental alternations that are explicable with reference to the foot but do not readily emerge in an apodal framework. Computational tools are increasingly being incorporated into the evaluation of phonological theories, including metrical stress theories. Computer-generated factorial typologies provide a rigorous means of determining the fit between the empirical coverage afforded by metrical theories and the typology of attested stress systems. Computational simulations also enable assessment of the learnability of metrical representations within different theories.
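As a deliberately simplified illustration of the grid-based view, the Python sketch below builds a metrical grid for a weight-insensitive, left-to-right trochaic pattern with initial primary stress. The function names and the particular pattern are assumptions made here for illustration, not drawn from any specific proposal.

```python
# Toy sketch: a metrical grid for a weight-insensitive trochaic pattern
# with primary stress on the initial syllable and secondary stress on
# every second syllable after it. Pattern and names are illustrative only.
def metrical_grid(syllables):
    """One column of grid marks per syllable: 1 = unstressed,
    2 = secondary stress, 3 = primary stress."""
    marks = []
    for i in range(len(syllables)):
        if i == 0:
            marks.append(3)   # primary stress on the initial syllable
        elif i % 2 == 0:
            marks.append(2)   # secondary stress on alternating later syllables
        else:
            marks.append(1)   # remaining syllables carry a single grid mark
    return marks

def show_grid(syllables):
    """Print the grid with more prominent syllables bearing more marks."""
    grid = metrical_grid(syllables)
    for level in range(max(grid), 0, -1):
        print("  ".join("x" if m >= level else " " for m in grid))
    print("  ".join(syllables))
```

Calling `show_grid(["pa", "ta", "ka", "la"])` prints a three-row grid in which the first syllable carries three marks, the third carries two, and the others one, mirroring the prominence relations the grid is meant to encode.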
Matthew K. Gordon
Phonological learnability deals with the formal properties of phonological languages and grammars, which are combined with algorithms that attempt to learn the language-specific aspects of those grammars. The classical learning task can be outlined as follows: Beginning at a predetermined initial state, the learner is exposed to positive evidence of legal strings and structures from the target language, and its goal is to reach a predetermined end state, where the grammar will produce or accept all and only the target language’s strings and structures. In addition, a phonological learner must acquire a set of language-specific representations for morphemes, words, and so on—and in many cases, the grammar and the representations must be acquired at the same time. Phonological learnability research seeks to determine how the architecture of the grammar, and the workings of an associated learning algorithm, influence success in completing this learning task, i.e., in reaching the end-state grammar. One basic question concerns convergence: Is the learning algorithm guaranteed to converge on an end-state grammar, or will it never stabilize? Is there a class of initial states, or a kind of learning data (evidence), that can prevent a learner from converging? Next is the question of success: Assuming the algorithm will reach an end state, will it match the target? In particular, will the learner ever acquire a grammar that deems grammatical a superset of the target language’s legal outputs? How can the learner avoid such superset end-state traps? Are learning biases advantageous or even crucial to success? In assessing phonological learnability, the analyst also has many differences between potential learning algorithms to consider. At the core of any algorithm is its update rule, meaning its method(s) of changing the current grammar on the basis of evidence.
Other key aspects of an algorithm include how it is triggered to learn, how it processes and/or stores the errors that it makes, and how it responds to noise or variability in the learning data. Ultimately, the choice of algorithm is also tied to the type of phonological grammar being learned, i.e., whether the generalizations being learned are couched within rules, features, parameters, constraints, rankings, and/or weightings.
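The notion of an update rule can be made concrete with a toy sketch. The Python fragment below, whose names and details are assumptions made here rather than any published learner, implements an error-driven, perceptron-style update over weighted constraints in the spirit of Harmonic Grammar learning: when the current grammar's predicted winner differs from the observed datum, constraint weights shift so as to favor the observed form.

```python
# Toy error-driven learner over weighted constraints (illustrative only).
# Each candidate is a dict mapping constraint names to violation counts.
def harmony(weights, violations):
    """Harmony = negated weighted sum of violations (higher is better)."""
    return -sum(weights.get(c, 0.0) * v for c, v in violations.items())

def predict(weights, candidates):
    """The grammar's current winner: the candidate with maximal harmony."""
    return max(candidates, key=lambda name: harmony(weights, candidates[name]))

def update(weights, candidates, observed, rate=0.1):
    """The update rule: on an error (prediction != observed datum), raise
    the weight of constraints the wrong winner violates more, and lower
    the weight of those the observed form violates more."""
    predicted = predict(weights, candidates)
    if predicted != observed:
        for c in set(candidates[observed]) | set(candidates[predicted]):
            diff = candidates[predicted].get(c, 0) - candidates[observed].get(c, 0)
            weights[c] = weights.get(c, 0.0) + rate * diff
    return weights
```

Feeding the learner the same observed winner repeatedly drives the weights toward a state where prediction and datum match, giving a minimal concrete instance of the convergence question raised above.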
The term consonant harmony refers to a class of systematic sound patterns in which consonants interact in some assimilatory way even though they are not adjacent to each other in the word. Such long-distance assimilation can sometimes hold across a significant stretch of intervening vowels and consonants, as in Samala (Ineseño Chumash) /s-am-net-in-waʃ/ → [ʃamnetiniwaʃ] “they did it to you”, where the alveolar sibilant /s‑/ of the 3.sbj prefix assimilates to the postalveolar sibilant /ʃ/ of the past suffix /‑waʃ/ across several intervening syllables that contain a variety of non-sibilant consonants. While consonant harmony most frequently involves coronal-specific contrasts, as in the Samala case, there are numerous cases of assimilation in other phonological properties, such as laryngeal features, nasality, secondary articulation, and even constriction degree. Not all cases of consonant harmony result in overt alternations, such as the [s] ∼ [ʃ] alternation in the Samala 3.sbj prefix; sometimes the harmony is merely a phonotactic restriction on the shape of morphemes (roots) within the lexicon. Consonant harmony tends to implicate only some group (natural class) of consonants that already share a number of features, and are hence relatively similar, while ignoring less similar consonants. The distance between the potentially interacting consonants can also play a role. For example, in many cases assimilation is limited to relatively short-distance ‘transvocalic’ contexts (…CVC…), though the interpretation of such locality restrictions remains a matter of debate. Consonants that do not directly participate in the harmony (as triggers or undergoers of assimilation) are typically neutral and transparent, allowing the assimilating property to be propagated across them.
However, this is not universally true; in recent years several cases have come to light in which certain segments can act as blockers when they intervene between a potential trigger-target pair. The main significance of consonant harmony for linguistic theory lies in its apparently non-local character and the challenges that this poses for theories of phonological representations and processes, as well as for formal models of phonological learning. Along with other types of long-distance dependencies in segmental phonology (e.g., long-distance dissimilation, and vowel harmony systems with one or more transparent vowels), sound patterns of consonant harmony have contributed to the development of many theoretical constructs, such as autosegmental (nonlinear) representations, feature geometry, underspecification, feature spreading, strict locality (vs. ‘gapped’ representations), parametrized visibility, agreement constraints, and surface correspondence relations. The formal analysis of long-distance assimilation (and dissimilation) remains a rich and vibrant area of theoretical research. The empirical base for such theoretical inquiry also continues to be expanded. On the one hand, previously undocumented cases (or new, surprising details of known cases) continue to be added to the corpus of attested consonant harmony patterns. On the other hand, artificial phonology learning experiments allow the properties of typologically rare or unattested patterns to be explored in a controlled laboratory setting.
Gerrit Jan Dimmendaal
Nilo-Saharan, a phylum spread mainly across an area south of the Afro-Asiatic and north of the Niger-Congo phylum, was established as a genetic grouping by Greenberg. In his earlier, continent-wide classification of African languages in articles published between 1949 and 1954, Greenberg had proposed a Macro-Sudanic family (renamed Chari-Nile in subsequent studies), consisting of a Central Sudanic and an Eastern Sudanic branch plus two isolated members, Berta and Kunama. This family formed the core of the Nilo-Saharan phylum as postulated by Greenberg in his The Languages of Africa, where a number of groups were added that had been treated as isolated units in his earlier classificatory work: Songhay, Eastern Saharan (now called Saharan), Maban and Mimi, Nyangian (now called Kuliak or Rub), Temainian (Temeinian), Coman (Koman), and Gumuz. Presenting an “encyclopaedic survey” of morphological structures for the more than 140 languages belonging to this phylum is impossible in such a brief study, also given the tremendous genetic distance between some of the major subgroups. Instead, typological variation in the morphological structure of these genetically related languages will be central. In concrete terms, this involves synchronic and diachronic observations on their formal properties (section 2), followed by an introduction to the nature of derivation, inflection, and compounding in Nilo-Saharan (section 3). This traditional compartmentalization has its limits because it misses the interaction with lexical structures and morphosyntactic properties in the phylum’s extant members, as argued in section 4. As pointed out in section 5, language contact must also have played an important role in the geographical spread of several of these typological properties.
Language is a system that maps meanings to forms, but the mapping is not always one-to-one. Variation means that one meaning corresponds to multiple forms, for example, faster ~ more fast. The choice is not uniquely determined by the rules of the language but is made by the individual at the time of performance (speaking, writing). Such choices abound in human language. They are usually not just a matter of free will but involve preferences that depend on the context, including the phonological context. Phonological variation is a situation in which the choice among expressions is phonologically conditioned, sometimes statistically, sometimes categorically. In this overview, we examine three studies of variable vowel harmony in three languages (Finnish, Hungarian, and Tommo So), formulated in three frameworks (Partial Order Optimality Theory, Stochastic Optimality Theory, and Maximum Entropy Grammar). For example, both Finnish and Hungarian have Backness Harmony: vowels must be all [+back] or all [−back] within a single word, with the exception of neutral vowels that are compatible with either. Surprisingly, some stems allow both [+back] and [−back] suffixes in free variation, for example, analyysi-na ~ analyysi-nä ‘analysis-ess’ (Finnish) and arzén-nak ~ arzén-nek ‘arsenic-dat’ (Hungarian). Several questions arise. Is the variation random or in some way systematic? Where is the variation possible? Is it limited to specific lexical items? Is the choice predictable to some extent? Are the observed statistical patterns dictated by universal constraints or learned from the ambient data? The analyses illustrate the usefulness of recent advances in the technological infrastructure of linguistics, in particular the constantly improving computational tools.
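The categorical core of Finnish Backness Harmony can be stated as a deterministic rule, and the disharmonic loanwords are precisely where such a rule breaks down and probabilistic grammars become useful. The Python sketch below is a toy: the vowel classes and the essive allomorphs -na/-nä are standard Finnish, but the function itself is an illustration assumed here, not an analysis from any of the three frameworks mentioned.

```python
# Toy categorical backness harmony for the Finnish essive suffix -na/-nä.
# Vowel classes are standard Finnish; the rule itself is illustrative only.
BACK = set("aou")
FRONT = set("äöy")
NEUTRAL = set("ie")   # neutral vowels are compatible with either class

def essive(stem):
    """Choose -na or -nä from the last non-neutral vowel of the stem;
    stems containing only neutral vowels take front -nä."""
    for ch in reversed(stem.lower()):
        if ch in BACK:
            return stem + "-na"
        if ch in FRONT:
            return stem + "-nä"
    return stem + "-nä"
```

For a disharmonic loan like analyysi, this rule deterministically returns analyysi-nä (the last non-neutral vowel, y, is front), whereas speakers in fact vary between analyysi-na and analyysi-nä; capturing that variation statistically is exactly what Partial Order OT, Stochastic OT, and Maximum Entropy Grammar are designed to do.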