The subject of this article is vowel harmony. In its prototypical form, this phenomenon involves agreement among all vowels in a word for some phonological property (such as palatality, labiality, height, or tongue root position). This agreement is evidenced both by co-occurrence patterns within morphemes and by alternations in vowels when morphemes are combined into complex words, thus creating allomorphic alternations. Agreement involves one or more harmonic features that divide the vowels into two sets, such that each vowel in one set has a harmonic counterpart in the other. I will focus on vowels that fail to alternate and are thus neutral (either inherently or in a specific context); such vowels are either opaque or transparent to the process. I will compare approaches that use underspecification of binary features with approaches that use unary features. In vowel harmony, vowels act as triggers or targets, and specific conditions may apply to each. Vowel harmony can be bidirectional or unidirectional and can display either a root-control pattern or a dominant/recessive pattern.
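The alternation pattern just described can be sketched in a few lines of code. The snippet below is a toy illustration only: it assumes a Finnish-like system of back~front harmonic pairs, treats the neutral vowels as transparent, and implements a root-control pattern in which suffix vowels alternate to agree with the root. The pairings and function names are simplifying assumptions for exposition, not the analysis developed in the article.

```python
# Toy root-controlled backness harmony (Finnish-like; illustrative only).
# Each back vowel has a front harmonic counterpart.
PAIRS = {"a": "ä", "o": "ö", "u": "y"}   # back → front
BACK = set(PAIRS)                         # {a, o, u}
FRONT = set(PAIRS.values())               # {ä, ö, y}
# Neutral vowels i, e belong to neither set, so harmony skips over
# them: they are transparent in this sketch.

def root_harmony(root):
    """Classify a root as 'back' or 'front' from its last non-neutral
    vowel; an all-neutral root defaults to 'front' here."""
    for ch in reversed(root):
        if ch in BACK:
            return "back"
        if ch in FRONT:
            return "front"
    return "front"

def harmonize(root, suffix):
    """Root control: rewrite suffix vowels to agree with the root."""
    cls = root_harmony(root)
    table = {v: k for k, v in PAIRS.items()} if cls == "back" else dict(PAIRS)
    return root + "".join(table.get(ch, ch) for ch in suffix)
```

For example, `harmonize("talo", "ssä")` yields `talossa` with a back suffix, while `harmonize("kylä", "ssa")` yields `kylässä`; an all-neutral root such as `tili` takes front suffixes, showing how transparency falls out of neutral vowels belonging to neither harmonic set.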
Harry van der Hulst
How to properly analyze morpho-phonological processes has been an ongoing issue within generative linguistics. Morpho-phonological processes typically have exceptions, but they are nonetheless often productive. Such productive but exceptionful processes are difficult to analyze, since grammatical rules or constraints are normally invoked in the analysis of a productive pattern, whereas exceptions undermine the validity of those rules and constraints. In addition, the productivity of a morpho-phonological process may be gradient, possibly reflecting the relative frequency of the relevant pattern in the lexicon. Simple lexical listing of exceptions as suppletive forms would not be sufficient to capture such gradient productivity. It is thus necessary to posit grammatical rules or constraints even for exceptionful processes, as long as they are at least in part productive. Moreover, productivity can be correctly estimated only when the domain of rule application is correctly identified. Consequently, a morpho-phonological process cannot be properly analyzed unless we possess both the correct description of its application conditions and the appropriate stochastic grammatical mechanisms to capture its productivity. The same issues arise in the analysis of morpho-phonological processes in Korean, in particular n-insertion, sai-siot, and vowel harmony. These processes have many exceptions and variations, which make them look quite irregular and unpredictable. However, they have at least a certain degree of productivity. Moreover, the variable application of each process is still systematic in that various factors (phonological, morphosyntactic, sociolinguistic, and processing-related) contribute to the overall probability of rule application.
Crucially, grammatical rules and constraints, which have been proposed within generative linguistics to analyze categorical and exceptionless phenomena, may form an essential part of the analysis of the morpho-phonological processes in Korean. For an optimal analysis of each of the morpho-phonological processes in Korean, the correct conditions and domains for its application need to be identified first, and its exact productivity can then be measured. Finally, the appropriate stochastic grammatical mechanisms need to be found or developed in order to capture the measured productivity.
Daniel Currie Hall
The fundamental idea underlying the use of distinctive features in phonology is the proposition that the same phonetic properties that distinguish one phoneme from another also play a crucial role in accounting for phonological patterns. Phonological rules and constraints apply to natural classes of segments, expressed in terms of features, and involve mechanisms, such as spreading or agreement, that copy distinctive features from one segment to another. Contrastive specification builds on this by taking seriously the idea that phonological features are distinctive features. Many phonological patterns appear to be sensitive only to properties that crucially distinguish one phoneme from another, ignoring the same properties when they are redundant or predictable. For example, processes of voicing assimilation in many languages apply only to the class of obstruents, where voicing distinguishes phonemic pairs such as /t/ and /d/, and ignore sonorant consonants and vowels, which are predictably voiced. In theories of contrastive specification, features that do not serve to mark phonemic contrasts (such as [+voice] on sonorants) are omitted from underlying representations. Their phonological inertness thus follows straightforwardly from the fact that they are not present in the phonological system at the point at which the pattern applies, though the redundant features may subsequently be filled in either before or during phonetic implementation. In order to implement a theory of contrastive specification, it is necessary to have a means of determining which features are contrastive (and should thus be specified) and which ones are redundant (and should thus be omitted). A traditional and intuitive method involves looking for minimal pairs of phonemes: if [±voice] is the only property that can distinguish /t/ from /d/, then it must be specified on them. 
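The minimal-pair method can be made concrete in a short sketch. In the code below, a feature counts as contrastive on a pair of phonemes only if it is the sole feature on which the two differ; the feature values and function names are simplified illustrations, not a full feature system.

```python
# Sketch of the pairwise-comparison (minimal pair) method:
# a feature is contrastive on a pair iff it is the ONLY difference.
from itertools import combinations

def pairwise_contrasts(inventory):
    """inventory: dict phoneme -> dict of feature -> value.
    Returns the set of (phoneme, feature) pairs deemed contrastive."""
    contrastive = set()
    for p, q in combinations(inventory, 2):
        diffs = [f for f in inventory[p] if inventory[p][f] != inventory[q][f]]
        if len(diffs) == 1:                  # a true minimal pair
            contrastive.add((p, diffs[0]))
            contrastive.add((q, diffs[0]))
    return contrastive

# /t/ vs /d/: voicing is the sole difference, so it gets specified.
stops = {"t": {"voice": "-"}, "d": {"voice": "+"}}

# /i a u/: every pair differs in TWO features, so pairwise comparison
# marks nothing as contrastive in this sparse inventory.
vowels = {
    "i": {"low": "-", "back": "-", "round": "-"},
    "a": {"low": "+", "back": "+", "round": "-"},
    "u": {"low": "-", "back": "+", "round": "+"},
}
```

Running `pairwise_contrasts` on `stops` specifies [voice] on both /t/ and /d/, while on `vowels` it returns the empty set, which is exactly the shortcoming taken up next.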
This approach, however, often identifies too few contrastive features to distinguish the phonemes of an inventory, particularly when the phonetic space is sparsely populated. For example, in the common three-vowel inventory /i a u/, there is more than one property that could distinguish any two vowels: /i/ differs from /a/ in both place (front versus back or central) and height (high versus low), /a/ from /u/ in both height and rounding, and /u/ from /i/ in both rounding and place. Because pairwise comparison cannot identify any features as contrastive in such cases, much recent work in contrastive specification is instead based on a hierarchical sequencing of features, with specifications assigned by dividing the full inventory into successively smaller subsets. For example, if the inventory /i a u/ is first divided according to height, then /a/ is fully distinguished from the other two vowels by virtue of being low, and the second feature, either place or rounding, is contrastive only on the high vowels. Unlike pairwise comparison, this approach produces specifications that fully distinguish the members of the underlying inventory, while at the same time allowing for the possibility of cross-linguistic variation in the specifications assigned to similar inventories.
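The hierarchical, successive-division procedure can likewise be sketched in code. The feature order `["low", "round"]` for /i a u/ is one hypothetical choice among the possible orderings, and the helper names are invented for illustration; the point is that each feature is specified only on the subsets it actually divides.

```python
# Sketch of hierarchical (successive-division) specification: features
# apply in a fixed order, each splitting the current subset; a feature
# is specified only where it genuinely divides the subset.
def divide(subset, features, inventory, specs):
    """Recursively split `subset`, recording contrastive specifications."""
    if len(subset) <= 1 or not features:
        return
    feat, rest = features[0], features[1:]
    groups = {}
    for p in subset:
        groups.setdefault(inventory[p][feat], []).append(p)
    if len(groups) > 1:                  # feature divides this subset
        for value, members in groups.items():
            for p in members:
                specs[p][feat] = value
            divide(members, rest, inventory, specs)
    else:                                # redundant here: try next feature
        divide(subset, rest, inventory, specs)

def contrastive_specs(inventory, feature_order):
    specs = {p: {} for p in inventory}
    divide(list(inventory), feature_order, inventory, specs)
    return specs

# Full (surface) feature values for /i a u/; only some become specified.
vowels = {
    "i": {"low": "-", "round": "-"},
    "a": {"low": "+", "round": "-"},
    "u": {"low": "-", "round": "+"},
}
```

With the order `["low", "round"]`, /a/ ends up specified only as [+low], while [round] is contrastive only on the high vowels /i u/; reordering the features would yield different (but still fully distinguishing) specifications, mirroring the cross-linguistic variation mentioned above.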
Language is a system that maps meanings to forms, but the mapping is not always one-to-one. Variation means that one meaning corresponds to multiple forms, for example faster ~ more fast. The choice is not uniquely determined by the rules of the language, but is made by the individual at the time of performance (speaking, writing). Such choices abound in human language. They are usually not just a matter of free will, but involve preferences that depend on the context, including the phonological context. Phonological variation is a situation where the choice among expressions is phonologically conditioned, sometimes statistically, sometimes categorically. In this overview, we take a look at three studies of variable vowel harmony in three languages (Finnish, Hungarian, and Tommo So) formulated in three frameworks (Partial Order Optimality Theory, Stochastic Optimality Theory, and Maximum Entropy Grammar). For example, both Finnish and Hungarian have Backness Harmony: vowels must be all [+back] or all [−back] within a single word, with the exception of neutral vowels that are compatible with either. Surprisingly, some stems allow both [+back] and [−back] suffixes in free variation, for example, analyysi-na ~ analyysi-nä ‘analysis-ess’ (Finnish) and arzén-nak ~ arzén-nek ‘arsenic-dat’ (Hungarian). Several questions arise. Is the variation random or in some way systematic? Where is the variation possible? Is it limited to specific lexical items? Is the choice predictable to some extent? Are the observed statistical patterns dictated by universal constraints or learned from the ambient data? The analyses illustrate the usefulness of recent advances in the technological infrastructure of linguistics, in particular the constantly improving computational tools.
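As one concrete illustration of the third framework mentioned, a Maximum Entropy Grammar assigns each candidate a probability proportional to exp(−H), where H is the weighted sum of the candidate's constraint violations. The constraints, weights, and violation profiles below are invented for exposition and are not taken from the studies surveyed; they merely show how such a grammar derives a probability split between the two attested suffix variants of a vacillating stem.

```python
# Toy Maximum Entropy Grammar: probability of each candidate is
# exp(-score) / Z, where score = sum of (weight * violation count).
import math

def maxent_probs(candidates, weights):
    """candidates: dict cand -> dict constraint -> violation count.
    Returns dict cand -> probability under the MaxEnt grammar."""
    scores = {
        cand: sum(weights[c] * v for c, v in viols.items())
        for cand, viols in candidates.items()
    }
    z = sum(math.exp(-s) for s in scores.values())
    return {cand: math.exp(-s) / z for cand, s in scores.items()}

# Hypothetical constraints: AGREE-BACK penalizes a front suffix after a
# back stem vowel; LOCAL-FRONT penalizes a back suffix right after a
# neutral (front unrounded) vowel. Weights are invented for illustration.
weights = {"AGREE-BACK": 2.0, "LOCAL-FRONT": 1.5}
candidates = {
    "analyysi-na": {"AGREE-BACK": 0, "LOCAL-FRONT": 1},
    "analyysi-nä": {"AGREE-BACK": 1, "LOCAL-FRONT": 0},
}
probs = maxent_probs(candidates, weights)
```

Because both candidates violate some constraint, neither receives probability 1: with these toy weights the back variant is merely favored, which is the kind of systematic-but-variable outcome the frameworks above are designed to model, with the weights themselves learned from the ambient data.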