Article

Phonological learnability deals with the formal properties of phonological languages and grammars, which are combined with algorithms that attempt to learn the language-specific aspects of those grammars. The classical learning task can be outlined as follows: Beginning at a predetermined initial state, the learner is exposed to positive evidence of legal strings and structures from the target language, and its goal is to reach a predetermined end state, where the grammar will produce or accept all and only the target language’s strings and structures. In addition, a phonological learner must also acquire a set of language-specific representations for morphemes, words, and so on—and in many cases, the grammar and the representations must be acquired at the same time. Phonological learnability research seeks to determine how the architecture of the grammar, and the workings of an associated learning algorithm, influence success in completing this learning task, i.e., in reaching the end-state grammar. One basic question is about convergence: Is the learning algorithm guaranteed to converge on an end-state grammar, or will it never stabilize? Is there a class of initial states, or a kind of learning data (evidence), which can prevent a learner from converging? Next is the question of success: Assuming the algorithm will reach an end state, will it match the target? In particular, will the learner ever acquire a grammar that deems grammatical a superset of the target language’s legal outputs? How can the learner avoid such superset end-state traps? Are learning biases advantageous or even crucial to success? In assessing phonological learnability, the analyst also has many differences between potential learning algorithms to consider. At the core of any algorithm is its update rule, meaning its method(s) of changing the current grammar on the basis of evidence. Other key aspects of an algorithm include how it is triggered to learn, how it processes and/or stores the errors that it makes, and how it responds to noise or variability in the learning data. Ultimately, the choice of algorithm is also tied to the type of phonological grammar being learned, i.e., whether the generalizations being learned are couched within rules, features, parameters, constraints, rankings, and/or weightings.
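
For concreteness only, and not as part of the article itself, the sketch below illustrates one common form of update rule: an error-driven, perceptron-style adjustment of constraint weights of the kind used in Harmonic Grammar learning models. The constraint names, candidates, and learning rate are hypothetical placeholders.

    # Illustrative sketch (not from the article): an error-driven update rule
    # for a weighted-constraint (Harmonic Grammar-style) learner.
    # Constraint names, candidates, and the learning rate are hypothetical.

    def harmony(weights, violations):
        """Harmony of a candidate: the negative weighted sum of its violations."""
        return -sum(weights[c] * v for c, v in violations.items())

    def update(weights, observed, learner_choice, rate=0.1):
        """If the learner's preferred candidate differs from the observed form,
        shift weights so that the observed form becomes more harmonic."""
        if learner_choice == observed:
            return weights  # no error, so no change to the grammar
        new_weights = dict(weights)
        for c in weights:
            # Promote constraints violated more by the learner's wrong winner,
            # demote constraints violated more by the observed (target) form.
            diff = learner_choice[1].get(c, 0) - observed[1].get(c, 0)
            new_weights[c] = max(0.0, new_weights[c] + rate * diff)
        return new_weights

    # Toy run: the learner initially prefers deleting the coda ("pa") over the
    # observed target "pat"; the update nudges the weights toward the target.
    weights = {"NoCoda": 2.0, "Max": 1.0}
    observed = ("pat", {"NoCoda": 1})
    rival = ("pa", {"Max": 1})
    winner = max([observed, rival], key=lambda cand: harmony(weights, cand[1]))
    weights = update(weights, observed, winner)

Repeated over a stream of learning data, updates of this general kind are what the convergence and superset questions above are asked about.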

Article

We discuss here the considerable amount of phonological variation and change in European French in the varieties spoken in France, Belgium, and Switzerland, the major francophone countries of Europe. The data discussed here derive from the perceptual and especially behavioral studies that have sought to extend the Labovian paradigm beyond Anglo-American variable linguistic phenomena to bear upon Romance. Regarding France, what emerges is a surprisingly high degree of uniformity in pronunciation, at least over the non-southern part of the country, and most Southern French varieties are also showing convergence to the Parisian norm. Pockets of resistance to this tendency are nevertheless observable. The Belgian and Swiss situations have in common the looming presence of a supralocal and indeed supranational norm playing a role often attested in other discussions of standard or legitimized languages, that of the variety representing what commonly corresponds to the nonlocal. Indeed, it may be that Belgium and Switzerland typify the local–standard relation most often reported, while the French situation, because of its relatively leveled character, is less easily described as one of standardization.

Article

Diane Brentari, Jordan Fenlon, and Kearsy Cormier

Sign language phonology is the abstract grammatical component where primitive structural units are combined to create an infinite number of meaningful utterances. Although the notion of phonology is traditionally based on sound systems, phonology also includes the equivalent component of the grammar in sign languages, because it is tied to the grammatical organization, and not to particular content. This definition of phonology helps us see that the term covers all phenomena organized by constituents such as the syllable, the phonological word, and the higher-level prosodic units, as well as the structural primitives such as features, timing units, and autosegmental tiers, and it does not matter if the content is vocal or manual. Therefore, the units of sign language phonology and their phonotactics provide opportunities to observe the interaction between phonology and other components of the grammar in a different communication channel, or modality. This comparison allows us to better understand how the modality of a language influences its phonological system.

Article

A number of recent developments in phonological theory, beginning with The Sound Pattern of English, are particularly relevant to the phonology of compounds. They address both the phonological phenomena that apply to compound words and the phonological structures that are required as the domains of these phenomena: segmental and nonsegmental phenomena that operate within each member of a compound separately, as well as at the juncture between the members of compounds and throughout compounds as a whole. In all cases, what is crucial for the operation of the phonological phenomena of compounds is phonological structure, in terms of constituents of the Prosodic Hierarchy, as opposed to morphosyntactic structure. Specifically, only two phonological constituents are required, the Phonological Word, which provides the domain for phenomena that apply to the individual members of compounds and at their junctures, and a larger constituent that groups the members of compounds together. The nature of the latter is somewhat controversial, the main issue being whether or not there is a constituent in the Prosodic Hierarchy between the Phonological Word and the Phonological Phrase. When present, this constituent, the Composite Group (revised from the original Clitic Group), includes the members of compounds, as well as “stray” elements such as clitics and “Level 2” affixes. In its absence, compounds, and often the same “stray” elements, are analyzed as a type of Recursive Phonological Word, although crucially, the combinations of such elements do not exhibit the same properties as the basic Phonological Word.

Article

Gothic  

D. Gary Miller

This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Linguistics. Apart from runic inscriptions, Gothic is the earliest attested language of the Germanic family, dating to the 4th century. Along with Crimean Gothic, it belongs to the branch known as East Germanic. The bulk of the extant Gothic corpus is a translation of the Bible, of which only a portion remains. The translation is traditionally ascribed to Wulfila, who is credited with inventing the Gothic alphabet. The many Greek conventions both help and hinder interpretation of the Gothic phonological system. As in Greek, letters of the alphabet functioned as numerals, but the late letter names were from runic. Gothic inflectional categories include nouns, adjectives, and verbs. Nouns are inflected for three genders, two numbers, and four cases. Various stem types inherited from Indo-European constitute different form classes in Gothic. Adjectives have the same properties and are also inflected according to so-called weak and strong forms, as are Gothic verbs. Verbs are inflected for three persons and numbers, an indicative and a nonindicative mood (here called “optative”), past and nonpast tense, and voice. The mediopassive survives in Gothic morphologically as a synthetic passive and syntactically in innovated periphrastic formations; middle and anticausative functions were taken over by reflexive-type structures. Nonfinite forms are the infinitive, the imperative, and two participles. In syntax, Gothic had null subjects as an option, mostly in the third person singular. Aspect was effected primarily by prefixes, which have many other functions, and aspect is not consistently indicated. Absolute constructions with a participle occurred in various cases with functional differences. Relativization was effected primarily by relative pronouns built on demonstratives plus a complementizer. Complementizers could be used with subordinate clause verbs in the indicative or optative. The switch to the optative was triggered by irrealis, matrix verbs that do not permit a full range of subordinate tenses, expression of a hope or wish, potentiality, and several other conditions. Many of these are also relevant to matrix clauses (independent optatives). Essentials of linearization include prepositional phrases, default postposed genitives and possessive adjectives, and preposed demonstratives. Verb-object order predominates, but there is much Greek influence. Verb-auxiliary order is native Gothic.

Article

Child phonological templates are idiosyncratic word production patterns. They can be understood as deriving, through generalization of patterning, from the very first words of the child, which are typically close in form to their adult targets. Templates can generally be identified only some time after a child’s first 20–50 words have been produced but before the child has achieved an expressive lexicon of 200 words. The templates appear to serve as a kind of ‘holding strategy’, a way for children to produce more complex adult word forms while remaining within the limits imposed by the articulatory, planning, and memory limitations of the early word period. Templates have been identified in the early words of children acquiring a number of languages, although not all children give clear evidence of using them. Within a given language we see a range of different templatic patterns, but these are nevertheless broadly shaped by the prosodic characteristics of the adult language as well as by the idiosyncratic production preferences of a given child; it is thus possible to begin to outline a typology of child templates. However, the evidence base for most languages remains small, ranging from individual diary studies to rare longitudinal studies of as many as 30 children. Thus templates undeniably play a role in phonological development, but their extent of use or generality remains unclear, their timing for the children who show them is unpredictable, and their period of sway is typically brief—a matter of a few weeks or months at most. Finally, the formal status and relationship of child phonological templates to adult grammars has so far received relatively little attention, but the closest parallels may lie in active novel word formation and in the lexicalization of commonly occurring expressions, both of which draw, like child templates, on the mnemonic effects of repetition.

Article

Bracketing paradoxes—constructions whose morphosyntactic and morpho-phonological structures appear to be irreconcilably at odds (e.g., unhappier)—are unanimously taken to point to truths about the derivational system that we have not yet grasped. Consider that the prefix un- must be structurally separate in some way from happier both for its own reasons (its [n] surprisingly does not assimilate in Place to a following consonant (e.g., u[n]popular)), and for reasons external to the prefix (the suffix -er must be insensitive to the presence of un-, as the comparative cannot attach to bases of three syllables or longer (e.g., *intelligenter)). But, un- must simultaneously be present in the derivation before -er is merged, so that unhappier can have the proper semantic reading (‘more unhappy’, and not ‘not happier’). Bracketing paradoxes emerged as a problem for generative accounts of both morphosyntax and morphophonology only in the 1970s. With the rise of restrictions on, and technology used to describe and represent, the behavior of affixes (e.g., the Affix-Ordering Generalization, Lexical Phonology and Morphology, the Prosodic Hierarchy), morphosyntacticians and phonologists were confronted with this type of inconsistent derivation in many unrelated languages.

Article

Yvan Rose

Child phonology refers to virtually every phonetic and phonological phenomenon observable in the speech productions of children, including babbles. This includes qualitative and quantitative aspects of babbled utterances as well as all behaviors such as the deletion or modification of the sounds and syllables contained in the adult (target) forms that the child is trying to reproduce in his or her spoken utterances. This research is also increasingly concerned with issues in speech perception, a field of investigation that has traditionally followed its own course; it is only recently that the two fields have started to converge. The recent history of research on child phonology, the theoretical approaches and debates surrounding it, as well as the research methods and resources that have been employed to address these issues empirically, parallel the evolution of phonology, phonetics, and psycholinguistics as general fields of investigation. Child phonology contributes important observations, often organized in terms of developmental time periods, which can extend from the child’s earliest babbles to the stage when he or she masters the sounds, sound combinations, and suprasegmental properties of the ambient (target) language. Central debates within the field of child phonology concern the nature and origins of phonological representations as well as the ways in which they are acquired by children. Since the mid-1900s, the most central approaches to these questions have tended to fall on either side of the general divide between generative and functionalist (usage-based) approaches to phonology. Traditionally, generative approaches have embraced a universal stance on phonological primitives and their organization within hierarchical phonological representations, assumed to be innately available as part of the human language faculty. In contrast to this, functionalist approaches have utilized flatter (non-hierarchical) representational models and rejected nativist claims about the origin of phonological constructs. Since the beginning of the 1990s, this divide has been blurred significantly, both through the elaboration of constraint-based frameworks that incorporate phonetic evidence, from both speech perception and production, as part of accounts of phonological patterning, and through the formulation of emergentist approaches to phonological representation. Within this context, while controversies remain concerning the nature of phonological representations, debates are fueled by new outlooks on factors that might affect their emergence, including the types of learning mechanisms involved, the nature of the evidence available to the learner (e.g., perceptual, articulatory, and distributional), as well as the extent to which the learner can abstract away from this evidence. In parallel, recent advances in computer-assisted research methods and data availability, especially within the context of the PhonBank project, offer researchers unprecedented support for large-scale investigations of child language corpora. This combination of theoretical and methodological advances provides new and fertile grounds for research on child phonology and related implications for phonological theory.

Article

Steven Moran

A phonological inventory is a repertoire of contrastive articulatory or manual gestures shared by a community of users. Whether spoken or signed, all human languages have a phonological inventory. In spoken languages, the phonological inventory comprises a set of segments (consonants and vowels) and suprasegmentals (stress and intonation) that are linguistically contrastive, either lexically or grammatically, in a particular language or one of its dialects. Sign languages also have phonological inventories, which include a set of linguistically contrastive signs made from movement, hand shape, and location. The study of phonological inventories is interesting because they tell us about the distribution, frequency, and diversity of gestures that individuals acquire and produce in the world’s 7,000 or so languages. Their study has also raised important empirical questions about the comparability of linguistic concepts across different languages and modalities, in the use of statistics and sampling in quantitative approaches to comparative linguistics, and in the study of language ontogeny and phylogeny over the course of language evolution. As such, some recent research highlights include the following: quantitative approaches suggest causal relationships between phonological inventory composition and gene-culture and language-environment coevolution; the study of de novo sign languages provides important insights into the emergence of phonology; and comparative animal communication studies suggest evolutionary speech precursors in phonological repertoires of nonhuman primates, and potentially in extinct hominids including Neanderthals.

Article

Paul de Lacy

Phonology has both a taxonomic/descriptive and cognitive meaning. In the taxonomic/descriptive context, it refers to speech sound systems. As a cognitive term, it refers to a part of the brain’s ability to produce and perceive speech sounds. This article focuses on research in the cognitive domain. The brain does not simply record speech sounds and “play them back.” It abstracts over speech sounds, and transforms the abstractions in nontrivial ways. Phonological cognition is about what those abstractions are, and how they are transformed in perception and production. There are many theories about phonological cognition. Some theories see it as the result of domain-general mechanisms, such as analogy over a Lexicon. Other theories locate it in an encapsulated module that is genetically specified, and has innate propositional content. In production, this module takes as its input phonological material from a Lexicon, and refers to syntactic and morphological structure in producing an output, which involves nontrivial transformation. In some theories, the output is instructions for articulator movement, which result in speech sounds; in other theories, the output goes to the Phonetic module. In perception, a continuous acoustic signal is mapped onto a phonetic representation, which is then mapped onto underlying forms via the Phonological module, which are then matched to lexical entries. Exactly which empirical phenomena phonological cognition is responsible for depends on the theory. At one extreme, it accounts for all human speech sound patterns and realization. At the other extreme, it is little more than a way of abstracting over speech sounds. In the most popular Generative conception, it explains some sound patterns, with other modules (e.g., the Lexicon and Phonetic module) accounting for others. There are many types of patterns, with names such as “assimilation,” “deletion,” and “neutralization”—a great deal of phonological research focuses on determining which patterns there are, which aspects are universal and which are language-particular, and whether/how phonological cognition is responsible for them. Phonological computation connects with other cognitive structures. In the Generative T-model, the phonological module’s input includes morphs of Lexical items along with at least some morphological and syntactic structure; the output is sent to either a Phonetic module, or directly to the neuro-motor interface, resulting in articulator movement. However, other theories propose that these modules’ computation proceeds in parallel, and that there is bidirectional communication between them. The study of phonological cognition is a young science, so many fundamental questions remain to be answered. There are currently many different theories, and theoretical diversity over the past few decades has increased rather than consolidated. In addition, new research methods have been developed and older ones have been refined, providing novel sources of evidence. Consequently, phonological research is both lively and challenging, and is likely to remain that way for some time to come.

Article

William R. Leben

Autosegments were introduced by John Goldsmith in his 1976 M.I.T. dissertation to represent tone and other suprasegmental phenomena. Goldsmith’s intuition, embodied in the term he created, was that autosegments constituted an independent, conceptually equal tier of phonological representation, with both tiers realized simultaneously like the separate voices in a musical score. The analysis of suprasegmentals came late to generative phonology, even though it had been tackled in American structuralism with the long components of Harris’s 1944 article, “Simultaneous components in phonology,” and despite being a particular focus of Firthian prosodic analysis. The standard version of generative phonology of the era (Chomsky and Halle’s The Sound Pattern of English) made no special provision for phenomena that had been labeled suprasegmental or prosodic by earlier traditions. An early sign that tones required a separate tier of representation was the phenomenon of tonal stability. In many tone languages, when vowels are lost historically or synchronically, their tones remain. The behavior of contour tones in many languages also falls into place when the contours are broken down into sequences of level tones on an independent level of representation. The autosegmental framework captured this naturally, since a sequence of elements on one tier can be connected to a single element on another. But the single most compelling aspect of the early autosegmental model was a natural account of tone spreading, a very common process that was only awkwardly captured by rules of whatever sort. Goldsmith’s autosegmental solution was the Well-Formedness Condition, requiring, among other things, that every tone on the tonal tier be associated with some segment on the segmental tier, and vice versa. Tones thus spread more or less automatically to segments lacking them. The Well-Formedness Condition, at the very core of the autosegmental framework, was a rare constraint, posited nearly two decades before Optimality Theory. One-to-many associations and spreading onto adjacent elements are characteristic of tone but not confined to it. Similar behaviors are widespread in long-distance phenomena, including intonation, vowel harmony, and nasal prosodies, as well as more locally with partial or full assimilation across adjacent segments. The early autosegmental notion of tiers of representation that were distinct but conceptually equal soon gave way to a model with one basic tier connected to tiers for particular kinds of articulation, including tone and intonation, nasality, vowel features, and others. This has led to hierarchical representations of phonological features in current models of feature geometry, replacing the unordered distinctive feature matrices of early generative phonology. Autosegmental representations and processes also provide a means of representing non-concatenative morphology, notably the complex interweaving of roots and patterns in Semitic languages. Later work modified many of the key properties of the autosegmental model. Optimality Theory has led to a radical rethinking of autosegmental mapping, delinking, and spreading as they were formulated under the earlier derivational paradigm.

Article

Marie K. Huffman

Articulatory phonetics is concerned with the physical mechanisms involved in producing spoken language. A fundamental goal of articulatory phonetics is to relate linguistic representations to articulator movements in real time and the consequent acoustic output that makes speech a medium for information transfer. Understanding the overall process requires an appreciation of the aerodynamic conditions necessary for sound production and the way that the various parts of the chest, neck, and head are used to produce speech. One descriptive goal of articulatory phonetics is the efficient and consistent description of the key articulatory properties that distinguish sounds used contrastively in language. There is fairly strong consensus in the field about the inventory of terms needed to achieve this goal. Despite this common, segmental, perspective, speech production is essentially dynamic in nature. Much remains to be learned about how the articulators are coordinated for production of individual sounds and how they are coordinated to produce sounds in sequence. Cutting across all of these issues is the broader question of which aspects of speech production are due to properties of the physical mechanism and which are the result of the nature of linguistic representations. A diversity of approaches is used to try to tease apart the physical and the linguistic contributions to the articulatory fabric of speech sounds in the world’s languages. A variety of instrumental techniques are currently available, and improvement in safe methods of tracking articulators in real time promises to soon bring major advances in our understanding of how speech is produced.

Article

The non–Pama-Nyungan Tangkic languages were spoken until recently in the southern Gulf of Carpentaria, Australia. The most extensively documented are Lardil, Kayardild, and Yukulta. Their phonology is notable for its opaque, word-final deletion rules and extensive word-internal sandhi processes. The morphology contains complex relationships between sets of forms and sets of functions, due in part to major historical refunctionalizations, which have converted case markers into markers of tense and complementization and verbal suffixes into case markers. Syntactic constituency is often marked by inflectional concord, resulting frequently in affix stacking. Yukulta in particular possesses a rich set of inflection-marking possibilities for core arguments, including detransitivized configurations and an inverse system. These relate in interesting ways historically to argument marking in Lardil and Kayardild. Subordinate clauses are marked for tense across most constituents other than the subject, and such tense marking is also found in main clauses in Lardil and Kayardild, which have lost the agreement and tense-marking second-position clitic of Yukulta. Under specific conditions of co-reference between matrix and subordinate arguments, and under certain discourse conditions, clauses may be marked, on all or almost all words, by complementization markers, in addition to inflection for case and tense.

Article

Reduplication is a word-formation process in which all or part of a word is repeated to convey some form of meaning. A wide range of patterns are found in terms of both the form and meaning expressed by reduplication, making it one of the most studied phenomena in phonology and morphology. Because the form always varies, depending on the base to which it is attached, it raises many issues such as the nature of the repetition mechanism, how to represent reduplicative morphemes, and whether or not a unified approach can be proposed to account for the full range of patterns.

Article

Brazilian Portuguese is the native language of more than 200 million people living in Brazil. Spoken in South America since around the year 1500, Brazilian Portuguese has peculiar phonological traits, many of them variable. The extensive language contact that has taken place in Brazil caused Brazilian Portuguese to break up into regional dialects. Various phonological processes affect Brazilian Portuguese at the segmental and suprasegmental levels. Some of the processes target consonants, such as the regressive palatalization of /t, d/, the fricatization of /r/ in syllabic onset; some processes target vowels, such as the raising and lowering of unstressed /e, o/ vowels; others target the intonation of utterances, such as the rising of the nuclear stress of yes–no questions. The results of several empirical studies on varieties of Brazilian Portuguese show that not all of the processes correspond to change in progress in Brazilian Portuguese; some of them are stable variables. They also show that not every variable is present in all dialects and that some variables are socially salient and stigmatized. Compared to present European Portuguese, the phonology of Brazilian Portuguese seems to be conservative in some aspects, such as in the raising of vowels in unstressed, word-final syllables; innovative in others, such as in the vocalization of /l/ in syllabic coda.

Article

Martin Maiden

Dalmatian is an extinct group of Romance varieties spoken on the eastern Adriatic seaboard, best known from its Vegliote variety, spoken on the island of Krk (also called Veglia). Vegliote is principally represented by the linguistic testimony of its last speaker, Tuone Udaina, who died at the end of the 19th century. By the time Udaina’s Vegliote could be explored by linguists (principally by Matteo Bartoli), it seems that he had no longer actively spoken the language for decades, and his linguistic testimony is imperfect, in that it is influenced for example by the Venetan dialect that he habitually spoke. Nonetheless, his Vegliote reveals various distinctive and recurrent linguistic traits, notably in the domain of phonology (for example, pervasive and complex patterns of vowel diphthongization) and morphology (notably a general collapse of the general Romance inflexional system of tense and mood morphology, but also an unusual type of synthetic future form).

Article

Geoffrey K. Pullum

English is both the most studied of the world’s languages and the most widely used. It comes closer than any other language to functioning as a world communication medium and is very widely used for governmental purposes. This situation is the result of a number of historical accidents of different magnitudes. The linguistic properties of the language itself would not have motivated its choice (contra the talk of prescriptive usage writers who stress the clarity and logic that they believe English to have). Divided into multiple dialects, English has a phonological system involving remarkably complex consonant clusters and a large inventory of distinct vowel nuclei; a bad, confusing, and hard-to-learn alphabetic orthography riddled with exceptions, ambiguities, and failures of the spelling to correspond to the pronunciation; a morphology that is rather more complex than is generally appreciated, with seven or eight paradigm patterns and a couple of hundred irregular verbs; a large multilayered lexicon containing roots of several quite distinct historical sources; and a syntax that despite its very widespread SVO (Subject-Verb-Object) basic order in the clause is replete with tricky details. For example, there are crucial restrictions on government of prepositions, many verb-preposition idioms, subtle constraints on the intransitive prepositions known as “particles,” an important distinction between two (or under a better analysis, three) classes of verb that actually have different syntax, and a host of restrictions on the use of its crucial “wh-words.” It is only geopolitical and historical accidents that have given English its enormous importance and prestige in the world, not its inherent suitability for its role.

Article

Alexis Michaud and Bonny Sands

Tonogenesis is the development of distinctive tone from earlier non-tonal contrasts. A well-understood case is Vietnamese (similar in its essentials to that of Chinese and many languages of the Tai-Kadai and Hmong-Mien language families), where the loss of final laryngeal consonants led to the creation of three tones, and the tones later multiplied as voicing oppositions on initial consonants waned. This is by no means the only attested diachronic scenario, however. Besides well-known cases of tonogenesis in East Asia, this survey includes discussions of less well-known cases of tonogenesis from language families including Athabaskan, Chadic, Khoe and Niger-Congo. There is tonogenetic potential in various series of phonemes: glottalized versus plain consonants, unvoiced versus voiced, aspirated versus unaspirated, geminates versus simple (and, more generally, tense versus lax), and even among vowels, whose intrinsic fundamental frequency can transphonologize to tone. We draw attention to tonogenetic triggers that are not so well-known, such as [+ATR] vowels, aspirates and morphotonological alternations. The ways in which these common phonetic precursors to tone play out in a given language depend on phonological factors, as well as on other dimensions of a language’s structure and on patterns of language contact, resulting in a great diversity of evolutionary paths in tone systems. In some language families (such as Niger-Congo and Khoe), recent tonal developments are increasingly well understood, but working out the origin of the earliest tonal contrasts (which are likely to date back thousands of years earlier than tonogenesis among Sino-Tibetan languages, for instance) remains a mid- to long-term research goal for comparative-historical research.

Article

Jacques Durand

Corpus Phonology is an approach to phonology that places corpora at the center of phonological research. Some practitioners of corpus phonology see corpora as the only object of investigation; others use corpora alongside other available techniques (for instance, intuitions, psycholinguistic and neurolinguistic experimentation, laboratory phonology, the study of the acquisition of phonology or of language pathology, etc.). Whatever version of corpus phonology one advocates, corpora have become part and parcel of the modern research environment, and their construction and exploitation have been modified by the multidisciplinary advances made within various fields. Indeed, for the study of spoken usage, the term ‘corpus’ should nowadays only be applied to bodies of data meeting certain technical requirements, even if corpora of spoken usage are by no means new and date back to the birth of recording techniques. It is therefore essential to understand what criteria must be met by a modern corpus (quality of recordings, diversity of speech situations, ethical guidelines, time-alignment with transcriptions and annotations, etc.) and what tools are available to researchers. Once these requirements are met, the way is open to varying and possibly conflicting uses of spoken corpora by phonological practitioners. A traditional stance in theoretical phonology sees the data as a degenerate version of a more abstract underlying system, but more and more researchers within various frameworks (e.g., usage-based approaches, exemplar models, stochastic Optimality Theory, sociophonetics) are constructing models that tightly bind phonological competence to language use, rely heavily on quantitative information, and attempt to account for intra-speaker and inter-speaker variation. This renders corpora essential to phonological research and not a mere adjunct to the phonological description of the languages of the world.
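
As a purely illustrative sketch, not drawn from the article, the following shows a minimal way of representing the kind of time-aligned transcription tier that such technical requirements imply, with every label anchored to start and end times in the recording; the tier contents, labels, and times are hypothetical.

    # Illustrative sketch only: a minimal time-aligned annotation tier of the
    # kind modern spoken corpora require. Labels and times are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Interval:
        start: float  # seconds into the recording
        end: float
        label: str    # e.g., a phone or word transcription

    # A "phones" tier: each transcribed segment is aligned to the signal.
    phones = [
        Interval(0.00, 0.08, "p"),
        Interval(0.08, 0.21, "a"),
        Interval(0.21, 0.30, "t"),
    ]

    def labels_at(tier, t):
        """Return the label(s) whose interval spans time t (in seconds)."""
        return [iv.label for iv in tier if iv.start <= t < iv.end]

    print(labels_at(phones, 0.10))  # ['a']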

Article

Marianne Pouplier

One of the most fundamental problems in research on spoken language is to understand how the categorical, systemic knowledge that speakers have in the form of a phonological grammar maps onto the continuous, high-dimensional physical speech act that transmits the linguistic message. The invariant units of phonological analysis have no invariant analogue in the signal—any given phoneme can manifest itself in many possible variants, depending on context, speech rate, utterance position and the like, and the acoustic cues for a given phoneme are spread out over time across multiple linguistic units. Speakers and listeners are highly knowledgeable about the lawfully structured variation in the signal and they skillfully exploit articulatory and acoustic trading relations when speaking and perceiving. For the scientific description of spoken language understanding this association between abstract, discrete categories and continuous speech dynamics remains a formidable challenge. Articulatory Phonology and the associated Task Dynamic model present one particular proposal on how to step up to this challenge using the mathematics of dynamical systems with the central insight being that spoken language is fundamentally based on the production and perception of linguistically defined patterns of motion. In Articulatory Phonology, primitive units of phonological representation are called gestures. Gestures are defined based on linear second order differential equations, giving them inherent spatial and temporal specifications. Gestures control the vocal tract at a macroscopic level, harnessing the many degrees of freedom in the vocal tract into low-dimensional control units. Phonology, in this model, thus directly governs the spatial and temporal orchestration of vocal tract actions.
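
For readers unfamiliar with the formalism, the point-attractor dynamics standardly associated with gestures in the Task Dynamic model can be written as a damped mass-spring equation; the notation below follows common presentations of the framework rather than the article itself:

    \[ m\ddot{x} + b\dot{x} + k(x - x_0) = 0 \]

Here x is a tract variable (e.g., lip aperture), x_0 is the gesture's target value, and m, b, and k are mass, damping, and stiffness parameters. With critical damping, the tract variable moves smoothly toward the target without oscillating, which is what gives each gesture its inherent spatial (target) and temporal (stiffness-dependent) specification.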