
Article

Phonological learnability deals with the formal properties of phonological languages and grammars, which are combined with algorithms that attempt to learn the language-specific aspects of those grammars. The classical learning task can be outlined as follows: Beginning at a predetermined initial state, the learner is exposed to positive evidence of legal strings and structures from the target language, and its goal is to reach a predetermined end state, where the grammar will produce or accept all and only the target language’s strings and structures. In addition, a phonological learner must also acquire a set of language-specific representations for morphemes, words, and so on—and in many cases, the grammar and the representations must be acquired at the same time. Phonological learnability research seeks to determine how the architecture of the grammar, and the workings of an associated learning algorithm, influence success in completing this learning task, i.e., in reaching the end-state grammar. One basic question is about convergence: Is the learning algorithm guaranteed to converge on an end-state grammar, or will it never stabilize? Is there a class of initial states, or a kind of learning data (evidence), which can prevent a learner from converging? Next is the question of success: Assuming the algorithm will reach an end state, will it match the target? In particular, will the learner ever acquire a grammar that deems grammatical a superset of the target language’s legal outputs? How can the learner avoid such superset end-state traps? Are learning biases advantageous or even crucial to success? In assessing phonological learnability, the analyst also has many differences between potential learning algorithms to consider. At the core of any algorithm is its update rule, meaning its method(s) of changing the current grammar on the basis of evidence.
Other key aspects of an algorithm include how it is triggered to learn, how it processes and/or stores the errors that it makes, and how it responds to noise or variability in the learning data. Ultimately, the choice of algorithm is also tied to the type of phonological grammar being learned, i.e., whether the generalizations being learned are couched within rules, features, parameters, constraints, rankings, and/or weightings.
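The update rules surveyed above can be illustrated with a small sketch. The code below implements one well-known family of update rules, the error-driven, perceptron-style weight update used in Harmonic Grammar learning; the constraint names (*CODA, MAX) and violation profiles are hypothetical, chosen only to show the mechanics, not drawn from any particular analysis.

```python
# A minimal error-driven learner with a perceptron-style update rule,
# as in Harmonic Grammar learning. Constraints and data are hypothetical.

def harmony(weights, violations):
    """Harmony of a candidate: the negative weighted sum of its violations."""
    return -sum(w * v for w, v in zip(weights, violations))

def update(weights, winner_viols, loser_viols, rate=1.0):
    """On an error, nudge weights toward preferring the target (winner):
    raise constraints the loser violates more, lower those the winner
    violates more."""
    return [w + rate * (l - t)
            for w, t, l in zip(weights, winner_viols, loser_viols)]

# Two hypothetical constraints, in the order [*CODA, MAX].
weights = [0.0, 0.0]
winner = [1, 0]   # target form keeps its coda (violates *CODA)
loser  = [0, 1]   # learner's current output deletes it (violates MAX)

# Keep updating until the grammar prefers the target.
while harmony(weights, winner) <= harmony(weights, loser):
    weights = update(weights, winner, loser)

print(weights)  # → [-1.0, 1.0]: MAX now outweighs *CODA
```

Convergence here takes a single update; iterated over many winner/loser pairs, the same loop is guaranteed to converge whenever a separating weight vector exists, which is one reason perceptron-style updates figure prominently in learnability proofs.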

Article

Diane Brentari, Jordan Fenlon, and Kearsy Cormier

Sign language phonology is the abstract grammatical component where primitive structural units are combined to create an infinite number of meaningful utterances. Although the notion of phonology is traditionally based on sound systems, phonology also includes the equivalent component of the grammar in sign languages, because it is tied to the grammatical organization, and not to particular content. This definition of phonology helps us see that the term covers all phenomena organized by constituents such as the syllable, the phonological word, and the higher-level prosodic units, as well as the structural primitives such as features, timing units, and autosegmental tiers, and it does not matter if the content is vocal or manual. Therefore, the units of sign language phonology and their phonotactics provide opportunities to observe the interaction between phonology and other components of the grammar in a different communication channel, or modality. This comparison allows us to better understand how the modality of a language influences its phonological system.

Article

Child phonological templates are idiosyncratic word production patterns. They can be understood as deriving, through generalization of patterning, from the very first words of the child, which are typically close in form to their adult targets. Templates can generally be identified only some time after a child’s first 20–50 words have been produced but before the child has achieved an expressive lexicon of 200 words. The templates appear to serve as a kind of ‘holding strategy’, a way for children to produce more complex adult word forms while remaining within the limits imposed by the articulatory, planning, and memory limitations of the early word period. Templates have been identified in the early words of children acquiring a number of languages, although not all children give clear evidence of using them. Within a given language we see a range of different templatic patterns, but these are nevertheless broadly shaped by the prosodic characteristics of the adult language as well as by the idiosyncratic production preferences of a given child; it is thus possible to begin to outline a typology of child templates. However, the evidence base for most languages remains small, ranging from individual diary studies to rare longitudinal studies of as many as 30 children. Thus templates undeniably play a role in phonological development, but their extent of use or generality remains unclear, their timing for the children who show them is unpredictable, and their period of sway is typically brief—a matter of a few weeks or months at most. Finally, the formal status and relationship of child phonological templates to adult grammars has so far received relatively little attention, but the closest parallels may lie in active novel word formation and in the lexicalization of commonly occurring expressions, both of which draw, like child templates, on the mnemonic effects of repetition.

Article

Bracketing paradoxes—constructions whose morphosyntactic and morpho-phonological structures appear to be irreconcilably at odds (e.g., unhappier)—are unanimously taken to point to truths about the derivational system that we have not yet grasped. Consider that the prefix un- must be structurally separate in some way from happier both for its own reasons (its [n] surprisingly does not assimilate in Place to a following consonant (e.g., u[n]popular)), and for reasons external to the prefix (the suffix -er must be insensitive to the presence of un-, as the comparative cannot attach to bases of three syllables or longer (e.g., *intelligenter)). But un- must simultaneously be present in the derivation before -er is merged, so that unhappier can have the proper semantic reading (‘more unhappy’, and not ‘not happier’). Bracketing paradoxes emerged as a problem for generative accounts of both morphosyntax and morphophonology only in the 1970s. With the rise of restrictions on, and formal technology for describing and representing, the behavior of affixes (e.g., the Affix-Ordering Generalization, Lexical Phonology and Morphology, the Prosodic Hierarchy), morphosyntacticians and phonologists were confronted with this type of inconsistent derivation in many unrelated languages.

Article

Reduplication is a word-formation process in which all or part of a word is repeated to convey some form of meaning. A wide range of patterns are found in terms of both the form and meaning expressed by reduplication, making it one of the most studied phenomena in phonology and morphology. Because the form of the reduplicant always varies depending on the base to which it is attached, reduplication raises many issues such as the nature of the repetition mechanism, how to represent reduplicative morphemes, and whether or not a unified approach can be proposed to account for the full range of patterns.

Article

Yvan Rose

Child phonology refers to virtually every phonetic and phonological phenomenon observable in the speech productions of children, including babbles. This includes qualitative and quantitative aspects of babbled utterances as well as all behaviors such as the deletion or modification of the sounds and syllables contained in the adult (target) forms that the child is trying to reproduce in his or her spoken utterances. This research is also increasingly concerned with issues in speech perception, a field of investigation that has traditionally followed its own course; it is only recently that the two fields have started to converge. The recent history of research on child phonology, the theoretical approaches and debates surrounding it, as well as the research methods and resources that have been employed to address these issues empirically, parallel the evolution of phonology, phonetics, and psycholinguistics as general fields of investigation. Child phonology contributes important observations, often organized in terms of developmental time periods, which can extend from the child’s earliest babbles to the stage when he or she masters the sounds, sound combinations, and suprasegmental properties of the ambient (target) language. Central debates within the field of child phonology concern the nature and origins of phonological representations as well as the ways in which they are acquired by children. Since the mid-1900s, the most central approaches to these questions have tended to fall on each side of the general divide between generative vs. functionalist (usage-based) approaches to phonology. Traditionally, generative approaches have embraced a universal stance on phonological primitives and their organization within hierarchical phonological representations, assumed to be innately available as part of the human language faculty. 
In contrast to this, functionalist approaches have utilized flatter (non-hierarchical) representational models and rejected nativist claims about the origin of phonological constructs. Since the beginning of the 1990s, this divide has been blurred significantly, both through the elaboration of constraint-based frameworks that incorporate phonetic evidence, from both speech perception and production, as part of accounts of phonological patterning, and through the formulation of emergentist approaches to phonological representation. Within this context, while controversies remain concerning the nature of phonological representations, debates are fueled by new outlooks on factors that might affect their emergence, including the types of learning mechanisms involved, the nature of the evidence available to the learner (e.g., perceptual, articulatory, and distributional), as well as the extent to which the learner can abstract away from this evidence. In parallel, recent advances in computer-assisted research methods and data availability, especially within the context of the PhonBank project, offer researchers unprecedented support for large-scale investigations of child language corpora. This combination of theoretical and methodological advances provides new and fertile grounds for research on child phonology and related implications for phonological theory.

Article

Steven Moran

A phonological inventory is a repertoire of contrastive articulatory or manual gestures shared by a community of users. Whether spoken or signed, all human languages have a phonological inventory. In spoken languages, the phonological inventory comprises a set of segments (consonants and vowels) and suprasegmentals (stress and intonation) that are linguistically contrastive, either lexically or grammatically, in a particular language or one of its dialects. Sign languages also have phonological inventories, which include a set of linguistically contrastive signs made from movement, hand shape, and location. The study of phonological inventories is interesting because they tell us about the distribution, frequency, and diversity of gestures that individuals acquire and produce in the world’s 7,000 or so languages. Their study has also raised important empirical questions about the comparability of linguistic concepts across different languages and modalities, about the use of statistics and sampling in quantitative approaches to comparative linguistics, and about the study of language ontogeny and phylogeny over the course of language evolution. As such, some recent research highlights include the following: quantitative approaches suggest causal relationships between phonological inventory composition and gene-culture and language-environment coevolution; the study of de novo sign languages provides important insights into the emergence of phonology; and comparative animal communication studies suggest evolutionary speech precursors in phonological repertoires of nonhuman primates, and potentially in extinct hominids, including Neanderthals.

Article

Paul de Lacy

Phonology has both a taxonomic/descriptive and cognitive meaning. In the taxonomic/descriptive context, it refers to speech sound systems. As a cognitive term, it refers to a part of the brain’s ability to produce and perceive speech sounds. This article focuses on research in the cognitive domain. The brain does not simply record speech sounds and “play them back.” It abstracts over speech sounds, and transforms the abstractions in nontrivial ways. Phonological cognition is about what those abstractions are, and how they are transformed in perception and production. There are many theories about phonological cognition. Some theories see it as the result of domain-general mechanisms, such as analogy over a Lexicon. Other theories locate it in an encapsulated module that is genetically specified, and has innate propositional content. In production, this module takes as its input phonological material from a Lexicon, and refers to syntactic and morphological structure in producing an output, which involves nontrivial transformation. In some theories, the output is instructions for articulator movement, which result in speech sounds; in other theories, the output goes to the Phonetic module. In perception, a continuous acoustic signal is mapped onto a phonetic representation, which is then mapped onto underlying forms via the Phonological module, which are then matched to lexical entries. Exactly which empirical phenomena phonological cognition is responsible for depends on the theory. At one extreme, it accounts for all human speech sound patterns and realization. At the other extreme, it is little more than a way of abstracting over speech sounds. In the most popular Generative conception, it explains some sound patterns, with other modules (e.g., the Lexicon and Phonetic module) accounting for others. 
There are many types of patterns, with names such as “assimilation,” “deletion,” and “neutralization”—a great deal of phonological research focuses on determining which patterns there are, which aspects are universal and which are language-particular, and whether/how phonological cognition is responsible for them. Phonological computation connects with other cognitive structures. In the Generative T-model, the phonological module’s input includes morphs of Lexical items along with at least some morphological and syntactic structure; the output is sent to either a Phonetic module, or directly to the neuro-motor interface, resulting in articulator movement. However, other theories propose that these modules’ computation proceeds in parallel, and that there is bidirectional communication between them. The study of phonological cognition is a young science, so many fundamental questions remain to be answered. There are currently many different theories, and theoretical diversity over the past few decades has increased rather than consolidated. In addition, new research methods have been developed and older ones have been refined, providing novel sources of evidence. Consequently, phonological research is both lively and challenging, and is likely to remain that way for some time to come.

Article

William R. Leben

Autosegments were introduced by John Goldsmith in his 1976 M.I.T. dissertation to represent tone and other suprasegmental phenomena. Goldsmith’s intuition, embodied in the term he created, was that autosegments constituted an independent, conceptually equal tier of phonological representation, with both tiers realized simultaneously like the separate voices in a musical score. The analysis of suprasegmentals came late to generative phonology, even though it had been tackled in American structuralism with the long components of Harris’s 1944 article, “Simultaneous components in phonology,” and despite being a particular focus of Firthian prosodic analysis. The standard version of generative phonology of the era (Chomsky and Halle’s The Sound Pattern of English) made no special provision for phenomena that had been labeled suprasegmental or prosodic by earlier traditions. An early sign that tones required a separate tier of representation was the phenomenon of tonal stability. In many tone languages, when vowels are lost historically or synchronically, their tones remain. The behavior of contour tones in many languages also falls into place when the contours are broken down into sequences of level tones on an independent level of representation. The autosegmental framework captured this naturally, since a sequence of elements on one tier can be connected to a single element on another. But the single most compelling aspect of the early autosegmental model was a natural account of tone spreading, a very common process that was only awkwardly captured by rules of whatever sort. Goldsmith’s autosegmental solution was the Well-Formedness Condition, requiring, among other things, that every tone on the tonal tier be associated with some segment on the segmental tier, and vice versa. Tones thus spread more or less automatically to segments lacking them.
The Well-Formedness Condition, at the very core of the autosegmental framework, was a rare constraint, posited nearly two decades before Optimality Theory. One-to-many associations and spreading onto adjacent elements are characteristic of tone but not confined to it. Similar behaviors are widespread in long-distance phenomena, including intonation, vowel harmony, and nasal prosodies, as well as more locally with partial or full assimilation across adjacent segments. The early autosegmental notion of tiers of representation that were distinct but conceptually equal soon gave way to a model with one basic tier connected to tiers for particular kinds of articulation, including tone and intonation, nasality, vowel features, and others. This has led to hierarchical representations of phonological features in current models of feature geometry, replacing the unordered distinctive feature matrices of early generative phonology. Autosegmental representations and processes also provide a means of representing non-concatenative morphology, notably the complex interweaving of roots and patterns in Semitic languages. Later work modified many of the key properties of the autosegmental model. Optimality Theory has led to a radical rethinking of autosegmental mapping, delinking, and spreading as they were formulated under the earlier derivational paradigm.

Article

Marie K. Huffman

Articulatory phonetics is concerned with the physical mechanisms involved in producing spoken language. A fundamental goal of articulatory phonetics is to relate linguistic representations to articulator movements in real time and the consequent acoustic output that makes speech a medium for information transfer. Understanding the overall process requires an appreciation of the aerodynamic conditions necessary for sound production and the way that the various parts of the chest, neck, and head are used to produce speech. One descriptive goal of articulatory phonetics is the efficient and consistent description of the key articulatory properties that distinguish sounds used contrastively in language. There is fairly strong consensus in the field about the inventory of terms needed to achieve this goal. Despite this common, segmental, perspective, speech production is essentially dynamic in nature. Much remains to be learned about how the articulators are coordinated for production of individual sounds and how they are coordinated to produce sounds in sequence. Cutting across all of these issues is the broader question of which aspects of speech production are due to properties of the physical mechanism and which are the result of the nature of linguistic representations. A diversity of approaches is used to try to tease apart the physical and the linguistic contributions to the articulatory fabric of speech sounds in the world’s languages. A variety of instrumental techniques are currently available, and improvement in safe methods of tracking articulators in real time promises to soon bring major advances in our understanding of how speech is produced.

Article

The non–Pama-Nyungan, Tangkic languages were spoken until recently in the southern Gulf of Carpentaria, Australia. The most extensively documented are Lardil, Kayardild, and Yukulta. Their phonology is notable for its opaque, word-final deletion rules and extensive word-internal sandhi processes. The morphology contains complex relationships between sets of forms and sets of functions, due in part to major historical refunctionalizations, which have converted case markers into markers of tense and complementization and verbal suffixes into case markers. Syntactic constituency is often marked by inflectional concord, resulting frequently in affix stacking. Yukulta in particular possesses a rich set of inflection-marking possibilities for core arguments, including detransitivized configurations and an inverse system. These relate in interesting ways historically to argument marking in Lardil and Kayardild. Subordinate clauses are marked for tense across most constituents other than the subject, and such tense marking is also found in main clauses in Lardil and Kayardild, which have lost the agreement and tense-marking second-position clitic of Yukulta. Under specific conditions of co-reference between matrix and subordinate arguments, and under certain discourse conditions, clauses may be marked, on all or almost all words, by complementization markers, in addition to inflection for case and tense.

Article

Martin Maiden

Dalmatian is an extinct group of Romance varieties spoken on the eastern Adriatic seaboard, best known from its Vegliote variety, spoken on the island of Krk (also called Veglia). Vegliote is principally represented by the linguistic testimony of its last speaker, Tuone Udaina, who died at the end of the 19th century. By the time Udaina’s Vegliote could be explored by linguists (principally by Matteo Bartoli), it seems that he had not actively spoken the language for decades, and his linguistic testimony is imperfect, in that it is influenced, for example, by the Venetan dialect that he habitually spoke. Nonetheless, his Vegliote reveals various distinctive and recurrent linguistic traits, notably in the domain of phonology (for example, pervasive and complex patterns of vowel diphthongization) and morphology (notably a general collapse of the general Romance inflexional system of tense and mood morphology, but also an unusual type of synthetic future form).

Article

Geoffrey K. Pullum

English is both the most studied of the world’s languages and the most widely used. It comes closer than any other language to functioning as a world communication medium and is very widely used for governmental purposes. This situation is the result of a number of historical accidents of different magnitudes. The linguistic properties of the language itself would not have motivated its choice (contra the talk of prescriptive usage writers who stress the clarity and logic that they believe English to have). Divided into multiple dialects, English has a phonological system involving remarkably complex consonant clusters and a large inventory of distinct vowel nuclei; a bad, confusing, and hard-to-learn alphabetic orthography riddled with exceptions, ambiguities, and failures of the spelling to correspond to the pronunciation; a morphology that is rather more complex than is generally appreciated, with seven or eight paradigm patterns and a couple of hundred irregular verbs; a large multilayered lexicon containing roots of several quite distinct historical sources; and a syntax that despite its very widespread SVO (Subject-Verb-Object) basic order in the clause is replete with tricky details. For example, there are crucial restrictions on government of prepositions, many verb-preposition idioms, subtle constraints on the intransitive prepositions known as “particles,” an important distinction between two (or under a better analysis, three) classes of verb that actually have different syntax, and a host of restrictions on the use of its crucial “wh-words.” It is only geopolitical and historical accidents that have given English its enormous importance and prestige in the world, not its inherent suitability for its role.

Article

Alexis Michaud and Bonny Sands

Tonogenesis is the development of distinctive tone from earlier non-tonal contrasts. A well-understood case is Vietnamese (similar in its essentials to that of Chinese and many languages of the Tai-Kadai and Hmong-Mien language families), where the loss of final laryngeal consonants led to the creation of three tones, and the tones later multiplied as voicing oppositions on initial consonants waned. This is by no means the only attested diachronic scenario, however. Besides well-known cases of tonogenesis in East Asia, this survey includes discussions of less well-known cases of tonogenesis from language families including Athabaskan, Chadic, Khoe and Niger-Congo. There is tonogenetic potential in various series of phonemes: glottalized versus plain consonants, unvoiced versus voiced, aspirated versus unaspirated, geminates versus simple (and, more generally, tense versus lax), and even among vowels, whose intrinsic fundamental frequency can transphonologize to tone. We draw attention to tonogenetic triggers that are not so well-known, such as [+ATR] vowels, aspirates and morphotonological alternations. The ways in which these common phonetic precursors to tone play out in a given language depend on phonological factors, as well as on other dimensions of a language’s structure and on patterns of language contact, resulting in a great diversity of evolutionary paths in tone systems. In some language families (such as Niger-Congo and Khoe), recent tonal developments are increasingly well understood, but working out the origin of the earliest tonal contrasts (which are likely to date back thousands of years earlier than tonogenesis among Sino-Tibetan languages, for instance) remains a mid- to long-term research goal for comparative-historical research.

Article

Jacques Durand

Corpus Phonology is an approach to phonology that places corpora at the center of phonological research. Some practitioners of corpus phonology see corpora as the only object of investigation; others use corpora alongside other available techniques (for instance, intuitions, psycholinguistic and neurolinguistic experimentation, laboratory phonology, the study of the acquisition of phonology or of language pathology, etc.). Whatever version of corpus phonology one advocates, corpora have become part and parcel of the modern research environment, and their construction and exploitation have been modified by the multidisciplinary advances made within various fields. Indeed, for the study of spoken usage, the term ‘corpus’ should nowadays only be applied to bodies of data meeting certain technical requirements, even though corpora of spoken usage are by no means new, with a history that coincides with the birth of recording techniques. It is therefore essential to understand what criteria must be met by a modern corpus (quality of recordings, diversity of speech situations, ethical guidelines, time-alignment with transcriptions and annotations, etc.) and what tools are available to researchers. Once these requirements are met, the way is open to varying and possibly conflicting uses of spoken corpora by phonological practitioners. A traditional stance in theoretical phonology sees the data as a degenerate version of a more abstract underlying system, but more and more researchers within various frameworks (e.g., usage-based approaches, exemplar models, stochastic Optimality Theory, sociophonetics) are constructing models that tightly bind phonological competence to language use, rely heavily on quantitative information, and attempt to account for intra-speaker and inter-speaker variation. This renders corpora essential to phonological research and not a mere adjunct to the phonological description of the languages of the world.

Article

Marianne Pouplier

One of the most fundamental problems in research on spoken language is to understand how the categorical, systemic knowledge that speakers have in the form of a phonological grammar maps onto the continuous, high-dimensional physical speech act that transmits the linguistic message. The invariant units of phonological analysis have no invariant analogue in the signal—any given phoneme can manifest itself in many possible variants, depending on context, speech rate, utterance position and the like, and the acoustic cues for a given phoneme are spread out over time across multiple linguistic units. Speakers and listeners are highly knowledgeable about the lawfully structured variation in the signal and they skillfully exploit articulatory and acoustic trading relations when speaking and perceiving. For the scientific description of spoken language, understanding this association between abstract, discrete categories and continuous speech dynamics remains a formidable challenge. Articulatory Phonology and the associated Task Dynamic model present one particular proposal on how to step up to this challenge using the mathematics of dynamical systems, with the central insight being that spoken language is fundamentally based on the production and perception of linguistically defined patterns of motion. In Articulatory Phonology, primitive units of phonological representation are called gestures. Gestures are defined based on linear second-order differential equations, giving them inherent spatial and temporal specifications. Gestures control the vocal tract at a macroscopic level, harnessing the many degrees of freedom in the vocal tract into low-dimensional control units. Phonology, in this model, thus directly governs the spatial and temporal orchestration of vocal tract actions.
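The linear second-order dynamics mentioned above can be made concrete with a small numerical sketch: a critically damped mass-spring system driven toward a target value, which is the qualitative shape Task Dynamics assigns to a gesture. The tract variable, parameter values, and integration scheme below are illustrative assumptions, not fitted to any articulatory data.

```python
# Sketch of a gesture as a critically damped second-order system:
# m*x'' = -b*x' - k*(x - target), with m = 1 and b = 2*sqrt(k).
# Parameter values are illustrative only.
import math

def gesture_trajectory(x0, target, k=100.0, dt=0.001, steps=1000):
    """Integrate the gesture dynamics with simple Euler steps."""
    b = 2.0 * math.sqrt(k)   # critical damping: fast approach, no overshoot
    x, v = x0, 0.0
    traj = [x]
    for _ in range(steps):
        a = -b * v - k * (x - target)   # acceleration from the mass-spring law
        v += a * dt
        x += v * dt
        traj.append(x)
    return traj

# A hypothetical tract variable (say, lip aperture) moving from 10 to 0.
traj = gesture_trajectory(x0=10.0, target=0.0)
print(round(traj[-1], 3))   # essentially at the target, with no overshoot
```

Because the equation is linear, a gesture’s spatial target and its stiffness k (which fixes how quickly the target is approached) are specified independently, which is one way of cashing out the “inherent spatial and temporal specifications” mentioned above.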

Article

Ans van Kemenade

The status of English in the early 21st century makes it hard to imagine that the language started out as an assortment of North Sea Germanic dialects spoken in parts of England only by immigrants from the continent. Itself soon under threat, first from the language(s) spoken by Viking invaders, then from French as spoken by the Norman conquerors, English continued to thrive as an essentially West-Germanic language that did, however, undergo some profound changes resulting from contact with Scandinavian and French. A further decisive period of change is the late Middle Ages, which started a tremendous societal scale-up that triggered pervasive multilingualism. These repeated layers of contact between different populations, first locally, then nationally, followed by standardization and 18th-century codification, metamorphosed English into a language closely related to, yet quite distinct from, its closest relatives Dutch and German in nearly all language domains, not least in word order, grammar, and pronunciation.

Article

Chiyuki Ito and Michael J. Kenstowicz

Typologically, pitch-accent languages stand between stress languages like Spanish and tone languages like Shona, and share properties of both. In a stress language, typically just one syllable per word is accented and bears the major stress (cf. Spanish sábana ‘sheet,’ sabána ‘plain,’ panamá ‘Panama’). In a tone language, the number of distinctions grows geometrically with the size of the word. So in Shona, which contrasts high versus low tone, trisyllabic words have eight possible pitch patterns. In a canonical pitch-accent language such as Japanese, just one syllable (or mora) per word is singled out as distinctive, as in Spanish. Each syllable in the word is assigned a high or low tone (as in Shona); however, this assignment is predictable based on the location of the accented syllable. The Korean dialects spoken in the southeast Kyengsang and northeast Hamkyeng regions retain the pitch-accent distinctions that developed by the period of Middle Korean (15th–16th centuries). For example, in Hamkyeng a three-syllable word can have one of four possible pitch patterns, which are assigned by rules that refer to the accented syllable. The accented syllable has a high tone, and following syllables have low tones. Then the high tone of the accented syllable spreads up to the initial syllable, which is low. Thus, /MUcike/ ‘rainbow’ is realized as high-low-low, /aCImi/ ‘aunt’ is realized as low-high-low, and /menaRI/ ‘parsley’ is realized as low-high-high. An atonic word such as /cintallɛ/ ‘azalea’ has the same low-high-high pitch pattern as ‘parsley’ when realized alone. But the two types are distinguished when combined with a particle such as /MAN/ ‘only’ that bears an underlying accent: /menaRI+MAN/ ‘only parsley’ is realized as low-high-high-low while /cintallɛ+MAN/ ‘only azalea’ is realized as low-high-high-high. This difference can be explained by saying that the underlying accent on the particle is deleted if the stem bears an accent.
The result is that only one syllable per word may bear an accent (similar to Spanish). On the other hand, since the accent is realized with pitch distinctions, tonal assimilation rules are prevalent in pitch-accent languages. This article begins with a description of the Middle Korean pitch-accent system and its evolution into the modern dialects, with a focus on Kyengsang. Alternative synchronic analyses of the accentual alternations that arise when a stem is combined with inflectional particles are then considered. The discussion proceeds to the phonetic realization of the contrasting accents, their realizations in compounds and phrases, and the adaptation of loanwords. The final sections treat the lexical restructuring and variable distribution of the pitch accents and their emergence from predictable word-final accent in an earlier stage of Proto-Korean.
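The Hamkyeng tone-assignment rules summarized in the abstract can be sketched as a small function. This is an illustrative simplification, not an analysis from the article itself: the function name, the 1-based `accent` parameter, and the treatment of atonic words as defaulting to final accent (which reproduces the low-high-high pattern of unaccented /cintallɛ/ in isolation) are assumptions made for the sketch.

```python
def hamkyeng_tones(n_syllables, accent=None):
    """Assign surface tones under a simplified Hamkyeng-style rule system.

    accent: 1-based index of the accented syllable, or None for an
    atonic word (modeled here as defaulting to final accent).
    """
    acc = accent if accent is not None else n_syllables  # atonic default
    tones = ['L'] * n_syllables       # syllables after the accent stay low
    tones[acc - 1] = 'H'              # the accented syllable is high
    # The high tone spreads leftward toward the start of the word,
    # but the initial syllable remains low.
    for i in range(1, acc - 1):
        tones[i] = 'H'
    return '-'.join(tones)

print(hamkyeng_tones(3, 1))  # /MUcike/ 'rainbow' → H-L-L
print(hamkyeng_tones(3, 2))  # /aCImi/ 'aunt' → L-H-L
print(hamkyeng_tones(3, 3))  # /menaRI/ 'parsley' → L-H-H
print(hamkyeng_tones(4, 3))  # /menaRI+MAN/: stem accent wins → L-H-H-L
print(hamkyeng_tones(4, 4))  # /cintallɛ+MAN/: particle accent surfaces → L-H-H-H
```

Note that the accent-deletion fact (a particle's accent is deleted when the stem is accented) is modeled only indirectly here, by passing the stem's accent position for /menaRI+MAN/ and the particle's for the atonic stem.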

Article

James Myers

Acceptability judgments are reports of a speaker’s or signer’s subjective sense of the well-formedness, nativeness, or naturalness of (novel) linguistic forms. Their value lies in providing data about the nature of the human capacity to generalize beyond linguistic forms previously encountered in language comprehension. For this reason, acceptability judgments are often also called grammaticality judgments (particularly in syntax), although unlike the theory-dependent notion of grammaticality, acceptability is accessible to consciousness. While acceptability judgments have been used to test grammatical claims since ancient times, they became particularly prominent with the birth of generative syntax. Today they are also widely used in other linguistic schools (e.g., cognitive linguistics) and other linguistic domains (pragmatics, semantics, morphology, and phonology), and have been applied in a typologically diverse range of languages. As psychological responses to linguistic stimuli, acceptability judgments are experimental data. Their value thus depends on the validity of the experimental procedures, which, in their traditional version (where theoreticians elicit judgments from themselves or a few colleagues), have been criticized as overly informal and biased. Traditional responses to such criticisms have been supplemented in recent years by laboratory experiments that use formal psycholinguistic methods to collect and quantify judgments from nonlinguists under controlled conditions. Such formal experiments have played an increasingly influential role in theoretical linguistics, being used to justify subtle judgment claims or new grammatical models that incorporate gradience or lexical influences. They have also been used to probe the cognitive processes giving rise to the sense of acceptability itself, the central finding being that acceptability reflects processing ease.
Exploring what this finding means will require not only further empirical work on the acceptability judgment process, but also theoretical work on the nature of grammar.

Article

Maria Gouskova

Phonotactics is the study of restrictions on possible sound sequences in a language. In any language, some phonotactic constraints can be stated without reference to morphology, but many of the more nuanced phonotactic generalizations do make use of morphosyntactic and lexical information. At the most basic level, many languages mark edges of words in some phonological way. Different phonotactic constraints hold of sounds that belong to the same morpheme as opposed to sounds that are separated by a morpheme boundary. Different phonotactic constraints may apply to morphemes of different types (such as roots versus affixes). There are also correlations between phonotactic shape and the application of certain morphosyntactic and phonological rules; these correlations may track syntactic category, declension class, or etymological origin. Approaches to the interaction between phonotactics and morphology address two questions: (1) how to account for rules that are sensitive to morpheme boundaries and structure, and (2) how to determine the status of phonotactic constraints associated with only some morphemes. Theories differ as to how much morphological information phonology is allowed to access. In some theories of phonology, any reference to the specific identities or subclasses of morphemes would exclude a rule from the domain of phonology proper. These rules are either part of the morphology or are not given the status of a rule at all. Other theories allow the phonological grammar to refer to detailed morphological and lexical information. Depending on the theory, phonotactic differences between morphemes may receive direct explanations or be seen as the residue of historical change and not something that constitutes grammatical knowledge in the speaker’s mind.