Article
Accent in Japanese Phonology
Haruo Kubozono
The word accent system of Tokyo Japanese might look quite complex, with a number of accent patterns and rules. However, recent research has shown that it is not as complex as has been assumed if one incorporates the notion of markedness into the analysis: nouns have only two productive accent patterns, the antepenultimate pattern and the unaccented pattern, and the different accent rules can be generalized if one focuses on these two productive patterns.
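A rough illustration of the antepenultimate generalization for accented nouns (a minimal sketch only: the actual rule places the accent on the syllable containing the antepenultimate mora and says nothing here about which nouns surface as unaccented; the function name and mora lists are hypothetical):

```python
def default_accent_index(moras):
    """Hypothetical sketch: return the index of the default accent locus
    for an accented noun, counting moras from the left.
    Assumes the antepenultimate generalization: the accent falls three moras
    from the end; shorter nouns are accented on the initial mora.
    Tokyo Japanese actually accents the *syllable containing* the
    antepenultimate mora, a refinement this toy version ignores.
    """
    if len(moras) < 3:
        return 0                 # initial accent for one- and two-mora nouns
    return len(moras) - 3        # the antepenultimate mora

# e.g., the three-mora loanword banana, accented on its first mora
print(default_accent_index(["ba", "na", "na"]))  # -> 0
```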
The word accent system raises some interesting new issues. One of them concerns the fact that a majority of nouns are ‘unaccented,’ that is, they are pronounced with a rather flat pitch pattern, apparently violating the principle of obligatoriness. A careful analysis of noun accentuation reveals that this strange accent pattern occurs in some linguistically predictable structures. In morphologically simplex nouns, it typically emerges in four-mora nouns ending in a sequence of light syllables. In compound nouns, on the other hand, it emerges due to multiple factors, such as compound-final deaccenting morphemes, deaccenting pseudo-morphemes, and some types of prosodic configurations.
Japanese pitch accent also interacts in interesting ways with other phonological and linguistic structures. For example, the accent of compound nouns is closely related to rendaku, or sequential voicing: the choice between the accented and unaccented patterns in certain types of compound nouns correlates with the presence or absence of sequential voicing. Moreover, whether the compound accent rule applies to a given compound depends on its internal morphosyntactic configuration as well as its meaning; put differently, the compound accent rule is blocked in certain types of morphosyntactic and semantic structures.
Finally, careful analysis of word accent sheds new light on the syllable structure of the language, notably on two interrelated questions concerning diphthong-hood and super-heavy syllables. It provides crucial insight into ‘diphthongs,’ that is, into the question of which vowel sequences constitute diphthongs as opposed to vowel sequences spanning a syllable boundary. It also presents new evidence against trimoraic syllables in the language.
Article
Acceptability Judgments
James Myers
Acceptability judgments are reports of a speaker’s or signer’s subjective sense of the well-formedness, nativeness, or naturalness of (novel) linguistic forms. Their value comes in providing data about the nature of the human capacity to generalize beyond linguistic forms previously encountered in language comprehension. For this reason, acceptability judgments are often also called grammaticality judgments (particularly in syntax), although unlike the theory-dependent notion of grammaticality, acceptability is accessible to consciousness. While acceptability judgments have been used to test grammatical claims since ancient times, they became particularly prominent with the birth of generative syntax. Today they are also widely used in other linguistic schools (e.g., cognitive linguistics) and other linguistic domains (pragmatics, semantics, morphology, and phonology), and have been applied in a typologically diverse range of languages. As psychological responses to linguistic stimuli, acceptability judgments are experimental data. Their value thus depends on the validity of the experimental procedures, which, in their traditional version (where theoreticians elicit judgments from themselves or a few colleagues), have been criticized as overly informal and biased. Traditional responses to such criticisms have been supplemented in recent years by laboratory experiments that use formal psycholinguistic methods to collect and quantify judgments from nonlinguists under controlled conditions. Such formal experiments have played an increasingly influential role in theoretical linguistics, being used to justify subtle judgment claims or new grammatical models that incorporate gradience or lexical influences. They have also been used to probe the cognitive processes giving rise to the sense of acceptability itself, the central finding being that acceptability reflects processing ease. Exploring what this finding means will require not only further empirical work on the acceptability judgment process, but also theoretical work on the nature of grammar.
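One common way such formal experiments quantify judgments, sketched here under assumed details (a seven-point rating scale and an invented data layout; the function name is hypothetical), is to convert each participant's raw ratings to z-scores so that individual differences in scale use do not masquerade as differences in acceptability:

```python
from statistics import mean, stdev

def zscore_by_participant(ratings):
    """Normalize each participant's raw ratings to z-scores.
    `ratings` maps a participant ID to a list of (item, raw_rating) pairs.
    Hypothetical data layout, for illustration only.
    """
    normalized = {}
    for participant, responses in ratings.items():
        raw = [r for _, r in responses]
        mu, sd = mean(raw), stdev(raw)
        normalized[participant] = [(item, (r - mu) / sd) for item, r in responses]
    return normalized

# Toy example: two participants rating the same three sentences on a 1-7 scale
data = {
    "P1": [("S1", 7), ("S2", 6), ("S3", 2)],
    "P2": [("S1", 5), ("S2", 4), ("S3", 1)],
}
print(zscore_by_participant(data))
```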
Article
Acoustic Theories of Speech Perception
Melissa Redford and Melissa Baese-Berk
Acoustic theories assume that speech perception begins with an acoustic signal transformed by auditory processing. In classical acoustic theory, this assumption entails perceptual primitives that are akin to those identified in the spectral analyses of speech. The research objective is to link these primitives with phonological units of traditional descriptive linguistics via sound categories and then to understand how these units/categories are bound together in time to recognize words. Achieving this objective is challenging because the signal is replete with variation, making the mapping of signal to sound category nontrivial. Research that grapples with the mapping problem has led to many basic findings about speech perception, including the importance of cue redundancy to category identification and of differential cue weighting to category formation. Research that grapples with the related problem of binding categories into words for speech processing motivates current neuropsychological work on speech perception. The central focus on the mapping problem in classical theory has also led to an alternative type of acoustic theory, namely, exemplar-based theory. According to this type of acoustic theory, variability is critical for processing talker-specific information during speech processing. The problems associated with mapping acoustic cues to sound categories are not addressed because exemplar-based theories assume that perceptual traces of whole words are perceptual primitives. Smaller units of speech sound representation, as well as the phonology as a whole, are emergent from the word-based representations. Yet, like classical acoustic theories, exemplar-based theories assume that production is mediated by a phonology that has no inherent motor information. The presumed disconnect between acoustic and motor information during perceptual processing distinguishes acoustic theories as a class from other theories of speech perception.
Article
The Acquisition of Clitics in the Romance Languages
Anna Gavarró
The Romance languages are characterized by the existence of pronominal clitics. Third person pronominal clitics are often, but not always, homophonous with the definite determiner series in the same language. Both pronominal and determiner clitics emerge early in child acquisition, but their path of development varies depending on clitic type and language. While determiner clitic acquisition is quite homogeneous across Romance, there is wide cross-linguistic variation for pronominal clitics (accusative vs. partitive vs. dative, first/second person vs. third person); the observed differences in acquisition correlate with syntactic differences between the pronouns. Acquisition of pronominal clitics is also affected if a language has both null objects and object clitics, as in European Portuguese. The interpretation of Romance pronominal clitics is generally target-like in child grammar, with absence of Pronoun Interpretation problems like those found in languages with strong pronouns. Studies on developmental language impairment show that, as in typical development, clitic production is subject to cross-linguistic variation. The divergent performance between determiners and pronominals in this population points to the syntactic (as opposed to phonological) nature of the deficit.
Article
Acquisition of L1 Phonology in the Romance Languages
Yvan Rose, Laetitia Almeida, and Maria João Freitas
The field of study on the acquisition of phonological productive abilities by first-language learners in the Romance languages has been largely focused on three main languages: French, Portuguese, and Spanish, including various dialects of these languages spoken in Europe as well as in the Americas. In this article, we provide a comparative survey of this literature, with an emphasis on representational phonology. We also include in our discussion observations from the development of Catalan and Italian, and mention areas where these languages, as well as Romanian, another major Romance language, would provide welcome additions to our cross-linguistic comparisons. Together, the various studies we summarize reveal intricate patterns of development, in particular concerning the acquisition of consonants across different positions within the syllable and the word and in relation to stress, as documented for both monolingual and bilingual first-language learners. The patterns observed across the different languages and dialects can generally be traced to formal properties of phone distributions, as entailed by mainstream theories of phonological representation, with variations also predicted by more functional aspects of speech, including phonetic factors and usage frequency. These results call for further empirical studies of phonological development, in particular concerning Romanian, in addition to Catalan and Italian, whose phonological and phonetic properties offer compelling grounds for the formulation and testing of models of phonology and phonological development.
Article
Arthur Abramson
Philip Rubin
Arthur Seymour Abramson (1925–2017) was an American linguist who was prominent in the international experimental phonetics research community. He was best known for his pioneering work, with Leigh Lisker, on voice onset time (VOT), and for his many years spent studying tone and voice quality in languages such as Thai. Born and raised in Jersey City, New Jersey, Abramson served several years in the Army during World War II. Upon his return to civilian life, he attended Columbia University (BA, 1950; PhD, 1960). There he met Franklin Cooper, an adjunct who taught acoustic phonetics while also working for Haskins Laboratories. Abramson started working on a part-time basis at Haskins and remained affiliated with the institution until his death. For his doctoral dissertation (1962), he studied the vowels and tones of the Thai language, which would sit at the heart of his research and travels for the rest of his life. He would expand his investigations to include various languages and dialects, such as Pattani Malay and the Kuai dialect of Suai, a Mon-Khmer language. Abramson began his collaboration with University of Pennsylvania linguist Leigh Lisker at Haskins Laboratories in the 1960s. Using their VOT technique, a sensitive measure of the articulatory timing between an occlusion in the vocal tract and the beginning of phonation (characterized by the onset of vibration of the vocal folds), they studied the voicing distinctions of various languages. Their long-standing collaboration continued until Lisker’s death in 2006. Abramson and colleagues often made innovative use of state-of-the-art tools and technologies in their work, including transillumination of the larynx in running speech, X-ray movies of speakers of several languages and dialects, electroglottography, and articulatory speech synthesis.
Abramson’s career was also notable for the academic and scientific service roles that he assumed, including membership on the council of the International Phonetic Association (IPA), and as a coordinator of the effort to revise the International Phonetic Alphabet at the IPA’s 1989 Kiel Convention. He was also editor of the journal Language and Speech, and took on leadership roles at the Linguistic Society of America and the Acoustical Society of America. He was the founding Chair of the Linguistics Department at the University of Connecticut, which became a hotbed for research in experimental phonetics in the 1970s and 1980s because of its many affiliations with Haskins Laboratories. He also served for many years as a board member at Haskins, and Secretary of both the Board and the Haskins Corporation, where he was a friend and mentor to many.
Article
Articulatory Phonetics
Marie K. Huffman
Articulatory phonetics is concerned with the physical mechanisms involved in producing spoken language. A fundamental goal of articulatory phonetics is to relate linguistic representations to articulator movements in real time and the consequent acoustic output that makes speech a medium for information transfer. Understanding the overall process requires an appreciation of the aerodynamic conditions necessary for sound production and the way that the various parts of the chest, neck, and head are used to produce speech. One descriptive goal of articulatory phonetics is the efficient and consistent description of the key articulatory properties that distinguish sounds used contrastively in language. There is fairly strong consensus in the field about the inventory of terms needed to achieve this goal. Despite this common, segmental, perspective, speech production is essentially dynamic in nature. Much remains to be learned about how the articulators are coordinated for production of individual sounds and how they are coordinated to produce sounds in sequence. Cutting across all of these issues is the broader question of which aspects of speech production are due to properties of the physical mechanism and which are the result of the nature of linguistic representations. A diversity of approaches is used to try to tease apart the physical and the linguistic contributions to the articulatory fabric of speech sounds in the world’s languages. A variety of instrumental techniques are currently available, and improvement in safe methods of tracking articulators in real time promises to soon bring major advances in our understanding of how speech is produced.
Article
Articulatory Phonology
Marianne Pouplier
One of the most fundamental problems in research on spoken language is to understand how the categorical, systemic knowledge that speakers have in the form of a phonological grammar maps onto the continuous, high-dimensional physical speech act that transmits the linguistic message. The invariant units of phonological analysis have no invariant analogue in the signal—any given phoneme can manifest itself in many possible variants, depending on context, speech rate, utterance position, and the like, and the acoustic cues for a given phoneme are spread out over time across multiple linguistic units. Speakers and listeners are highly knowledgeable about the lawfully structured variation in the signal, and they skillfully exploit articulatory and acoustic trading relations when speaking and perceiving. For the scientific description of spoken language, understanding this association between abstract, discrete categories and continuous speech dynamics remains a formidable challenge. Articulatory Phonology and the associated Task Dynamic model present one particular proposal on how to step up to this challenge using the mathematics of dynamical systems, with the central insight being that spoken language is fundamentally based on the production and perception of linguistically defined patterns of motion. In Articulatory Phonology, the primitive units of phonological representation are called gestures. Gestures are defined based on linear second-order differential equations, giving them inherent spatial and temporal specifications. Gestures control the vocal tract at a macroscopic level, harnessing the many degrees of freedom in the vocal tract into low-dimensional control units. Phonology, in this model, thus directly governs the spatial and temporal orchestration of vocal tract actions.
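As a minimal numerical sketch of what "defined based on linear second-order differential equations" can mean (a toy, not the Task Dynamic model itself; the parameter values, step sizes, and variable names are arbitrary assumptions), a single gesture can be simulated as a critically damped point attractor that drives a tract variable toward its target:

```python
def gesture_trajectory(start, target, k=100.0, dt=0.001, steps=800):
    """Toy simulation of one gesture as a critically damped second-order
    system: x'' = -k (x - target) - b x', with b = 2 * sqrt(k).
    Units are arbitrary; the point is only that a point-attractor equation
    yields a smooth, goal-directed movement toward the target value.
    """
    b = 2 * (k ** 0.5)                   # critical damping
    x, v = start, 0.0
    trajectory = []
    for _ in range(steps):
        a = -k * (x - target) - b * v    # acceleration from the spring equation
        v += a * dt                      # simple Euler integration
        x += v * dt
        trajectory.append(x)
    return trajectory

# e.g., a lip-aperture-like variable closing from 10 toward 0
samples = gesture_trajectory(start=10.0, target=0.0)
print(round(samples[-1], 3))  # ends near the 0.0 target after 0.8 s of simulated movement
```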
Article
Autosegmental Phonology
William R. Leben
Autosegments were introduced by John Goldsmith in his 1976 M.I.T. dissertation to represent tone and other suprasegmental phenomena. Goldsmith’s intuition, embodied in the term he created, was that autosegments constituted an independent, conceptually equal tier of phonological representation, with both tiers realized simultaneously like the separate voices in a musical score.
The analysis of suprasegmentals came late to generative phonology, even though it had been tackled in American structuralism with the long components of Harris’s 1944 article, “Simultaneous components in phonology” and despite being a particular focus of Firthian prosodic analysis. The standard version of generative phonology of the era (Chomsky and Halle’s The Sound Pattern of English) made no special provision for phenomena that had been labeled suprasegmental or prosodic by earlier traditions.
An early sign that tones required a separate tier of representation was the phenomenon of tonal stability. In many tone languages, when vowels are lost historically or synchronically, their tones remain. The behavior of contour tones in many languages also falls into place when the contours are broken down into sequences of level tones on an independent level of representation. The autosegmental framework captured this naturally, since a sequence of elements on one tier can be connected to a single element on another. But the single most compelling aspect of the early autosegmental model was its natural account of tone spreading, a very common process that was only awkwardly captured by rules of any sort. Goldsmith’s autosegmental solution was the Well-Formedness Condition, requiring, among other things, that every tone on the tonal tier be associated with some segment on the segmental tier, and vice versa. Tones thus spread more or less automatically to segments lacking them. The Well-Formedness Condition, at the very core of the autosegmental framework, was a rare early example of a constraint, posited nearly two decades before Optimality Theory.
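A hedged sketch of the mapping idea (following the common textbook left-to-right association convention rather than Goldsmith's original formulation; the function and data are illustrative only): tones and tone-bearing units are paired one to one from left to right, leftover units receive a spread of the last tone, and leftover tones dock onto the final unit as a contour.

```python
def associate(tones, tbus):
    """Toy left-to-right association of tones to tone-bearing units (TBUs).
    Returns a list of (tbu, [tones]) pairs. A simplification of the
    Well-Formedness Condition, not Goldsmith's original statement of it.
    """
    links = [(tbu, []) for tbu in tbus]
    # 1. one-to-one association, left to right
    for i in range(min(len(tones), len(tbus))):
        links[i][1].append(tones[i])
    # 2. spread the last tone onto leftover TBUs
    for i in range(len(tones), len(tbus)):
        links[i][1].append(tones[-1])
    # 3. dock leftover tones onto the last TBU, yielding a contour
    for j in range(len(tbus), len(tones)):
        links[-1][1].append(tones[j])
    return links

print(associate(["H", "L"], ["ma", "ra", "ka"]))  # L spreads to the final syllable
print(associate(["H", "L"], ["ma"]))              # HL contour on a single syllable
```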
One-to-many associations and spreading onto adjacent elements are characteristic of tone but not confined to it. Similar behaviors are widespread in long-distance phenomena, including intonation, vowel harmony, and nasal prosodies, as well as more locally with partial or full assimilation across adjacent segments.
The early autosegmental notion of tiers of representation that were distinct but conceptually equal soon gave way to a model with one basic tier connected to tiers for particular kinds of articulation, including tone and intonation, nasality, vowel features, and others. This has led to hierarchical representations of phonological features in current models of feature geometry, replacing the unordered distinctive feature matrices of early generative phonology. Autosegmental representations and processes also provide a means of representing non-concatenative morphology, notably the complex interweaving of roots and patterns in Semitic languages.
Later work modified many of the key properties of the autosegmental model. Optimality Theory has led to a radical rethinking of autosegmental mapping, delinking, and spreading as they were formulated under the earlier derivational paradigm.
Article
Balkan-Romance
Adina Dragomirescu
Balkan-Romance is represented by Romanian and its historical dialects: Daco-Romanian (broadly known as Romanian), Aromanian, Megleno-Romanian, and Istro-Romanian (see article “Morphological and Syntactic Variation and Change in Romanian” in this encyclopedia). The external history of these varieties is often unclear, given the historical events that took place in the Lower Danubian region: the conquest of this territory by the Roman Empire for a short period and the successive Slavic invasions. Moreover, the earliest preserved writing in Romanian only dates from the 16th century. Between the Roman presence in the Balkans and the first attested text, there is a gap of more than 1,000 years, a period in which Romanian emerged, the dialectal separation took place, and the Slavic influence had effects especially on the lexis of Romanian.
In the 16th century, in the earliest old Romanian texts, the language already displayed the main features of modern Romanian: the vowels /ə/ and /ɨ/; the nominative-accusative versus genitive-dative case distinction; analytical case markers, such as the genitive marker al; the functional prepositions a and la; the proclitic genitive-dative marker lui; the suffixal definite article; polydefinite structures; possessive affixes; rich verbal inflection, with both analytic and synthetic forms and with three auxiliaries (‘have’, ‘be’, and ‘want’); the supine, not completely verbalized at the time; two types of infinitives, with the ‘short’ one on a path toward becoming verbal and the ‘long’ one specializing as a noun; null subjects; nonfinite verb forms with lexical subjects; the mechanism for differential object marking and clitic doubling with slightly more vacillating rules than in the present-day language; two types of passives; strict negative concord; the SVO and VSO word orders; adjectives placed mainly in the postnominal position; a rich system of pronominal clitics; prepositions requiring the accusative and the genitive; and a large inventory of subordinating conjunctions introducing complement clauses.
Most of these features are also attested in the trans-Danubian varieties (Aromanian, Megleno-Romanian, and Istro-Romanian), which were also strongly influenced by the various languages with which they have been in direct contact: Greek, Albanian, Macedonian, Croatian, and so forth. These source languages have had a major influence on the vocabulary of the trans-Danubian varieties and certain consequences for the shape of their grammatical system. The differences between Daco-Romanian and the trans-Danubian varieties have also resulted from the preservation of archaic features in the latter or from innovations that took place only there.
Article
Blending in Morphology
Natalia Beliaeva
Blending is a type of word formation in which two or more words are merged into one so that the blended constituents are either clipped or partially overlap. An example of a typical blend is brunch, in which the beginning of the word breakfast is joined with the ending of the word lunch. In many cases, such as motel (motor + hotel) or blizzaster (blizzard + disaster), the constituents of a blend overlap at segments that are phonologically or graphically identical. In some blends, both constituents retain their form as a result of overlap, for example, stoption (stop + option). These examples illustrate only a handful of the variety of forms blends may take; more exotic examples include formations like Thankshallowistmas (Thanksgiving + Halloween + Christmas). The visual and auditory amalgamation in blends is reflected on the semantic level. It is common to form blends meaning a combination or a product of two objects or phenomena, such as an animal breed (e.g., zorse, a breed of zebra and horse), an interlanguage variety (e.g., franglais, which is a French blend of français and anglais meaning a mixture of French and English languages), or another type of mix (e.g., a shress is an item of clothing with features of both a shirt and a dress).
Blending as a word formation process can be regarded as a subtype of compounding because, like compounds, blends are formed of two (or sometimes more) content words and semantically either are hyponyms of one of their constituents or exhibit some kind of paradigmatic relationship between the constituents. In contrast to compounds, however, the formation of blends is restricted by a number of phonological constraints, given that the resulting formation is a single word. In particular, blends tend to be of the same length as the longest of their constituent words and to preserve the main stress of one of their constituents. Certain regularities are also observed in the ordering of the words in a blend (e.g., shorter first, more frequent first) and in the position of the switch point, that is, the point at which one blended word is cut off and switched to another (typically at a syllable boundary or at the onset/rime boundary). The regularities of blend formation can be related to the recognizability of the blended words.
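A deliberately naive sketch of the mechanics (orthographic rather than phonological, with switch points supplied by hand as they would be in an annotated blend corpus; predicting those switch points from syllable and onset/rime boundaries is the empirical question discussed above):

```python
def blend(first, second, cut1, cut2):
    """Form a blend from the first `cut1` letters of `first` and the letters
    of `second` from index `cut2` onward. Switch points are given by hand;
    a real model would locate them at syllable or onset/rime boundaries and
    would work over phonological rather than orthographic material.
    """
    return first[:cut1] + second[cut2:]

print(blend("breakfast", "lunch", 2, 1))    # -> 'brunch'
print(blend("motor", "hotel", 3, 3))        # -> 'motel' (overlap at 'ot')
print(blend("blizzard", "disaster", 5, 3))  # -> 'blizzaster'
```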
Article
Bracketing Paradoxes in Morphology
Heather Newell
Bracketing paradoxes—constructions whose morphosyntactic and morpho-phonological structures appear to be irreconcilably at odds (e.g., unhappier)—are unanimously taken to point to truths about the derivational system that we have not yet grasped. Consider that the prefix un- must be structurally separate in some way from happier both for its own reasons (its [n] surprisingly does not assimilate in Place to a following consonant (e.g., u[n]popular)) and for reasons external to the prefix (the suffix -er must be insensitive to the presence of un-, as the comparative cannot attach to bases of three syllables or longer (e.g., *intelligenter)). But un- must simultaneously be present in the derivation before -er is merged, so that unhappier can have the proper semantic reading (‘more unhappy’, and not ‘not happier’). Bracketing paradoxes emerged as a problem for generative accounts of both morphosyntax and morphophonology only in the 1970s. With the rise of restrictions on the behavior of affixes and of the formal machinery used to describe and represent it (e.g., the Affix-Ordering Generalization, Lexical Phonology and Morphology, the Prosodic Hierarchy), morphosyntacticians and phonologists were confronted with this type of inconsistent derivation in many unrelated languages.
Article
Catalan
Francisco Ordóñez
Catalan is a “medium-sized” Romance language spoken by over 10 million speakers, spread over four nation states: Northeastern Spain, Andorra, Southern France, and the city of L’Alguer (Alghero) in Sardinia, Italy. Catalan is divided into two primary dialectal divisions, each with further subvarieties: Western Catalan (Western Catalonia, Eastern Aragon, and Valencian Community) and Eastern Catalan (center and east of Catalonia, Balearic Islands, Rosselló, and l’Alguer).
Catalan descends from Vulgar Latin. Catalan expanded during medieval times as one of the primary vernacular languages of the Kingdom of Aragon. It largely retained its role in government and society until the War of the Spanish Succession in 1714, and it has been minoritized since then. Catalan was finally standardized at the beginning of the 20th century, although later, during the Franco dictatorship, it was banned in public spaces. The situation changed with the new Spanish Constitution promulgated in 1978, when Catalan was declared co-official with Spanish in Catalonia, the Valencian Community, and the Balearic Islands.
The Latin vowel system evolved in Catalan into a system of seven stressed vowels. As in most other Iberian Romance languages, there is a general process of spirantization or lenition of voiced stops. Catalan has a two-gender grammatical system and, as in other Western Romance languages, plurals end in -s; Catalan has a personal article, and Balearic Catalan has a two-determiner system for common nouns. Finally, past perfective actions are indicated by a compound tense consisting of the auxiliary verb anar ‘to go’ in the present tense plus the infinitive (e.g., va cantar ‘sang’).
Catalan is a minoritized language everywhere it is spoken, except in the microstate of Andorra, and it is endangered in France and l’Alguer. The revival of Catalan in the post-dictatorship era is connected with a movement called linguistic normalization. The idea of normalization refers to the aim of restoring Catalan to “normal” use at both the official and the everyday level, like any other official language.
Article
Central Italo-Romance (Including Standard Italian)
Elisa De Roberto
Central Italo-Romance includes Standard Italian and the Tuscan dialects, the dialects of the mediana and perimediana areas, as well as Corsican. This macro-area reaches as far north as the Carrara–Senigallia line and as far south as the line running from Circeo in Lazio to the mouth of the Aso river in Le Marche, cutting through Ceprano, Sora, Avezzano, L’Aquila and Accumoli. It is made up of two main subareas: the perimediana dialect area, covering Perugia, Ancona, northeastern Umbria, and Lazio north of Rome, where varieties show greater structural proximity to Tuscan, and the mediana area (central Le Marche, Umbria, central-eastern Lazio varieties, the Sabine or Aquilano-Cicolano-Reatino dialect group). Our description focuses on the shared and diverging features of these groups, with particular reference to phonology, morphology, and syntax.
Article
Child Phonology
Yvan Rose
Child phonology refers to virtually every phonetic and phonological phenomenon observable in the speech productions of children, including babbles. This includes qualitative and quantitative aspects of babbled utterances as well as all behaviors such as the deletion or modification of the sounds and syllables contained in the adult (target) forms that the child is trying to reproduce in his or her spoken utterances. This research is also increasingly concerned with issues in speech perception, a field of investigation that has traditionally followed its own course; it is only recently that the two fields have started to converge. The recent history of research on child phonology, the theoretical approaches and debates surrounding it, as well as the research methods and resources that have been employed to address these issues empirically, parallel the evolution of phonology, phonetics, and psycholinguistics as general fields of investigation. Child phonology contributes important observations, often organized in terms of developmental time periods, which can extend from the child’s earliest babbles to the stage when he or she masters the sounds, sound combinations, and suprasegmental properties of the ambient (target) language. Central debates within the field of child phonology concern the nature and origins of phonological representations as well as the ways in which they are acquired by children. Since the mid-1900s, the most central approaches to these questions have tended to fall on each side of the general divide between generative vs. functionalist (usage-based) approaches to phonology. Traditionally, generative approaches have embraced a universal stance on phonological primitives and their organization within hierarchical phonological representations, assumed to be innately available as part of the human language faculty. In contrast to this, functionalist approaches have utilized flatter (non-hierarchical) representational models and rejected nativist claims about the origin of phonological constructs. Since the beginning of the 1990s, this divide has been blurred significantly, both through the elaboration of constraint-based frameworks that incorporate phonetic evidence, from both speech perception and production, as part of accounts of phonological patterning, and through the formulation of emergentist approaches to phonological representation. Within this context, while controversies remain concerning the nature of phonological representations, debates are fueled by new outlooks on factors that might affect their emergence, including the types of learning mechanisms involved, the nature of the evidence available to the learner (e.g., perceptual, articulatory, and distributional), as well as the extent to which the learner can abstract away from this evidence. In parallel, recent advances in computer-assisted research methods and data availability, especially within the context of the PhonBank project, offer researchers unprecedented support for large-scale investigations of child language corpora. This combination of theoretical and methodological advances provides new and fertile grounds for research on child phonology and related implications for phonological theory.
Article
Chinese Syllable Structure
Jisheng Zhang
Chinese is generally considered a monosyllabic language in that one Chinese character corresponds to one syllable and vice versa, and most characters can be used as free morphemes, although there is a tendency for words to be disyllabic. On the one hand, the syllable structure of Chinese is simple, as far as permissible sequences of segments are concerned. On the other hand, complexities arise when the status of the prenuclear glide is concerned and with respect to the phonotactic constraints between the segments. The syllabic affiliation of the prenuclear glide in the maximal CGVX Chinese syllable structure has long been a controversial issue.
Traditional Chinese phonology divides the syllable into shengmu (C) and yunmu, the latter consisting of medial (G), nucleus (V), and coda (X), which is either a high vowel (i/u) or a nasal (n/ŋ). This is known as the sheng-yun model, which translates into English as initial-final (IF for short). The traditional Chinese IF syllable model differs from the onset-rhyme (OR) syllable structure model in several respects. In the former, the initial consists of only one consonant, excluding the glide, and the final (that is, everything after the initial consonant) is not the poetic rhyming unit, since the rhyming unit excludes the prenuclear glide; in the latter, the onset includes the glide, and the rhyme (that is, everything after the onset) is the poetic rhyming unit.
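As a toy illustration of the two parses (working over romanized segment lists, ignoring tone and real phonotactics; the segment inventory and function names are assumptions made for the example), the same CGVX string can be bracketed the IF way or the OR way:

```python
def parse_if(segments):
    """Parse a (C)(G)V(X) segment list into the traditional initial-final (IF)
    structure: initial = the single consonant, final = glide + nucleus + coda.
    Toy sketch; assumes the segments are already identified,
    e.g., ['t', 'j', 'a', 'n'] for a 'tian'-like syllable.
    """
    if segments[0] not in "jwaeiou":
        return {"initial": segments[0], "final": segments[1:]}
    return {"initial": None, "final": segments}

def parse_or(segments):
    """Parse the same segment list into an onset-rhyme (OR) structure:
    onset = consonant + prenuclear glide, rhyme = nucleus + coda."""
    onset, i = [], 0
    if segments[i] not in "aeiou" and segments[i] not in "jw":
        onset.append(segments[i]); i += 1
    if i < len(segments) and segments[i] in "jw":   # the glide joins the onset
        onset.append(segments[i]); i += 1
    return {"onset": onset, "rhyme": segments[i:]}

seg = ["t", "j", "a", "n"]   # C G V X
print(parse_if(seg))   # initial 't', final ['j', 'a', 'n']
print(parse_or(seg))   # onset ['t', 'j'], rhyme ['a', 'n']
```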
The Chinese traditional IF syllable model is problematic in itself. First, the final is ternary branching, which is not compatible with the binary principle in contemporary linguistics. Second, the nucleus+coda, as the poetic rhyming unit, is not structured as a constituent. Accordingly, the question arises of whether Chinese syllables can be analyzed in the OR model.
Many attempts have been made to analyze the Chinese prenuclear glide in the light of current phonological theories, particularly in the OR model, based on phonetic and phonological data on Chinese. Some such studies have proposed that the prenuclear glide occupies the second position in the onset. Others have proposed that the glide is part of the nucleus. Yet others regard the glide as a secondary articulation of the onset consonant, while still others think of the glide as an independent branch directly linking to the syllable node. Also, some have proposed an IF model with initial for shengmu and final for yunmu, where the final branches binarily into G(lide) and R(hyme), the latter consisting of N(ucleus) and C(oda). What is more, some have put forward a universal X-bar model of the syllable to replace the OR model, based on a syntactic X-bar structure. So far, no finding has conclusively settled the question of Chinese syllable structure.
Moreover, the syllable is the cross-linguistic domain for phonotactics. The number of syllables in Chinese is very much smaller than in many other languages, mainly because of the complicated phonotactics of the language, which strictly govern the segmental relations within CGVX. In the X-bar syllable structure, the Chinese phonotactic constraints which configure segmental relations in the syllable domain mirror the theta rules which capture the configurational relations between specifier and head, and between head and complement, in syntax. On the whole, analysis of the complexities of the Chinese syllable will shed light on the cross-linguistic representation of syllable structure, making a significant contribution to phonological typology in general.
Article
Clinical Linguistics
Louise Cummings
Clinical linguistics is the branch of linguistics that applies linguistic concepts and theories to the study of language disorders. As the name suggests, clinical linguistics is a dual-facing discipline. Although the conceptual roots of this field are in linguistics, its domain of application is the vast array of clinical disorders that may compromise the use and understanding of language. Both dimensions of clinical linguistics can be addressed through an examination of specific linguistic deficits in individuals with neurodevelopmental disorders, craniofacial anomalies, adult-onset neurological impairments, psychiatric disorders, and neurodegenerative disorders. Clinical linguists are interested in the full range of linguistic deficits in these conditions, including phonetic deficits of children with cleft lip and palate, morphosyntactic errors in children with specific language impairment, and pragmatic language impairments in adults with schizophrenia.
Like many applied disciplines in linguistics, clinical linguistics sits at the intersection of a number of areas. The relationships of clinical linguistics to the study of communication disorders and to speech-language pathology (speech and language therapy in the United Kingdom) are two particularly important points of intersection. Speech-language pathology is the area of clinical practice that assesses and treats children and adults with communication disorders. All language disorders restrict an individual’s ability to communicate freely with others in a range of contexts and settings. So language disorders are first and foremost communication disorders. To understand language disorders, it is useful to think of them in terms of points of breakdown on a communication cycle that tracks the progress of a linguistic utterance from its conception in the mind of a speaker to its comprehension by a hearer. This cycle permits the introduction of a number of important distinctions in language pathology, such as the distinction between a receptive and an expressive language disorder, and between a developmental and an acquired language disorder. The cycle is also a useful model with which to conceptualize a range of communication disorders other than language disorders. These other disorders, which include hearing, voice, and fluency disorders, are also relevant to clinical linguistics.
Clinical linguistics draws on the conceptual resources of the full range of linguistic disciplines to describe and explain language disorders. These disciplines include phonetics, phonology, morphology, syntax, semantics, pragmatics, and discourse. Each of these linguistic disciplines contributes concepts and theories that can shed light on the nature of language disorder. A wide range of tools and approaches are used by clinical linguists and speech-language pathologists to assess, diagnose, and treat language disorders. They include the use of standardized and norm-referenced tests, communication checklists and profiles (some administered by clinicians, others by parents, teachers, and caregivers), and qualitative methods such as conversation analysis and discourse analysis. Finally, clinical linguists can contribute to debates about the nosology of language disorders. In order to do so, however, they must have an understanding of the place of language disorders in internationally recognized classification systems such as the 2013 Diagnostic and Statistical Manual of Mental Disorders (DSM-5) of the American Psychiatric Association.
Article
Computational Phonology
Jane Chandlee and Jeffrey Heinz
Computational phonology studies the nature of the computations necessary and sufficient for characterizing phonological knowledge. As a field it is informed by the theories of computation and phonology.
The computational nature of phonological knowledge is important because at a fundamental level it is about the psychological nature of memory as it pertains to phonological knowledge. Different types of phonological knowledge can be characterized as computational problems, and the solutions to these problems reveal their computational nature. In contrast to syntactic knowledge, there is clear evidence that phonological knowledge is computationally bounded to the so-called regular classes of sets and relations. These classes have multiple mathematical characterizations in terms of logic, automata, and algebra with significant implications for the nature of memory. In fact, there is evidence that phonological knowledge is bounded by particular subregular classes, with more restrictive logical, automata-theoretic, and algebraic characterizations, and thus by weaker models of memory.
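A minimal sketch of what such a computational bound can look like in practice (an illustration of a strictly 2-local phonotactic grammar over a made-up constraint set, not an analysis of any particular language): the grammar is just a finite set of banned adjacent pairs, and checking a word requires only a single left-to-right pass with a one-symbol memory window.

```python
def is_well_formed(word, banned_bigrams):
    """Check a word against a strictly 2-local phonotactic grammar:
    the word is well formed iff none of its adjacent symbol pairs,
    including the word edges marked '#', is in the banned set.
    The fixed scanning window is one face of the restricted memory
    associated with this subregular class.
    """
    padded = ["#"] + list(word) + ["#"]
    return all((a, b) not in banned_bigrams
               for a, b in zip(padded, padded[1:]))

# Hypothetical constraint set: no word-final 'b', no 'nb' cluster
banned = {("b", "#"), ("n", "b")}
print(is_well_formed("banba", banned))  # False: contains the banned pair ('n', 'b')
print(is_well_formed("bana", banned))   # True
```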
Article
Connectionism in Linguistic Theory
Xiaowei Zhao
Connectionism is an important theoretical framework for the study of human cognition and behavior. Also known as Parallel Distributed Processing (PDP) or Artificial Neural Networks (ANN), connectionism advocates that learning, representation, and processing of information in the mind are parallel, distributed, and interactive in nature. It argues for the emergence of human cognition as the outcome of large networks of interactive processing units operating simultaneously. Inspired by findings from neural science and artificial intelligence, connectionism is a powerful computational tool, and it has had a profound impact on many areas of research, including linguistics. Since the beginning of connectionism, many connectionist models have been developed to account for a wide range of important linguistic phenomena observed in monolingual research, such as speech perception, speech production, semantic representation, and early lexical development in children. Recently, the application of connectionism to bilingual research has also gathered momentum. Connectionist models are often precise in the specification of modeling parameters and flexible in the manipulation of relevant variables in the model to address relevant theoretical questions; they can therefore provide significant advantages in testing mechanisms underlying language processes.
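A minimal sketch of the kind of network such models build on (a generic two-layer perceptron trained by backpropagation on an invented toy mapping; it is not any published connectionist model of a linguistic phenomenon):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: map four one-hot input "forms" to two output "classes".
# Purely illustrative; real connectionist models of language are far larger
# and use distributed phonological or semantic codes.
X = np.eye(4)
Y = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)

# One hidden layer of three logistic units
W1 = rng.normal(scale=0.5, size=(4, 3))
W2 = rng.normal(scale=0.5, size=(3, 2))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(2000):
    h = sigmoid(X @ W1)            # hidden activations
    out = sigmoid(h @ W2)          # output activations
    err = out - Y                  # prediction error
    # Backpropagate the error and update both weight matrices
    d_out = err * out * (1 - out)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    W1 -= lr * (X.T @ d_hid)

print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))  # approximates the target pattern Y
```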
Article
Consonant Harmony
Gunnar Hansson
The term consonant harmony refers to a class of systematic sound patterns in which consonants interact in some assimilatory way even though they are not adjacent to each other in the word. Such long-distance assimilation can sometimes hold across a significant stretch of intervening vowels and consonants, as in Samala (Ineseño Chumash) /s-am-net-in-waʃ/ → [ʃamnetiniwaʃ] “they did it to you”, where the alveolar sibilant /s‑/ of the 3.sbj prefix assimilates to the postalveolar sibilant /ʃ/ of the past suffix /‑waʃ/ across several intervening syllables that contain a variety of non-sibilant consonants. While consonant harmony most frequently involves coronal-specific contrasts, as in the Samala case, there are numerous cases of assimilation in other phonological properties, such as laryngeal features, nasality, secondary articulation, and even constriction degree. Not all cases of consonant harmony result in overt alternations, such as the [s] ∼ [ʃ] alternation in the Samala 3.sbj prefix. Sometimes the harmony is merely a phonotactic restriction on the shape of morphemes (roots) within the lexicon.
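A toy sketch of a Samala-style pattern (a deliberate simplification: it operates on transcribed strings, assumes the rightmost sibilant controls all others, and ignores the morphology and every other detail of the language):

```python
def sibilant_harmony(segments):
    """Enforce regressive sibilant harmony on a list of segments:
    every /s/ or /ʃ/ is rewritten to match the rightmost sibilant in the
    word, regardless of how much material intervenes. A toy illustration
    of a long-distance (non-adjacent) assimilation.
    """
    sibilants = {"s", "ʃ"}
    rightmost = next((seg for seg in reversed(segments) if seg in sibilants), None)
    if rightmost is None:
        return segments
    return [rightmost if seg in sibilants else seg for seg in segments]

word = list("samnetiniwaʃ")              # cf. the Samala example above
print("".join(sibilant_harmony(word)))   # -> 'ʃamnetiniwaʃ'
```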
Consonant harmony tends to implicate only some group (natural class) of consonants that already share a number of features, and are hence relatively similar, while ignoring less similar consonants. The distance between the potentially interacting consonants can also play a role. For example, in many cases assimilation is limited to relatively short-distance ‘transvocalic’ contexts (…CVC…), though the interpretation of such locality restrictions remains a matter of debate. Consonants that do not directly participate in the harmony (as triggers or undergoers of assimilation) are typically neutral and transparent, allowing the assimilating property to be propagated across them. However, this is not universally true; in recent years several cases have come to light in which certain segments can act as blockers when they intervene between a potential trigger-target pair.
The main significance of consonant harmony for linguistic theory lies in its apparently non-local character and the challenges that this poses for theories of phonological representations and processes, as well as for formal models of phonological learning. Along with other types of long-distance dependencies in segmental phonology (e.g., long-distance dissimilation, and vowel harmony systems with one or more transparent vowels), sound patterns of consonant harmony have contributed to the development of many theoretical constructs, such as autosegmental (nonlinear) representations, feature geometry, underspecification, feature spreading, strict locality (vs. ‘gapped’ representations), parametrized visibility, agreement constraints, and surface correspondence relations. The formal analysis of long-distance assimilation (and dissimilation) remains a rich and vibrant area of theoretical research. The empirical base for such theoretical inquiry also continues to be expanded. On the one hand, previously undocumented cases (or new, surprising details of known cases) continue to be added to the corpus of attested consonant harmony patterns. On the other hand, artificial phonology learning experiments allow the properties of typologically rare or unattested patterns to be explored in a controlled laboratory setting.