Article
Autosegmental Phonology
William R. Leben
Autosegments were introduced by John Goldsmith in his 1976 M.I.T. dissertation to represent tone and other suprasegmental phenomena. Goldsmith’s intuition, embodied in the term he created, was that autosegments constituted an independent, conceptually equal tier of phonological representation, with both tiers realized simultaneously like the separate voices in a musical score.
The analysis of suprasegmentals came late to generative phonology, even though it had been tackled in American structuralism with the long components of Harris’s 1944 article, “Simultaneous components in phonology,” and had been a particular focus of Firthian prosodic analysis. The standard version of generative phonology of the era (Chomsky and Halle’s The Sound Pattern of English) made no special provision for phenomena that earlier traditions had labeled suprasegmental or prosodic.
An early sign that tones required a separate tier of representation was the phenomenon of tonal stability. In many tone languages, when vowels are lost historically or synchronically, their tones remain. The behavior of contour tones in many languages also falls into place when the contours are broken down into sequences of level tones on an independent level of representation. The autosegmental framework captured this naturally, since a sequence of elements on one tier can be connected to a single element on another. But the single most compelling aspect of the early autosegmental model was its natural account of tone spreading, a very common process that rules of any sort captured only awkwardly. Goldsmith’s autosegmental solution was the Well-Formedness Condition, requiring, among other things, that every tone on the tonal tier be associated with some segment on the segmental tier, and vice versa. Tones thus spread more or less automatically to segments lacking them. The Well-Formedness Condition, at the very core of the autosegmental framework, was a rare early example of a constraint, posited nearly two decades before Optimality Theory.
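To make these mechanics concrete, the following is a minimal sketch in Python, not drawn from Goldsmith’s or Leben’s own formalism: it assumes a toy representation in which the tonal and segmental tiers are simple lists and association lines are index pairs, and, in the spirit of the Well-Formedness Condition, it spreads the rightmost linked tone onto any vowels left toneless after one-to-one, left-to-right association.

```python
# Toy autosegmental representation (an illustrative assumption, not Goldsmith's notation):
# two tiers as lists, association lines as (tone_index, vowel_index) pairs.

def associate(tones, vowels):
    """One-to-one, left-to-right association of tones with vowels."""
    return [(i, i) for i in range(min(len(tones), len(vowels)))]

def spread(tones, vowels, links):
    """Spread the rightmost associated tone onto any toneless vowels,
    so that every vowel ends up linked to some tone."""
    if not links:
        return links
    linked_vowels = {v for _, v in links}
    last_tone = max(t for t, _ in links)
    for v in range(len(vowels)):
        if v not in linked_vowels:
            links.append((last_tone, v))
    return sorted(links, key=lambda pair: pair[1])

tones = ["H", "L"]                 # tonal tier
vowels = ["a", "i", "u", "e"]      # segmental tier (vowels only, for simplicity)
for t, v in spread(tones, vowels, associate(tones, vowels)):
    print(f"{vowels[v]} -- {tones[t]}")
# a -- H, i -- L, u -- L, e -- L : the final L spreads to the toneless vowels
```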
One-to-many associations and spreading onto adjacent elements are characteristic of tone but not confined to it. Similar behaviors are widespread in long-distance phenomena, including intonation, vowel harmony, and nasal prosodies, and, more locally, in partial or full assimilation between adjacent segments.
The early autosegmental notion of tiers of representation that were distinct but conceptually equal soon gave way to a model with one basic tier connected to tiers for particular kinds of articulation, including tone and intonation, nasality, vowel features, and others. This has led to hierarchical representations of phonological features in current models of feature geometry, replacing the unordered distinctive feature matrices of early generative phonology. Autosegmental representations and processes also provide a means of representing non-concatenative morphology, notably the complex interweaving of roots and patterns in Semitic languages.
Later work modified many of the key properties of the autosegmental model. Optimality Theory has led to a radical rethinking of autosegmental mapping, delinking, and spreading as they were formulated under the earlier derivational paradigm.
Article
Morpho-Phonological Processes in Korean
Jongho Jun
How to properly analyze morpho-phonological processes has been an ongoing issue within generative linguistics. Morpho-phonological processes typically have exceptions, but they are nonetheless often productive. Such productive but exceptionful processes are difficult to analyze, since grammatical rules or constraints are normally invoked in the analysis of a productive pattern, whereas exceptions undermine the validity of those rules and constraints. In addition, the productivity of a morpho-phonological process may be gradient, possibly reflecting the relative frequency of the relevant pattern in the lexicon. Simply listing exceptions in the lexicon as suppletive forms would not be sufficient to capture such gradient productivity. It is therefore necessary to posit grammatical rules or constraints even for exceptionful processes as long as they are at least in part productive. Moreover, productivity can be correctly estimated only when the domain of rule application is correctly identified. Consequently, a morpho-phonological process cannot be properly analyzed unless we possess both a correct description of its application conditions and appropriate stochastic grammatical mechanisms to capture its productivity.
The same issues arise in the analysis of morpho-phonological processes in Korean, in particular n-insertion, sai-siot, and vowel harmony. These morpho-phonological processes have many exceptions and variations, which make them look quite irregular and unpredictable. Nevertheless, they have at least a certain degree of productivity. Moreover, the variable application of each process is still systematic in that various factors (phonological, morphosyntactic, sociolinguistic, and processing-related) contribute to the overall probability of rule application. Crucially, grammatical rules and constraints, which have been proposed within generative linguistics to analyze categorical and exceptionless phenomena, may form an essential part of the analysis of the morpho-phonological processes in Korean.
For an optimal analysis of each of the morpho-phonological processes in Korean, the correct conditions and domains for its application need to be identified first, and its exact productivity can then be measured. Finally, the appropriate stochastic grammatical mechanisms need to be found or developed in order to capture the measured productivity.
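As a purely illustrative sketch (not Jun’s own analysis), the gradient productivity invoked above can be estimated as the proportion of lexical items that meet a process’s application conditions and actually undergo it; such a proportion could then feed a stochastic rule or weighted constraint.

```python
# Hypothetical illustration: estimate P(process applies | conditions met)
# from a toy lexicon. The predicates and the lexicon are assumptions made
# up for this sketch, not data from the article.

def productivity(lexicon, meets_conditions, undergoes_process):
    eligible = [item for item in lexicon if meets_conditions(item)]
    if not eligible:
        return 0.0
    applied = [item for item in eligible if undergoes_process(item)]
    return len(applied) / len(eligible)

# (form, eligible, applies) triples standing in for, e.g., compounds that
# do or do not show n-insertion.
toy_lexicon = [("A", True, True), ("B", True, True),
               ("C", True, False), ("D", False, False)]
p = productivity(toy_lexicon,
                 meets_conditions=lambda x: x[1],
                 undergoes_process=lambda x: x[2])
print(f"Estimated productivity: {p:.2f}")   # 0.67
```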
Article
Zero Morphemes
Eystein Dahl and Antonio Fábregas
Zero or null morphology refers to morphological units that are devoid of phonological content. Whether such entities should be postulated is one of the most controversial issues in morphological theory, with disagreements about how the concept should be delimited, what would count as an instance of zero morphology within a particular theory, and whether such objects should be allowed even as mere analytical instruments.
With respect to the first problem, given that zero morphology is a hypothesis that comes from certain analyses, delimiting what counts as a zero morpheme is not a trivial matter. The concept must be carefully differentiated from others that intuitively also involve situations where there is no overt morphological marking: cumulative morphology, phonological deletion, etc.
As for the second issue, what counts as null can also depend on the specific theory within which the proposal is made. In the strict sense, zero morphology involves a complete morphosyntactic representation that is associated with zero phonological content, but there are other notions of zero morphology that differ from the one discussed here, such as the absolute absence of morphological expression, in addition to specific theory-internal interpretations of what counts as null. Thus, it is also important to consider the different ways in which something can be morphologically silent.
Finally, with respect to the third issue, arguments are made for and against zero morphology, notably from the perspectives of falsifiability, acquisition, and psycholinguistics. Of particular impact is the question of which properties a theory should have in order to block the possibility of zero morphology, and, conversely, which properties theories that accept zero morphology associate with null morphemes.
An important ingredient in this debate has to do with two empirical domains: zero derivation and paradigmatic uniformity. Ultimately, the plausibility of zero morphemes depends on whether theories that posit them account for these two empirical patterns better than theories that ban zero morphology.
Article
Okinawan Language
Shinsho Miyara
Within the Ryukyuan branch of the Japonic family of languages, present-day Okinawan retains numerous regional variants that have evolved over more than a thousand years in the Ryukyuan Archipelago. Okinawan is one of the six Ryukyuan languages that UNESCO has identified as endangered. One theoretically fascinating feature is that there is substantial evidence for establishing a high central phonemic vowel in Okinawan, even though there is currently no overt surface [ï]. Moreover, the word-initial glottal stop [ʔ] in Okinawan is more salient than its Japanese counterpart when followed by vowels, which supports the recognition that all Okinawan words are consonant-initial. Except for a few particles, all Okinawan words are composed of two or more morae. Suffixation or vowel lengthening (on nouns, verbs, and adjectives) provides the means for signifying persons as well as things related to human consumption or production. Every finite verb in Okinawan terminates in a mood element. Okinawan exhibits a complex interplay of mood or negative elements and focusing particles. Evidentiality is also realized as an obligatory verbal suffix.
Article
Syntax–Phonology Interface
Sónia Frota and Marina Vigário
The syntax–phonology interface refers to the way syntax and phonology are interconnected. Although syntax and phonology constitute different language domains, it seems undisputed that they relate to each other in nontrivial ways. There are different theories about the syntax–phonology interface. They differ in how far each domain is seen as relevant to generalizations in the other domain, and in the types of information from each domain that are available to the other.
Some theories see the interface as unlimited in the direction and types of syntax–phonology connections, with syntax impacting on phonology and phonology impacting on syntax. Other theories constrain mutual interaction to a set of specific syntactic phenomena (i.e., discourse-related) that may be influenced by a limited set of phonological phenomena (namely, heaviness and rhythm). In most theories, there is an asymmetrical relationship: specific types of syntactic information are available to phonology, whereas syntax is phonology-free.
The role that syntax plays in phonology, as well as the types of syntactic information that are relevant to phonology, is also a matter of debate. At one extreme, Direct Reference Theories claim that phonological phenomena, such as external sandhi processes, refer directly to syntactic information. However, approaches arguing for a direct influence of syntax differ on the types of syntactic information needed to account for phonological phenomena, from syntactic heads and structural configurations (like c-command and government) to feature checking relationships and phase units. The precise syntactic information that is relevant to phonology may depend on (the particular version of) the theory of syntax assumed to account for syntax–phonology mapping. At the other extreme, Prosodic Hierarchy Theories propose that syntactic and phonological representations are fundamentally distinct and that the output of the syntax–phonology interface is prosodic structure. Under this view, phonological phenomena refer to the phonological domains defined in prosodic structure. The structure of phonological domains is built from the interaction of a limited set of syntactic information with phonological principles related to constituent size, weight, and eurhythmic effects, among others. The kind of syntactic information used in the computation of prosodic structure distinguishes between different Prosodic Hierarchy Theories: the relation-based approach makes reference to notions like head-complement, modifier-head relations, and syntactic branching, while the end-based approach focuses on edges of syntactic heads and maximal projections. Common to both approaches is the distinction between lexical and functional categories, with the latter being invisible to the syntax–phonology mapping. Besides accounting for external sandhi phenomena, prosodic structure interacts with other phonological representations, such as metrical structure and intonational structure.
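For illustration only (not Frota and Vigário’s formalization), the end-based idea can be sketched as inserting a phonological-phrase boundary at the right edge of each lexical maximal projection, with functional words simply carried along; the flat sentence representation below is an assumption made for the sketch.

```python
# Toy end-based mapping: a phonological-phrase break at the right edge of
# each lexical maximal projection (XP). The (word, ends_lexical_XP) pairs
# stand in for a syntactic parse.
sentence = [("the", False), ("dog", True), ("chased", False),
            ("a", False), ("cat", True)]

phrases, current = [], []
for word, ends_xp in sentence:
    current.append(word)
    if ends_xp:                  # right edge of a lexical XP
        phrases.append(current)
        current = []
if current:                      # any leftover material joins a final phrase
    phrases.append(current)
print(phrases)   # [['the', 'dog'], ['chased', 'a', 'cat']]
```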
As this theoretical diversity shows, the study of the syntax–phonology interface raises many fundamental questions. A systematic comparison among proposals with reference to empirical evidence is lacking. In addition, findings from language acquisition and development and from language processing constitute novel sources of evidence that need to be taken into account. The syntax–phonology interface thus promises to remain a challenging research field in the years to come.
Article
Acceptability Judgments
James Myers
Acceptability judgments are reports of a speaker’s or signer’s subjective sense of the well-formedness, nativeness, or naturalness of (novel) linguistic forms. Their value lies in providing data about the nature of the human capacity to generalize beyond linguistic forms previously encountered in language comprehension. For this reason, acceptability judgments are often also called grammaticality judgments (particularly in syntax), although unlike the theory-dependent notion of grammaticality, acceptability is accessible to consciousness. While acceptability judgments have been used to test grammatical claims since ancient times, they became particularly prominent with the birth of generative syntax. Today they are also widely used in other linguistic schools (e.g., cognitive linguistics) and other linguistic domains (pragmatics, semantics, morphology, and phonology), and have been applied in a typologically diverse range of languages. As psychological responses to linguistic stimuli, acceptability judgments are experimental data. Their value thus depends on the validity of the experimental procedures, which, in their traditional version (where theoreticians elicit judgments from themselves or a few colleagues), have been criticized as overly informal and biased. Traditional responses to such criticisms have been supplemented in recent years by laboratory experiments that use formal psycholinguistic methods to collect and quantify judgments from nonlinguists under controlled conditions. Such formal experiments have played an increasingly influential role in theoretical linguistics, being used to justify subtle judgment claims or new grammatical models that incorporate gradience or lexical influences. They have also been used to probe the cognitive processes giving rise to the sense of acceptability itself, the central finding being that acceptability reflects processing ease. Exploring what this finding means will require not only further empirical work on the acceptability judgment process, but also theoretical work on the nature of grammar.
Article
Early Modern English
Terttu Nevalainen
In the Early Modern English period (1500–1700), steps were taken toward Standard English, and this was also the time when Shakespeare wrote, but these perspectives are only part of the bigger picture. This chapter looks at Early Modern English as a variable and changing language not unlike English today. Standardization is found particularly in spelling, and new vocabulary was created as a result of the spread of English into various professional and occupational specializations. New research using digital corpora, dictionaries, and databases reveals the gradual nature of these processes. Ongoing developments were no less gradual in pronunciation, with processes such as the Great Vowel Shift, or in grammar, where many changes resulted in new means of expression and greater transparency. Word order was also subject to gradual change, becoming more fixed over time.
Article
Noam Chomsky
Howard Lasnik and Terje Lohndal
Noam Avram Chomsky is one of the central figures of modern linguistics. He was born in Philadelphia, Pennsylvania, on December 7, 1928. In 1945, Chomsky enrolled at the University of Pennsylvania, where he met Zellig Harris (1909–1992), a leading Structuralist, through their shared political interests. His first encounter with Harris’s work came when he proofread Harris’s book Methods in Structural Linguistics, published in 1951 but already completed in 1947. Chomsky grew dissatisfied with Structuralism and started to develop his own major idea that syntax and phonology are in part matters of abstract representations. This was soon combined with a psychobiological view of language as a unique part of the mind/brain.
Chomsky spent 1951–1955 as a Junior Fellow of the Harvard Society of Fellows, after which he joined the faculty at MIT under the sponsorship of Morris Halle. He was promoted to full professor of Foreign Languages and Linguistics in 1961, appointed Ferrari Ward Professor of Linguistics in 1966, and Institute Professor in 1976, retiring in 2002. Chomsky is still remarkably active, publishing, teaching, and lecturing across the world.
In 1967, both the University of Chicago and the University of London awarded him honorary degrees, and since then he has been the recipient of scores of honors and awards. In 1988, he was awarded the Kyoto Prize in basic science, created in 1984 in order to recognize work in areas not included among the Nobel Prizes. These honors are all a testimony to Chomsky’s influence and impact in linguistics and cognitive science more generally over the past 60 years. His contributions have of course also been heavily criticized, but nevertheless remain crucial to investigations of language.
Chomsky’s work has always centered on the same basic questions and assumptions, especially the idea that human language is an inherent property of the human mind. The technical part of his research has been continuously revised and updated. In the 1960s, phrase structure grammars were developed into what is known as the Standard Theory, which was transformed into the Extended Standard Theory and X-bar theory in the 1970s. A major transition occurred at the end of the 1970s, when the Principles and Parameters Theory emerged. This theory provides a new understanding of the human language faculty, focusing on the invariant principles common to all human languages and the points of variation known as parameters. Its recent variant, the Minimalist Program, pushes the approach even further in asking why grammars are structured the way they are.
Article
Old and Middle Japanese
Bjarke Frellesvig
Old and Middle Japanese are the pre-modern periods of the attested history of the Japanese language. Old Japanese (OJ) is largely the language of the 8th century, with a modest but still significant number of written sources, most of which are poetry. Middle Japanese is divided into two distinct periods, Early Middle Japanese (EMJ, 800–1200) and Late Middle Japanese (LMJ, 1200–1600). EMJ saw most of the significant sound changes that took place in the language, as well as profound influence from Chinese, whereas most grammatical changes took place between the end of EMJ and the end of LMJ. By the end of LMJ, the Japanese language had reached a form that is not significantly different from present-day Japanese.
OJ phonology was simple, in terms of both phoneme inventory and syllable structure, with a total of only 88 different syllables. In EMJ, the language became quantity sensitive, with the introduction of a distinction between long and short syllables. OJ and EMJ had obligatory verb inflection for a number of modal and syntactic categories (including an important distinction between a conclusive and an (ad)nominalizing form), whereas the expression of aspect and tense was optional. Through late EMJ and LMJ this system changed completely into one without nominalizing inflection but with obligatory inflection for tense.
The morphological pronominal system of OJ was lost in EMJ, which developed a range of lexical and lexically based terms of speaker and hearer reference. OJ had a two-way (speaker–nonspeaker) demonstrative system, which in EMJ was replaced by a three-way (proximal–mesial–distal) system.
OJ had a system of differential object marking, based on specificity, as well as a word order rule that placed accusative-marked objects before most subjects; both of these features were lost in EMJ. OJ and EMJ had genitive subject marking in subordinate clauses and in focused, interrogative, and exclamative main clauses, but no case marking of subjects in declarative, optative, or imperative main clauses and no nominative marker. Through LMJ, genitive subject marking was gradually circumscribed, and a nominative case particle was acquired that could mark subjects in all types of clauses.
OJ had a well-developed system of complex predicates, in which two verbs jointly formed the predicate of a single clause; this system is the source of the LMJ and NJ (Modern Japanese) verb–verb compound complex predicates. OJ and EMJ also had mono-clausal focus constructions that were functionally similar to clefts in English; these constructions were lost in LMJ.
Article
Clinical Linguistics
Louise Cummings
Clinical linguistics is the branch of linguistics that applies linguistic concepts and theories to the study of language disorders. As the name suggests, clinical linguistics is a dual-facing discipline. Although the conceptual roots of this field are in linguistics, its domain of application is the vast array of clinical disorders that may compromise the use and understanding of language. Both dimensions of clinical linguistics can be addressed through an examination of specific linguistic deficits in individuals with neurodevelopmental disorders, craniofacial anomalies, adult-onset neurological impairments, psychiatric disorders, and neurodegenerative disorders. Clinical linguists are interested in the full range of linguistic deficits in these conditions, including phonetic deficits of children with cleft lip and palate, morphosyntactic errors in children with specific language impairment, and pragmatic language impairments in adults with schizophrenia.
Like many applied disciplines in linguistics, clinical linguistics sits at the intersection of a number of areas. Its relationships to the study of communication disorders and to speech-language pathology (speech and language therapy in the United Kingdom) are two particularly important points of intersection. Speech-language pathology is the area of clinical practice that assesses and treats children and adults with communication disorders. All language disorders restrict an individual’s ability to communicate freely with others in a range of contexts and settings. Language disorders are thus first and foremost communication disorders. To understand language disorders, it is useful to think of them in terms of points of breakdown on a communication cycle that tracks the progress of a linguistic utterance from its conception in the mind of a speaker to its comprehension by a hearer. This cycle permits the introduction of a number of important distinctions in language pathology, such as the distinction between a receptive and an expressive language disorder, and between a developmental and an acquired language disorder. The cycle is also a useful model with which to conceptualize a range of communication disorders other than language disorders. These other disorders, which include hearing, voice, and fluency disorders, are also relevant to clinical linguistics.
Clinical linguistics draws on the conceptual resources of the full range of linguistic disciplines to describe and explain language disorders. These disciplines include phonetics, phonology, morphology, syntax, semantics, pragmatics, and discourse. Each of these linguistic disciplines contributes concepts and theories that can shed light on the nature of language disorder. A wide range of tools and approaches are used by clinical linguists and speech-language pathologists to assess, diagnose, and treat language disorders. They include the use of standardized and norm-referenced tests, communication checklists and profiles (some administered by clinicians, others by parents, teachers, and caregivers), and qualitative methods such as conversation analysis and discourse analysis. Finally, clinical linguists can contribute to debates about the nosology of language disorders. In order to do so, however, they must have an understanding of the place of language disorders in internationally recognized classification systems such as the 2013 Diagnostic and Statistical Manual of Mental Disorders (DSM-5) of the American Psychiatric Association.