Paradigm Function Morphology (PFM) is an evolving approach to modeling morphological systems in a precise and enlightening way. The fundamental insight of PFM is that words have both content and form and that in the context of an appropriately organized lexicon, a language’s morphology deduces a complex word’s form from its content. PFM is therefore a realizational theory: a language’s grammar and lexicon are assumed to provide a precise characterization of a word’s content, from which the language’s morphology then projects the corresponding form. Morphemes per se have no role in this theory; by contrast, paradigms have the essential role of defining the content that is realized by a language’s morphology. At the core of PFM is the notion of a paradigm function, a formal representation of the relation between a word’s content and its form; the definition of a language’s paradigm function is therefore the definition of its inflectional morphology. Recent elaborations of this idea assume a distinction between content paradigms and form paradigms, which makes it possible to account for a fact that is otherwise irreconcilable with current morphological theory—the fact that the set of morphosyntactic properties that determines a word’s syntax and semantics often differs from the set of properties (some of them morphomic) that determines a word’s inflectional form. Another recent innovation is the assumption that affixes and rules of morphology may be complex in the sense that they may be factored into smaller affixes and rules; the evidence favoring this assumption is manifold.
Petar Milin and James P. Blevins
Studies of the structure and function of paradigms are as old as the Western grammatical tradition. The central role accorded to paradigms in traditional approaches largely reflects the fact that paradigms exhibit systematic patterns of interdependence that facilitate processes of analogical generalization. The recent resurgence of interest in word-based models of morphological processing and morphological structure more generally has provoked a renewed interest in paradigmatic dimensions of linguistic structure. Current methods for operationalizing paradigmatic relations and determining the behavioral correlates of these relations extend paradigmatic models beyond their traditional boundaries. The integrated perspective that emerges from this work is one in which variation at the level of individual words is not meaningful in isolation, but rather guides the association of words to paradigmatic contexts that play a role in their interpretation.
The category of Personal/Participant/Inhabitant derived nouns comprises a conglomeration of derived nouns that denote among others agents, instruments, patients/themes, inhabitants, and followers of a person. Based on the thematic relations between the derived noun and its base lexeme, Personal/Participant/Inhabitant nouns can be classified into two subclasses. The first subclass comprises derived nouns that are deverbal and carry thematic readings (e.g., driver). The second subclass consists of derived nouns with athematic readings (e.g., Marxist).
The examination of the category of Personal/Participant/Inhabitant nouns allows one to delve deeply into the study of multiplicity of meaning in word formation and the factors that bear on the readings of derived words. These factors range from the historical mechanisms that lead to multiplicity of meaning and the lexical-semantic properties of the bases that derived nouns are based on, to the syntactic context into which derived nouns occur, and the pragmatic-encyclopedic facets of both the base and the derived lexeme.
This paper provides an overview of polarity phenomena in human languages. There are three prominent paradigms of polarity items: negative polarity items (NPIs), positive polarity items (PPIs), and free choice items (FCIs). What they all have in common is their limited distribution: they cannot occur just anywhere, but only within the scope of a licenser, which for NPIs and FCIs is negation or, more broadly, a nonveridical operator. PPIs, conversely, must appear outside the scope of negation. The need to be in the scope of a licenser creates a semantic and syntactic dependency, as the polarity item must be c-commanded by the licenser at some syntactic level. Polarity, therefore, is a true interface phenomenon and raises the question of well-formedness that depends on both semantics and syntax.
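The c-command requirement can be made concrete over a toy constituency tree: a node c-commands its sister and everything the sister dominates. The sketch below (the tree encoding and the lexical items are invented for illustration, not drawn from the article) checks whether a licenser such as negation c-commands an NPI:

```python
# Toy constituency tree: a leaf is a string, an internal node a (left, right) pair.
# Definition used: a node c-commands its sister and everything the sister dominates.

def leaves(node):
    """All terminal strings dominated by node."""
    if isinstance(node, str):
        return [node]
    left, right = node
    return leaves(left) + leaves(right)

def c_commands(tree, a, b):
    """True if leaf a c-commands leaf b: b is dominated by the sister of a."""
    if isinstance(tree, str):
        return False
    left, right = tree
    if left == a:
        return b in leaves(right)
    if right == a:
        return b in leaves(left)
    return c_commands(left, a, b) or c_commands(right, a, b)

# "John did not see anyone": *not* c-commands *anyone*, so the NPI is
# licensed; the NPI does not c-command the negation.
tree = ("John", ("did", ("not", ("see", "anyone"))))
```

A licensing check is then simply `c_commands(tree, licenser, npi)`; a fuller model would also restrict licensers to nonveridical operators.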
Nonveridical polarity contexts can be negative, but also non-monotonic such as modal contexts, questions, other non-assertive contexts (imperatives, subjunctives), generic and habitual sentences, and disjunction. Some NPIs and FCIs appear freely in these contexts in many languages, and some NPIs prefer negative contexts. Within negative licensers, we make a distinction between classically and minimally negative contexts. There are no NPIs that appear only in minimally negative contexts.
The distributions of NPIs and FCIs crosslinguistically can be understood in terms of general patterns, and there are individual differences due largely to the lexical semantic content of the polarity item paradigms. Three general patterns can be identified as possible lexical sources of polarity. The first is the presence of a dependent variable in the polarity item—a property characterizing NPIs and FCIs in many languages, including Greek, Mandarin, and Korean. Secondly, the polarity item may be scalar: English any and FCIs can be scalar, but Greek, Korean, and Mandarin NPIs are not. Finally, it has been proposed that NPIs can be exhaustive, but exhaustivity is hard to precisely identify in a non-stipulative way, and does not characterize all NPIs. NPIs that are not exhaustive tend to be referentially vague, which means that the speaker uses them only if she is unable to identify a specific referent for them.
Agustín Vicente and Ingrid L. Falkum
Polysemy is characterized as the phenomenon whereby a single word form is associated with two or several related senses. It is distinguished from monosemy, where one word form is associated with a single meaning, and homonymy, where a single word form is associated with two or several unrelated meanings. Although the distinctions between polysemy, monosemy, and homonymy may seem clear at an intuitive level, they have proven difficult to draw in practice.
Polysemy proliferates in natural language: Virtually every word is polysemous to some extent. Still, the phenomenon has been largely ignored in the mainstream linguistics literature and in related disciplines such as philosophy of language. However, polysemy is a topic of relevance to linguistic and philosophical debates regarding lexical meaning representation, compositional semantics, and the semantics–pragmatics divide.
Early accounts treated polysemy in terms of sense enumeration: each sense of a polysemous expression is represented individually in the lexicon, such that polysemy and homonymy are treated on a par. This approach has been strongly criticized on both theoretical and empirical grounds. Since at least the 1990s, most researchers converge on the hypothesis that the senses of at least many polysemous expressions derive from a single meaning representation, though the status of this representation is a matter of lively debate: Are the lexical representations of polysemous expressions informationally poor and underspecified with respect to their different senses? Or do they have to be informationally rich in order to store and be able to generate all these polysemous senses?
Alternatively, senses might be computed from a literal, primary meaning via semantic or pragmatic mechanisms such as coercion, modulation or ad hoc concept construction (including metaphorical and metonymic extension), mechanisms that apparently play a role also in explaining how polysemy arises and is implicated in lexical semantic change.
Daniel Schmidtke and Victor Kuperman
Lexical representations in an individual mind are not open to direct scrutiny. Thus, when theorizing about mental representations, researchers must rely on observable and measurable outcomes of language processing, that is, the perception, production, storage, access, and retrieval of lexical information. Morphological research pursues these questions utilizing the full arsenal of analytical tools and experimental techniques that are at the disposal of psycholinguistics. This article outlines the most popular approaches, and aims to provide, for each technique, a brief overview of its procedure in experimental practice. Additionally, the article describes the link between the processing effect(s) that the tool can elicit and the representational phenomena that it may shed light on. The article discusses methods of morphological research in the two major human linguistic faculties—production and comprehension—and provides a separate treatment of spoken, written, and sign language.
Scrambling is one of the most widely discussed and prominent factors affecting word order variation in Korean. Scrambling in Korean exhibits various syntactic and semantic properties that cannot be subsumed under the standard A/A'-movement. Clause-external scrambling as well as clause-internal scrambling in Korean show mixed A/A'-effects in a range of tests such as anaphor binding, weak crossover, Condition C, negative polarity item licensing, wh-licensing, and scopal interpretation. VP-internal scrambling, by contrast, is known to lack reconstruction effects, consistent with the claim that short scrambling is A-movement. Clausal scrambling, on the other hand, shows total reconstruction effects, unlike phrasal scrambling. The diverse properties of Korean scrambling have received extensive attention in the literature. Some studies argue that scrambling is a type of feature-driven A-movement with special reconstruction effects. Others argue that scrambling can be A-movement or A'-movement depending on the landing site. Yet others claim that scrambling is not standard A/A'-movement, but must be treated as cost-free movement with optional reconstruction effects. Each approach, however, faces non-trivial empirical and theoretical challenges, and further study is needed to understand the complex nature of scrambling. As the theory has developed within the Minimalist Program, a variety of proposals have also been advanced to capture properties of scrambling without resorting to A/A'-distinctions.
Scrambling in Korean applies optionally but not randomly. It may be blocked due to various factors in syntax and its interfaces in the grammar. Within syntax proper, scrambling obeys general constraints on movement (e.g., island conditions, the left branch condition, the coordinate structure condition, the proper binding condition, and the ban on string-vacuous movement). Various semantic and pragmatic factors (e.g., specificity, presuppositionality, topic, focus) also play a crucial role in the acceptability of sentences with scrambling. Moreover, current studies show that certain instances of scrambling are filtered out at the interface due to cyclic Spell-out and linearization, which strengthens the claim that scrambling is not a free option. Data from Korean pose important challenges to base-generation approaches to scrambling, and lend further credence to the view that scrambling is an instance of movement. The exact nature of scrambling in Korean—whether it is cost-free or feature-driven—must be further investigated in future research, however. The research on Korean scrambling points toward a general theory that covers obligatory A/A'-movement as well as optional displacement with mixed semantic effects in languages with free word order.
This survey article discusses two basic issues that semantic theories of questions face. The first is how to conceptualize and formally represent the semantic content of questions. This issue arises in particular because the standard truth-conditional notion of meaning, which has been fruitful in the analysis of declarative statements, is not applicable to questions. This is because questions are not naturally construed as being true or false. Instead, it has been proposed that the semantic content of a question must be characterized in terms of its answerhood or resolution conditions. This article surveys a number of theories which develop this basic idea in different ways, focusing on so-called proposition-set theories (alternative semantics, partition semantics, and inquisitive semantics).
The second issue that will be considered here concerns questions that are embedded within larger sentences. Within this domain, one important puzzle is why certain predicates can take both declarative and interrogative complements (e.g., Bill knows that Mary called / Bill knows who called), while others take only declarative complements (e.g., Bill thinks that Mary called / *Bill thinks who called) or only interrogative complements (e.g., Bill wonders who called / *Bill wonders that Mary called). We compare two general approaches that have been pursued in the literature. One assumes that declarative and interrogative complements differ in semantic type. On this approach, the fact that predicates like think do not take interrogative complements can be accounted for by assuming that such complements do not have the semantic type that think selects for. The other approach treats the two kinds of complement as having the same semantic type, and seeks to connect the selectional restrictions of predicates like think to other semantic properties (e.g., the fact that think is neg-raising).
The morpheme was the central notion in morphological theorizing in the 20th century. It has a very intuitive appeal as the indivisible and invariant unit of form and meaning, a minimal linguistic sign. Ideally, that would be all there is to build words and sentences from. But this ideal does not appear to be entirely adequate. On at least a (perhaps superficial) understanding of form as a series of phonemes, and of meaning as concepts and morphosyntactic feature sets, the form side and the meaning side of words are often not structured isomorphically. Different analytical reactions are possible to deal with the empirical challenges resulting from the various kinds of non-isomorphism between form and meaning. One prominent option is to reject the morpheme and to recognize conceptually larger units such as the word or the lexeme and its paradigm as the operands of morphological theory. This contrasts with various theoretical options maintaining the morpheme, terminologically or at least conceptually at some level. One such option is to maintain the morpheme as a minimal unit of form, relaxing the tension imposed by the meaning requirement. Another option is to maintain it as a minimal morphosyntactic unit, relaxing the requirements on the form side. The latter (and to a lesser extent also the former) has been understood in various profoundly different ways: association of one morpheme with several form variants, association of a morpheme with non-self-sufficient phonological units, or association of a morpheme with a formal process distinct from affixation. Variants of all of these possibilities have been entertained and have established distinct schools of thought. The overall architecture of the grammar, in particular the way that the morphology integrates with the syntax and the phonology, has become a driving force in the debate. If there are morpheme-sized units, are they pre-syntactic or post-syntactic units?
Is the association between meaning and phonological information pre-syntactic or post-syntactic? Do morpheme-sized pieces have a specific status in the syntax? Invoking some of the main issues involved, this article draws a profile of the debate, following the term morpheme on a by-and-large chronological path from the late 19th century to the 21st century.
Beata Moskal and Peter W. Smith
Headedness is a pervasive phenomenon throughout different components of the grammar, which fundamentally encodes an asymmetry between two or more items, such that one is in some sense more important than the other(s). In phonology for instance, the nucleus is the head of the syllable, and not the onset or the coda, whereas in syntax, the verb is the head of a verb phrase, rather than any complements or specifiers that it combines with. It makes sense, then, to question whether the notion of headedness applies to the morphology as well; specifically, do words—complex or simplex—have heads that determine the properties of the word as a whole? Intuitively it makes sense that words have heads: a noun that is derived from an adjective like redness can function only as a noun, and the presence of red in the structure does not confer on the whole form the ability to function as an adjective as well.
However, this question is a complex one for a variety of reasons. While it seems clear for some phenomena such as category determination that words have heads, there is a lot of evidence to suggest that the properties of complex words are not all derived from one morpheme, but rather that the features are gathered from potentially numerous morphemes within the same word. Furthermore, properties that characterize heads compared to dependents, particularly those based on syntactic behavior, do not unambiguously pick out a single element: the tests applied to morphology at times pick out affixes, and at times bases, as the head of the whole word.
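One influential formulation of how a single morpheme can determine a word's category is the Right-hand Head Rule (Williams, 1981), under which the rightmost categorized morpheme is the head and projects its category to the whole word. A minimal sketch, with an invented toy lexicon, of how redness comes out as a noun:

```python
# Sketch of the Right-hand Head Rule: the rightmost morpheme that bears a
# category is treated as head and projects its category to the whole word.
# The tiny lexicon below is illustrative only.

CATEGORY = {"red": "A", "happy": "A", "-ness": "N", "-ly": "Adv"}

def word_category(morphemes):
    """Category projected by the rightmost categorized morpheme, if any."""
    for m in reversed(morphemes):
        if m in CATEGORY:
            return CATEGORY[m]
    return None

# red + -ness -> a noun: the suffix, not the base, determines the category.
```

As the paragraph above notes, this all-from-one-head picture is an idealization: in many words, features appear to be gathered from several morphemes at once.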
Subtraction consists in shortening the shape of the word. It operates on morphological bases such as roots, stems, and words in word-formation and inflection. Cognitively, subtraction is the opposite of affixation: the latter adds meaning and form (an overt affix) to roots, stems, or words, while the former adds meaning through subtraction of form. As subtraction and affixation work at the same level of grammar (morphology), they sometimes compete for the expression of the same semantics in the same language; for example, the German counterpart of the pattern ‘science—scientist’ has both affixed derivations such as Physik ‘physics’—Physik-er ‘physicist’ and subtractive ones such as Astronom-ie ‘astronomy’—Astronom ‘astronomer’. Subtraction can delete phonemes and morphemes. In the case of phoneme deletion, it is usually the final phoneme of the morphological base that is deleted, and sometimes that phoneme coincides with a morpheme.
Some analyses of subtraction(-like shortenings) rely not on morphological units (roots, stems, morphological words, affixes) but on the phonological word, which sometimes results in alternative definitions of subtraction. Additionally, syntax-based theories of morphology that do not recognize a morphological component of grammar and operate only with additive syntactic rules claim that subtraction actually consists in the addition of defective phonological material that causes adjustments in phonology and leads to deletion of form on the surface. Other scholars postulate subtraction only if the deleted material does not coincide with an existing morpheme elsewhere in the language; if it does, they call the change backformation. There is also some controversy regarding what counts as a proper word-formation process and whether what is derived by subtraction is true word-formation or just marginal or extragrammatical morphology; that is, the question is whether shortenings such as hypocoristics and clippings should be treated on a par with derivations such as the science—scientist pattern.
Finally, research in subtraction also faces terminology issues in the sense that in the literature different labels have been used to refer to subtraction(-like) formations: minus feature, minus formation, disfixation, subtractive morph, (subtractive) truncation, backformation, or just shortening.
Ur Shlonsky and Giuliano Bocci
Syntactic cartography emerged in the 1990s as a result of the growing consensus in the field about the central role played by functional elements and by morphosyntactic features in syntax. The declared aim of this research direction is to draw maps of the structures of syntactic constituents, characterize their functional structure, and study the array and hierarchy of syntactically relevant features. Syntactic cartography has made significant empirical discoveries, and its methodology has been very influential in research in comparative syntax and morphosyntax. A central theme in current cartographic research concerns the source of the emerging featural/structural hierarchies. The idea that the functional hierarchy is not a primitive of Universal Grammar but derives from other principles does not undermine the scientific relevance of the study of the cartographic structures. On the contrary, the cartographic research aims at providing empirical evidence that may help answer these questions about the source of the hierarchy and shed light on how the computational principles and requirements of the interface with sound and meaning interact.
A root is a fundamental minimal unit in words. Some languages do not allow their roots to appear on their own, as in the Semitic languages where roots consist of consonant clusters that become stems or words by virtue of vowel insertion. Other languages appear to allow roots to surface without any additional morphology, as in English car. Roots are typically distinguished from affixes in that affixes need a host, although this varies across different theories.
Traditionally, roots have belonged to the domain of morphology. More recently, though, new theories have emerged according to which words are decomposed and subject to the same principles as sentences. On these theories, roots, rather than words, are the fundamental building blocks of sentences. Contemporary syntactic theories of roots hold that they have little if any grammatical information, which raises the question of how they acquire their seemingly grammatical properties. A central issue has revolved around whether roots have a lexical category inherently or whether they are given a lexical category in some other way. Two main theories are Distributed Morphology and the exoskeletal approach to grammar. The former holds that roots merge with categorizers in the grammar: a root combined with a nominal categorizer becomes a noun, and a root combined with a verbal categorizer becomes a verb. On the latter approach, it is argued that roots are inserted into syntactic structures which carry the relevant category, meaning that the syntactic environment is created before roots are inserted into the structure. The two views make different predictions and differ in particular in their view of the status of empty categorizers.
Heidi Harley and Shigeru Miyagawa
Ditransitive predicates select for two internal arguments, and hence minimally entail the participation of three entities in the event described by the verb. Canonical ditransitive verbs include give, show, and teach; in each case, the verb requires an agent (a giver, shower, or teacher, respectively), a theme (the thing given, shown, or taught), and a goal (the recipient, viewer, or student). The property of requiring two internal arguments makes ditransitive verbs syntactically unique. Selection in generative grammar is often modeled as syntactic sisterhood, so ditransitive verbs immediately raise the question of whether a verb may have two sisters, requiring a ternary-branching structure, or whether one of the two internal arguments is not in a sisterhood relation with the verb.
Another important property of English ditransitive constructions is the two syntactic structures associated with them. In the so-called “double object construction,” or DOC, the goal and theme both are simple NPs and appear following the verb in the order V-goal-theme. In the “dative construction,” the goal is a PP rather than an NP and follows the theme in the order V-theme-to goal. Many ditransitive verbs allow both structures (e.g., give John a book/give a book to John). Some verbs are restricted to appear only in one or the other (e.g. demonstrate a technique to the class/*demonstrate the class a technique; cost John $20/*cost $20 to John). For verbs which allow both structures, there can be slightly different interpretations available for each. Crosslinguistic results reveal that the underlying structural distinctions and their interpretive correlates are pervasive, even in the face of significant surface differences between languages. The detailed analysis of these questions has led to considerable progress in generative syntax. For example, the discovery of the hierarchical relationship between the first and second arguments of a ditransitive has been key in motivating the adoption of binary branching and the vP hypothesis. Many outstanding questions remain, however, and the syntactic encoding of ditransitivity continues to inform the development of grammatical theory.
Sónia Frota and Marina Vigário
The syntax–phonology interface refers to the way syntax and phonology are interconnected. Although syntax and phonology constitute different language domains, it seems undisputed that they relate to each other in nontrivial ways. There are different theories about the syntax–phonology interface. They differ in how far each domain is seen as relevant to generalizations in the other domain, and in the types of information from each domain that are available to the other.
Some theories see the interface as unlimited in the direction and types of syntax–phonology connections, with syntax impacting on phonology and phonology impacting on syntax. Other theories constrain mutual interaction to a set of specific syntactic phenomena (i.e., discourse-related) that may be influenced by a limited set of phonological phenomena (namely, heaviness and rhythm). In most theories, there is an asymmetrical relationship: specific types of syntactic information are available to phonology, whereas syntax is phonology-free.
The role that syntax plays in phonology, as well as the types of syntactic information that are relevant to phonology, is also a matter of debate. At one extreme, Direct Reference Theories claim that phonological phenomena, such as external sandhi processes, refer directly to syntactic information. However, approaches arguing for a direct influence of syntax differ on the types of syntactic information needed to account for phonological phenomena, from syntactic heads and structural configurations (like c-command and government) to feature checking relationships and phase units. The precise syntactic information that is relevant to phonology may depend on (the particular version of) the theory of syntax assumed to account for syntax–phonology mapping. At the other extreme, Prosodic Hierarchy Theories propose that syntactic and phonological representations are fundamentally distinct and that the output of the syntax–phonology interface is prosodic structure. Under this view, phonological phenomena refer to the phonological domains defined in prosodic structure. The structure of phonological domains is built from the interaction of a limited set of syntactic information with phonological principles related to constituent size, weight, and eurhythmic effects, among others. The kind of syntactic information used in the computation of prosodic structure distinguishes between different Prosodic Hierarchy Theories: the relation-based approach makes reference to notions like head-complement, modifier-head relations, and syntactic branching, while the end-based approach focuses on edges of syntactic heads and maximal projections. Common to both approaches is the distinction between lexical and functional categories, with the latter being invisible to the syntax–phonology mapping. Besides accounting for external sandhi phenomena, prosodic structure interacts with other phonological representations, such as metrical structure and intonational structure.
As shown by the theoretical diversity, the study of the syntax–phonology interface raises many fundamental questions. A systematic comparison among proposals with reference to empirical evidence is lacking. In addition, findings from language acquisition and development and language processing constitute novel sources of evidence that need to be taken into account. The syntax–phonology interface thus remains a challenging research field in the years to come.
In the linguistic literature, the term theme has several interpretations, one of which relates to discourse analysis and two others to sentence structure. In a more general (or global) sense, one may speak about the theme or topic (or topics) of a text (or discourse), that is, analyze relations going beyond the sentence boundary and try to identify some characteristic subject(s) for the text (discourse) as a whole. This analysis belongs mostly to the domain of information retrieval and only partially takes linguistically based considerations into account. The main linguistically based usage of the term theme concerns relations within the sentence. Theme is understood to be one of the (syntactico-)semantic relations and is used as the label of one of the arguments of the verb; the whole network of these relations is called thematic relations or roles (or, in the terminology of Chomskyan generative theory, theta roles and theta grids). Alternatively, from the point of view of the communicative function of language as reflected in the information structure of the sentence, the theme (or topic) of a sentence is distinguished from the rest of it (the rheme, or focus, as the case may be), and attention is paid to the semantic consequences of the dichotomy (especially in relation to presuppositions and negation) and to its realization (morphological, syntactic, prosodic) in the surface shape of the sentence. In some approaches to morphosyntactic analysis, the term theme is also used to refer to the part of the word to which inflections are added, typically composed of the root and an added vowel.
Stergios Chatzikyriakidis and Robin Cooper
Type theory is a regime for classifying objects (including events) into categories called types. It was originally designed to overcome problems in the foundations of mathematics arising from Russell’s paradox. It has made an immense contribution to the study of logic and computer science and has also played a central role in formal semantics for natural languages since the initial work of Richard Montague building on the typed λ-calculus. More recently, type theories following in the tradition created by Per Martin-Löf have presented an important alternative to Montague’s type theory for semantic analysis. These more modern type theories yield a rich collection of types which take on a role of representing semantic content rather than simply structuring the universe in order to avoid paradoxes.
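Montague's type system can be sketched directly in a typed programming language: entities (type e), truth values (type t), and functional types built from them, with generalized quantifiers at type ⟨⟨e,t⟩,t⟩. The domain and lexicon below are invented for illustration:

```python
from typing import Callable

# Montague-style semantic types, modelled with Python type aliases.
E = str                  # type e: entities, modelled here as names
T = bool                 # type t: truth values
ET = Callable[[E], T]    # type <e,t>: one-place predicates

DOMAIN: list[E] = ["mary", "john", "bill"]

def sleeps(x: E) -> T:   # an <e,t> predicate (invented lexicon)
    return x in {"mary", "bill"}

# Generalized quantifiers have type <<e,t>,t>: they take a predicate
# and return a truth value.
def everyone(p: ET) -> T:
    return all(p(x) for x in DOMAIN)

def someone(p: ET) -> T:
    return any(p(x) for x in DOMAIN)
```

With this model, `someone(sleeps)` composes a quantifier with a predicate exactly as function application composes meanings in the typed λ-calculus.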
Throughout the 20th century, structuralist and generative linguists have argued that the study of the language system (langue, competence) must be separated from the study of language use (parole, performance), but this view of language has been called into question by usage-based linguists who have argued that the structure and organization of a speaker’s linguistic knowledge is the product of language use or performance. On this account, language is seen as a dynamic system of fluid categories and flexible constraints that are constantly restructured and reorganized under the pressure of domain-general cognitive processes that are not only involved in the use of language but also in other cognitive phenomena such as vision and (joint) attention. The general goal of usage-based linguistics is to develop a framework for the analysis of the emergence of linguistic structure and meaning.
In order to understand the dynamics of the language system, usage-based linguists study how languages evolve, both in history and language acquisition. One aspect that plays an important role in this approach is frequency of occurrence. As frequency strengthens the representation of linguistic elements in memory, it facilitates the activation and processing of words, categories, and constructions, which in turn can have long-lasting effects on the development and organization of the linguistic system. A second aspect that has been very prominent in the usage-based study of grammar concerns the relationship between lexical and structural knowledge. Since abstract representations of linguistic structure are derived from language users’ experience with concrete linguistic tokens, grammatical patterns are generally associated with particular lexical expressions.
Language is a system that maps meanings to forms, but the mapping is not always one-to-one. Variation means that one meaning corresponds to multiple forms, for example faster ~ more fast. The choice is not uniquely determined by the rules of the language but is made by the individual at the time of performance (speaking, writing). Such choices abound in human language. They are usually not simply a matter of free will but involve preferences that depend on the context, including the phonological context. Phonological variation is a situation in which the choice among expressions is phonologically conditioned, sometimes statistically, sometimes categorically. In this overview, we take a look at three studies of variable vowel harmony in three languages (Finnish, Hungarian, and Tommo So) formulated in three frameworks (Partial Order Optimality Theory, Stochastic Optimality Theory, and Maximum Entropy Grammar). For example, both Finnish and Hungarian have Backness Harmony: within a single word, vowels must be all [+back] or all [−back], with the exception of neutral vowels, which are compatible with either. Surprisingly, some stems allow both [+back] and [−back] suffixes in free variation, for example, analyysi-na ~ analyysi-nä ‘analysis-
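A Maximum Entropy Grammar models such statistical variation by assigning each candidate form a probability proportional to exp(−harmony), where harmony is the weighted sum of its constraint violations. The sketch below illustrates the arithmetic only; the constraint names, weights, and violation profiles are invented for the example and are not taken from the Finnish, Hungarian, or Tommo So studies discussed here:

```python
import math

# Hypothetical constraint weights (illustrative values only).
weights = {"AGREE-BACK": 2.0, "LOCAL-TRIGGER": 1.8}

# Hypothetical violation profiles for the two competing suffixed forms.
violations = {
    "analyysi-na": {"AGREE-BACK": 0, "LOCAL-TRIGGER": 1},
    "analyysi-nä": {"AGREE-BACK": 1, "LOCAL-TRIGGER": 0},
}

def maxent_probs(violations, weights):
    # harmony = weighted sum of violations; P(candidate) ∝ exp(-harmony)
    scores = {form: math.exp(-sum(weights[c] * v for c, v in prof.items()))
              for form, prof in violations.items()}
    z = sum(scores.values())          # normalizing constant
    return {form: s / z for form, s in scores.items()}

probs = maxent_probs(violations, weights)
```

When the weighted violation totals of two candidates are close, as here, the grammar predicts both forms with comparable probability, which is one way such frameworks capture free variation quantitatively.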
Matthew J. Gordon
William Labov (b. 1927) is an American linguist who pioneered the study of variationist sociolinguistics. Born and raised in northern New Jersey, Labov studied English and philosophy at Harvard University (BA, 1948) and worked as an industrial chemist for several years before entering graduate school in linguistics at Columbia University in 1961. He completed his PhD in 1964, under the direction of Uriel Weinreich. He worked at Columbia until 1971, when he joined the faculty of the University of Pennsylvania, where he taught until his retirement in 2014.
Labov’s influence on the field began with research he conducted in graduate school. His study of changing pronunciations on Martha’s Vineyard, the subject of his master’s thesis, introduced a method for observing sound change in progress and broke with tradition by exploring social motivations for linguistic innovations. For his PhD dissertation, Labov carried out a study of dialect patterns on the Lower East Side of New York City. Using a systematic, quantitative methodology, he demonstrated that linguistic variation is socially stratified, such that the use of pronunciation features (e.g., dropping of post-vocalic /r/) correlates with social class, ethnicity, etc. in regular patterns. Labov’s early research was greatly influential and inspired many scholars to carry out similar projects in other communities. The paradigm came to be known as variationist sociolinguistics.
Much of Labov’s scholarship seeks to advance our understanding of language change. Historical linguists traditionally study completed linguistic changes, often long after they occurred, but Labov developed a method for examining active changes through a quantitative comparison of speakers representing several generations. This approach produces a new perspective on the change process by revealing intermediate stages. Labov has brought insights from this research to bear on theoretical debates within historical linguistics and the field more broadly. His work in this area has also documented many active sound changes in American English. Among these changes are innovations underway in particular dialects, such as the vowel changes in Philadelphia, as well as broader regional patterns, such as the Northern Cities Shift heard in the Great Lakes states.
Throughout his career, social justice concerns have fueled Labov’s research. He has sought to demonstrate that the speech of stigmatized groups is as systematic and rule-governed as any other. He led a pioneering study in Harlem in the late 1960s that shed new light on African American English, demonstrating, for example, that grammatical usages like the deletion of the copula (e.g., He fast) are subject to regular constraints. Labov has served as an expert witness in court and before the U.S. Congress to share insights from his study of African American English. He has also worked to promote literacy for speakers of non-standard dialects, carrying out research on reading and developing materials for the teaching of reading to these populations.