Malka Rappaport Hovav
Words are sensitive to syntactic context. Argument realization is the study of the relation between argument-taking words, the syntactic contexts they appear in, and the interpretive properties that constrain the relation between them.
Blocking can be defined as the non-occurrence of some linguistic form, whose existence could be expected on general grounds, due to the existence of a rival form. *Oxes, for example, is blocked by oxen, *stealer by thief. Although blocking is closely associated with morphology, the competing “forms” need not be morphemes or words; they can also be syntactic units. In German, for example, the compound Rotwein ‘red wine’ blocks the phrasal unit *roter Wein (in the relevant sense), just as the phrasal unit rote Rübe ‘beetroot; lit. red beet’ blocks the compound *Rotrübe. In these examples, one crucial factor determining blocking is synonymy; speakers apparently have a deep-rooted presumption against synonyms. Whether homonymy can lead to a similar avoidance strategy is still controversial. But even if homonymy blocking exists, it is certainly much less systematic than synonymy blocking.
In all the examples mentioned above, it is a word stored in the mental lexicon that blocks a rival formation. However, besides such cases of lexical blocking, one can observe blocking among productive patterns. Dutch has three suffixes for deriving agent nouns from verbal bases, -er, -der, and -aar. Of these three suffixes, the first one is the default choice, while -der and -aar are chosen in very specific phonological environments: as Geert Booij describes in The Morphology of Dutch (2002), “the suffix -aar occurs after stems ending in a coronal sonorant consonant preceded by schwa, and -der occurs after stems ending in /r/” (p. 122). In contrast to lexical blocking, the effect of this kind of pattern blocking does not depend on words stored in the mental lexicon and their token frequency but on abstract features (in the case at hand, phonological features).
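Booij's distributional statement can be read as a small decision procedure over the stem's final segments. The sketch below is an expository toy, not part of the original discussion: the segment symbols are illustrative assumptions, with "@" standing in for schwa, and the transcriptions are rough.

```python
# Toy encoding of the Dutch agent-noun suffix distribution as described by
# Booij (2002). Stems are broad phonemic transcriptions given as lists of
# segment symbols; the inventory and transcriptions are illustrative only.

CORONAL_SONORANTS = {"n", "l", "r"}  # coronal sonorant consonants
SCHWA = "@"                          # ASCII stand-in for schwa

def agent_suffix(stem):
    """Pick -er (default), -der, or -aar from the stem's final segments."""
    if stem[-1] == "r":
        return "-der"  # after stems ending in /r/
    if len(stem) >= 2 and stem[-1] in CORONAL_SONORANTS and stem[-2] == SCHWA:
        return "-aar"  # after stems ending in schwa + coronal sonorant
    return "-er"       # the default elsewhere

# Rough transcriptions of huur- 'rent', wandel- 'walk', bak- 'bake':
print(agent_suffix(["h", "y", "r"]))                 # -der (huurder)
print(agent_suffix(["w", "A", "n", "d", "@", "l"]))  # -aar (wandelaar)
print(agent_suffix(["b", "A", "k"]))                 # -er  (bakker)
```

The point of the sketch is that pattern blocking here is decided entirely by abstract phonological features of the stem, not by any stored rival word.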
Blocking was first recognized by the Indian grammarian Pāṇini in the 5th or 4th century BCE.
Bracketing paradoxes—constructions whose morphosyntactic and morpho-phonological structures appear to be irreconcilably at odds (e.g., unhappier)—are unanimously taken to point to truths about the derivational system that we have not yet grasped. Consider that the prefix un- must be structurally separate in some way from happier both for its own reasons (its [n] surprisingly does not assimilate in Place to a following consonant (e.g., u[n]popular)), and for reasons external to the prefix (the suffix -er must be insensitive to the presence of un-, as the comparative cannot attach to bases of three syllables or longer (e.g., *intelligenter)). But, un- must simultaneously be present in the derivation before -er is merged, so that unhappier can have the proper semantic reading (‘more unhappy’, and not ‘not happier’). Bracketing paradoxes emerged as a problem for generative accounts of both morphosyntax and morphophonology only in the 1970s. With the rise of restrictions on affix behavior and of formal technology for describing and representing it (e.g., the Affix-Ordering Generalization, Lexical Phonology and Morphology, the Prosodic Hierarchy), morphosyntacticians and phonologists were confronted with this type of inconsistent derivation in many unrelated languages.
Andrej L. Malchukov
Morphological case is conventionally defined as a system of marking a dependent nominal for the type of relationship it bears to its head. While most linguists would agree with this definition, in practice it is often a matter of controversy whether a certain marker X counts as case in language L, or how many case values language L features. First, the distinction between morphological cases and case particles/adpositions is fuzzy in a cross-linguistic perspective. Second, the distinctions between cases can be obscured by patterns of case syncretism, leading to different analyses of the underlying system. On the functional side, it is important to distinguish between syntactic (structural), semantic, and “pragmatic” cases, yet these distinctions are not clear-cut either, as syntactic cases historically arise from the latter two sources. Moreover, case paradigms of individual languages usually show a conflation between syntactic, semantic, and pragmatic cases (see the phenomenon of “focal ergativity,” where ergative case is used when the A argument is in focus). The composition of case paradigms can be shown to follow a certain typological pattern, which is captured by case hierarchy, as proposed by Greenberg and Blake, among others. Case hierarchy constrains the way case systems evolve (or are reduced) across languages and derives from relative markedness and, ultimately, from frequencies of individual cases. The (one-dimensional) case hierarchy is, however, incapable of capturing all recurrent polysemies of individual case markers; rather, such polysemies can be represented through a more complex two-dimensional hierarchy (semantic map), which can also be given a diachronic interpretation.
Jessica Coon and Clint Parker
The phenomenon of case has been studied widely at both the descriptive and theoretical levels. Typological work on morphological case systems has provided a picture of the variability of case cross-linguistically. In particular, languages may differ with respect to whether or not arguments are marked with overt morphological case, the inventory of cases with which they may be marked, and the alignment of case marking (e.g., nominative-accusative vs. ergative-absolutive). In the theoretical realm, not only has morphological case been argued to play a role in multiple syntactic phenomena, but current generative work also debates the role of abstract case (i.e., Case) in the grammar: abstract case features have been proposed to underlie morphological case, and to license nominals in the derivation.
The phenomenon of case has been argued to play a role in at least three areas of the syntax reviewed here: (a) agreement, (b) A-movement, and (c) A’-movement. Morphological case has been shown to determine a nominal argument’s eligibility to participate in verbal agreement, and recent work has argued that languages vary as to whether movement to subject position is case-sensitive. As for case-sensitive A’-movement, recent literature on ergative extraction restrictions debates whether this phenomenon should be seen as another instance of “case discrimination” or whether the pattern arises from other properties of ergative languages. Finally, other works discussed here have examined agreement and A’-extraction patterns in languages with no visible case morphology. The presence of patterns and typological gaps—both in languages with overt morphological case and in those without it—lends support to the relevance of abstract case in the syntax.
Clitics can be defined as prosodically defective function words. They can belong to a number of syntactic categories, such as articles, pronouns, prepositions, complementizers, negative adverbs, or auxiliaries. They do not generally belong to open classes, like verbs, nouns, or adjectives. Their prosodically defective character is most often manifested by the absence of stress, which in turn correlates with vowel reduction in those languages that have it independently; sometimes the clitic can be just a consonant or a consonant cluster, with no vowel. This same prosodically defective character forces them to attach either to the word that follows them (proclisis) or to the word that precedes them (enclisis); in some cases they even appear inside a word (mesoclisis or endoclisis). The word to which a clitic attaches is called the host. In some languages (like some dialects of Italian or Catalan) enclitics can surface as stressed, but the presence of stress can be argued to be the result of assignment of stress to the host-clitic complex, not to the clitic itself. One consequence of clitics being prosodically defective is that they cannot be the sole element of an utterance, for instance as an answer to some question; they need to always appear with a host.
A useful distinction is that between simple clitics and special clitics. Simple clitics often have a nonclitic variant and appear in the expected syntactic position for nonclitics of their syntactic category. Much more attention has been paid in the literature to special clitics. Special clitics appear in a designated position within the clause or within the noun phrase (or determiner phrase). In several languages certain clitics must appear in second position, within the clause, as in most South Slavic languages, or within the noun phrase, as in Kwakw'ala. The pronominal clitics of Romance languages or Greek must have the verb as a host and appear in a position different from the full noun phrase. A much debated question is whether the position of special clitics is the result of syntactic movement, or whether other factors, morphological or phonological, intervene as well or are the sole motivation for their position. Clitics can also cluster, with some languages allowing only sequences of two clitics, and other languages allowing longer sequences. Here one relevant question is what determines the order of the clitics, with the main avenues of analysis being approaches based on syntactic movement, approaches based on the types of morphosyntactic features each clitic has, and approaches based on templates. An additional issue concerning clitic clusters is the incompatibility between specific clitics when combined and the changes that this incompatibility can provoke in the form of one or more of the clitics. Combinations of identical or nearly identical clitics are often disallowed, and the constraint known as the Person-Case Constraint (PCC) disallows combinations of clitics with a first or second person accusative clitic (a direct object, DO, clitic) and a third person (and sometimes also first or second person) dative clitic (an indirect object, IO, clitic). 
In all these cases either one of the clitics surfaces with the form of another clitic or one of the clitics does not surface; sometimes there is no possible output. Here again both syntactic and morphological approaches have been proposed.
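The Person-Case Constraint as stated above can be rendered, very roughly, as a filter over the person features of a DO+IO clitic cluster. This is an expository sketch only, not an analysis from the literature surveyed here; the function name and the `extended` toggle (for the variant that also bans 1st/2nd-person IO clitics) are invented conveniences.

```python
# Illustrative filter for the Person-Case Constraint (PCC) over a cluster
# of an accusative (DO) clitic and a dative (IO) clitic, keyed to person
# features (1, 2, or 3). extended=True models the stricter variant in
# which a 1st/2nd-person DO is banned with an IO clitic of any person.

def violates_pcc(do_person, io_person, extended=False):
    if do_person not in (1, 2):      # a 3rd-person DO clitic is always fine
        return False
    if extended:                     # extended variant: ban any IO person
        return True
    return io_person == 3            # base variant: ban only 3rd-person IO

print(violates_pcc(1, 3))                  # True:  *1DO + 3IO
print(violates_pcc(3, 1))                  # False: 3rd-person DO allowed
print(violates_pcc(2, 1, extended=True))   # True under the stricter variant
```

A filter of this kind states only which clusters are ill-formed; as the passage above notes, languages then differ in the repair: one clitic surfaces in another clitic's form, one fails to surface, or no output is possible at all.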
Dany Amiot and Edwige Dugas
Word-formation encompasses a wide range of processes, among which we find derivation and compounding, two processes yielding productive patterns which enable the speaker to understand and to coin new lexemes. This article draws a distinction between two types of constituents (suffixes, combining forms, splinters, affixoids, etc.) on the one hand and word-formation processes (derivation, compounding, blending, etc.) on the other hand, but also shows that a given constituent can appear in different word-formation processes. First, it describes prototypical derivation and compounding in terms of word-formation processes and of their constituents: Prototypical derivation involves a base lexeme, that is, a free lexical element belonging to a major part-of-speech category (noun, verb, or adjective) and, very often, an affix (e.g., Fr. laverV ‘to wash’ > lavableA ‘washable’), while prototypical compounding involves two lexemes (e.g., Eng. rainN + fallV > rainfallN ).
The description of these prototypical phenomena provides a starting point for the description of other types of constituents and word-formation processes. There are indeed at least two phenomena which do not meet this description, namely, combining forms (henceforth CFs) and affixoids, and which therefore pose an interesting challenge to linguistic description, be it synchronic or diachronic. The distinction between combining forms and affixoids is not easy to establish and the definitions are often confusing, but productivity is a good criterion to distinguish them from each other, even if it does not answer all the questions raised by bound forms.
In the literature, the notions of CF and affixoid are not unanimously agreed upon, especially that of affixoid. Yet this article stresses that they enable us to highlight, and even conceptualize, the gradual nature of linguistic phenomena, whether from a synchronic or a diachronic point of view.
Jane Chandlee and Jeffrey Heinz
Computational phonology studies the nature of the computations necessary and sufficient for characterizing phonological knowledge. As a field it is informed by the theories of computation and phonology.
The computational nature of phonological knowledge is important because at a fundamental level it is about the psychological nature of memory as it pertains to phonological knowledge. Different types of phonological knowledge can be characterized as computational problems, and the solutions to these problems reveal their computational nature. In contrast to syntactic knowledge, there is clear evidence that phonological knowledge is computationally bounded by the so-called regular classes of sets and relations. These classes have multiple mathematical characterizations in terms of logic, automata, and algebra with significant implications for the nature of memory. In fact, there is evidence that phonological knowledge is bounded by particular subregular classes, with more restrictive logical, automata-theoretic, and algebraic characterizations, and thus by weaker models of memory.
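One concrete sense in which phonotactic knowledge can be subregular: a Strictly 2-Local grammar decides well-formedness by inspecting adjacent pairs of symbols only, so recognition needs a memory of just one preceding symbol. The sketch below is an expository toy, not drawn from the article; the alphabet and the forbidden bigrams are invented.

```python
# A Strictly 2-Local grammar: a word is well-formed iff none of its
# adjacent symbol pairs (including word-boundary pairs) is forbidden.
# Recognition is a single left-to-right scan remembering one symbol,
# illustrating the weak memory model such subregular classes require.

def strictly_2_local(forbidden_bigrams, boundary="#"):
    def accepts(word):
        padded = boundary + word + boundary      # mark word edges
        return all(padded[i:i + 2] not in forbidden_bigrams
                   for i in range(len(padded) - 1))
    return accepts

# Toy constraint: ban the cluster "nb" and word-final "b".
g = strictly_2_local({"nb", "b#"})
print(g("banan"))  # True:  no forbidden pair occurs
print(g("banb"))   # False: contains "nb" (and word-final "b#")
```

The bounded window is the point: no matter how long the word, the grammar never needs to remember more than a fixed amount of material, which is a far weaker demand on memory than general regular (let alone context-free) recognition.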
Connectionism is an important theoretical framework for the study of human cognition and behavior. Also known as Parallel Distributed Processing (PDP) or Artificial Neural Networks (ANN), connectionism advocates that learning, representation, and processing of information in mind are parallel, distributed, and interactive in nature. It argues for the emergence of human cognition as the outcome of large networks of interactive processing units operating simultaneously. Inspired by findings from neural science and artificial intelligence, connectionism is a powerful computational tool, and it has had a profound impact on many areas of research, including linguistics. Since the beginning of connectionism, many connectionist models have been developed to account for a wide range of important linguistic phenomena observed in monolingual research, such as speech perception, speech production, semantic representation, and early lexical development in children. Recently, the application of connectionism to bilingual research has also gathered momentum. Connectionist models are often precise in the specification of modeling parameters and flexible in the manipulation of relevant variables in the model to address relevant theoretical questions; they can therefore provide significant advantages in testing mechanisms underlying language processes.
Construction Morphology is a theory of word structure in which the complex words of a language are analyzed as constructions, that is, systematic pairings of form and meaning. These pairings are analyzed within a Tripartite Parallel Architecture conception of grammar. This presupposes a word-based approach to the analysis of morphological structure and a strong dependence on paradigmatic relations between words. The lexicon contains both words and the constructional schemas they are instantiations of. Words and schemas are organized in a hierarchical network, with intermediate layers of subschemas. These schemas have a motivating function with respect to existing complex words and specify how new complex words can be formed.
The consequence of this view of morphology is that there is no sharp boundary between lexicon and grammar. In addition, the use of morphological patterns may also depend on specific syntactic constructions (construction-dependent morphology).
This theory of lexical relatedness also provides insight into language change such as the use of obsolete case markers as markers of specific constructions, the change of words into affixes, and the debonding of word constituents into independent words. Studies of language acquisition and word processing confirm this view of the lexicon and the nature of lexical knowledge.
Construction Morphology is also well equipped for dealing with inflection and the relationships between the cells of inflectional paradigms, because it can express how morphological schemas are related paradigmatically.
William F. Hanks
Deictic expressions, like English ‘this, that, here, and there’, occur in all known human languages. They are typically used to individuate objects in the immediate context in which they are uttered, by pointing at them so as to direct attention to them. The object, or demonstratum, is singled out as a focus, and a successful act of deictic reference is one that results in the Speaker (Spr) and Addressee (Adr) attending to the same referential object. Thus,
(1) A: Oh, there’s that guy again (pointing)
    B: Oh yeah, now I see him (fixing gaze on the guy)

(2) A: I’ll have that one over there (pointing to a dessert on a tray)
    B: This? (touching pastry with tongs)
    A: Yeah, that looks great
    B: Here ya’ go (handing pastry to customer)
In an exchange like (1), A’s utterance spotlights the individual guy, directing B’s attention to him, and B’s response (both verbal and ocular) displays that he has recognized him. In (2) A’s utterance individuates one pastry among several, B’s response makes sure he’s attending to the right one, A reconfirms and B completes by presenting the pastry to him. If we compare the two examples, it is clear that the deictics in these exchanges can pick out or present individuals without describing them. In a similar way, “I, you, he/she, we, now, (back) then,” and their analogues are all used to pick out individuals (persons, objects, or time frames), apparently without describing them. As a corollary of this semantic paucity, individual deictics vary extremely widely in the kinds of object they may properly denote: ‘here’ can denote anything from the tip of your nose to planet Earth, and ‘this’ can denote anything from a pastry to an upcoming day (this Tuesday). Under the same circumstance, ‘this’ and ‘that’ can refer appropriately to the same object, depending upon who is speaking, as in (2). How can forms that are so abstract and variable over contexts be so specific and rigid in a given context? On what parameters do deictics and deictic systems in human languages vary, and how do they relate to grammar and semantics more generally?
Denominal verbs are verbs formed from nouns by means of various word-formation processes such as derivation, conversion, or less common mechanisms like reduplication, change of pitch, or root and pattern. Because their well-formedness is determined by morphosyntactic, phonological, and semantic constraints, they have been analyzed from a variety of lexicalist and non-lexicalist perspectives, including Optimality Theory, Lexical Semantics, Cognitive Grammar, Onomasiology, and Neo-Construction Grammar. Independently of their structural shape, denominal verbs have in common that they denote events in which the referents of their base nouns (e.g., computer in the case of computerize) participate in a non-arbitrary way. While traditional labels like ‘ornative’, ‘privative’, ‘locative’, ‘instrumental’ and the like allow for a preliminary classification of denominal verbs, a more formal description has to account for at least three basic aspects, namely (1) competition among functionally similar word-formation patterns, (2) the polysemy of affixes, which precludes a neat one-to-one relation between derivatives displaying a particular affix and a particular semantic class, and (3) the relevance of generic knowledge and contextual information for the interpretation of (innovative) denominal verbs.
Deponency refers to mismatches between morphological form and syntactic function (or “meaning”), such that a given morphological exponent appears in a syntactic environment that is unexpected from the point of view of its canonical (“normal” or “expected”) function. This phenomenon takes its name from Latin, where certain morphologically “passive” verbs appear in syntactically active contexts (for example, hort-or ‘I encourage’, with the same ending as passive am-or ‘I am loved’), but it occurs in other languages as well. Moreover, the term has been extended to include mismatches in other domains, such as number mismatches in nominal morphology or tense mismatches on verbs (e.g., in the Germanic preterite-presents). Theoretical treatments of deponency vary from seeking a unified (and uniform) account of all observed mismatches to arguing that the wide range of cross-linguistically attested form-function mismatches does not form a natural class and does not require explanatory devices specific to the domain of morphology. It has also been argued that some apparent mismatches are “spurious” and have been misanalyzed.
Nevertheless, it is generally agreed across frameworks that however such “morphological mismatches” are to be analyzed, deponency has potential ramifications for theories of the syntax-morphology interface and (depending on one’s theoretical approach) the structure of the lexicon.
Displacement is a ubiquitous phenomenon in natural languages. Grammarians often speak of displacement in cases where the rules for the canonical word order of a language lead to the expectation of finding a word or phrase in a particular position in the sentence whereas it surfaces instead in a different position and the canonical position remains empty: ‘Which book did you buy?’ is an example of displacement because the noun phrase ‘which book’, which acts as the grammatical object in the question, does not occur in the canonical object position, which in English is after the verb. Instead, it surfaces at the beginning of the sentence and the object position remains empty. Displacement is often used as a diagnostic for constituent structure because it affects only (but not all) constituents. In the clear cases, displaced constituents show properties associated with two distinct linear and hierarchical positions. Typically, one of these two positions c-commands the other and the displaced element is pronounced in the c-commanding position. Displacement also shows strong interactions with the path between the empty canonical position and the position where the element is pronounced: one often encounters morphological changes along this path and evidence for structural placement of the displaced constituent, as well as constraints on displacement induced by the path.
The exact scope of displacement as an analytically unified phenomenon varies from theory to theory. If more than one type of syntactic displacement is recognized, the question of the interaction between movement types arises. Displacement phenomena are extensively studied by syntacticians. Their enduring interest derives from the fact that the complex interactions between displacement and other aspects of syntax offer a powerful probe into the inner workings and architecture of the human syntactic faculty.
Jonathan David Bobaljik
Distributed Morphology (DM) is a framework in theoretical morphology, characterized by two core tenets: (i) that the internal hierarchical structure of words is, in the first instance, syntactic (complex words are derived syntactically), and (ii) that the syntax operates on abstract morphemes, defined in terms of morphosyntactic features, and that the spell-out (realization, exponence) of these abstract morphemes occurs after the syntax. Distributing the functions of the classical morpheme in this way allows for analysis of mismatches between the minimal units of grammatical combination and the minimal units of sound. Much work within the framework is nevertheless guided by seeking to understand restrictions on such mismatches, balancing the need for the detailed description of complex morphological data in individual languages against an attempt to explain broad patterns in terms of restrictions imposed by grammatical principles.
This article revisits Grimshaw's (1990) tripartition of nominalization, which introduced an important correlation between particular types of nominalization and the readings associated with these nominal forms, Event and Referential. The article discusses criteria that may be used to distinguish between the two readings and the limitations of these criteria. It further offers a selective discussion of how different approaches to nominalization implement Event and Referential readings.
A fundamental difference in theoretical models of morphology and, particularly, of the syntax–morphology interface is that between endoskeletal and exoskeletal approaches. In the former, more traditional, endoskeletal approaches, open-class lexical items like cat or sing are held to be inherently endowed with a series of formal features that determine the properties of the linguistic expressions in which they appear. In the latter, more recent, exoskeletal approaches, it is rather the morphosyntactic configurations, independently produced by the combination of abstract functional elements, that determine those properties. Lexical items, in this latter approach, are part of the structure but, crucially, do not determine it.
Conceptually, although a correlation is usually made between endoskeletalism and lexicalism/projectionism, on the one hand, and between exoskeletalism and (neo)constructionism, on the other, things are actually more complicated, and some frameworks exist that seem to challenge those correlations, in particular when the difference between word and morpheme is taken into account.
Empirically, the difference between these two approaches to morphology and the morphology-syntax interface comes to light when one examines how each one treats a diversity of word-related phenomena: morphosyntactic category and category shift in derivational processes, inflectional class, nominal properties like mass or count, and verbal properties like agentivity and (a)telicity.
John E. Joseph
Ferdinand de Saussure (1857–1913), the founding figure of modern linguistics, made his mark on the field with a book he published a month after his 21st birthday, in which he proposed a radical rethinking of the original system of vowels in Proto-Indo-European. A year later, he submitted his doctoral thesis on a morpho-syntactic topic, the genitive absolute in Sanskrit, to the University of Leipzig. He went to Paris intending to do a second, French doctorate, but instead he was given responsibility for courses on Gothic and Old High German at the École Pratique des Hautes Études, and for managing the publications of the Société de Linguistique de Paris. He abandoned more than one large publication project of his own during the decade he spent in Paris. In 1891 he returned to his native Geneva, where the University created a chair in Sanskrit and the history and comparison of languages for him. He produced some significant work on Lithuanian during this period, connected to his early book on the Indo-European vowel system, and yielding Saussure’s Law, concerning the placement of stress in Lithuanian. He undertook writing projects about the general nature of language, but again abandoned them. In 1907, 1908–1909, and 1910–1911, he gave three courses in general linguistics at the University of Geneva, in which he developed an approach to languages as systems of signs, each sign consisting of a signifier (sound pattern) and a signified (concept), both of them mental rather than physical in nature, and conjoined arbitrarily and inseparably. The socially shared language system, or langue, makes possible the production and comprehension of parole, utterances, by individual speakers and hearers. Each signifier and signified is a value generated by its difference from all the other signifiers or signifieds with which it coexists on an associative (or paradigmatic) axis, and affected as well by its relations on the syntagmatic axis.
Shortly after Saussure’s death at 55, two of his colleagues, Bally and Sechehaye, gathered together students’ notes from the three courses, as well as manuscript notes by Saussure, and from them constructed the Cours de linguistique générale, published in 1916. Over the course of the next several decades, this book became the basis for the structuralist approach, initially within linguistics, and later adapted to other fields. Saussure left behind a large quantity of manuscript material that has gradually been published over the last few decades, and continues to be published, shedding new light on his thought.
Olaf Koeneman and Hedde Zeijlstra
The relation between the morphological form of a pronoun and its semantic function is not always transparent, and syncretism abounds in natural languages. In a language like English, for instance, three types of indefinite pronouns can be identified, often grouped in series: the some-series, the any-series, and the no-series. However, this does not mean that there are also three semantic functions for indefinite pronouns. Haspelmath (1997) in fact distinguishes nine functions. Closer inspection shows that these nine functions can be reduced to four main functions of indefinites, each with a number of subfunctions: (i) Negative Polarity Items; (ii) Free-Choice Items; (iii) negative indefinites; and (iv) positive or existential indefinites. These functions and subfunctions can be morphologically realized differently across languages, but need not be. In English, functions (i) and (ii), unlike (iii) and (iv), may morphologically group together, both expressed by the any-series. Where morphological correspondences between the kinds of functions that indefinites may express call for a classification, such classifications turn out to be semantically well motivated too. Similar observations can be made for definite pronouns, where it turns out that various functions, such as the first person inclusive/exclusive distinction or dual number, are sometimes, but not always, morphologically distinguished, showing that these may be subfunctions of higher, more general functions. The question of how to demarcate the landscape of indefinite and definite pronouns thus does not depend on semantic differences alone: Morphological differences are at least as telling. The interplay between morphological and semantic properties can provide serious answers to the question of how to define indefinites and the various forms and functions that these may take on.
In the Principles and Parameters framework of Generative Grammar, the various positions occupied by the verb have been identified as functional heads hosting inflectional material (affixes or features), which may or may not attract the verb. This gave rise to a hypothesis, the Rich Agreement Hypothesis (RAH), according to which the verb has to move to the relevant functional head when the corresponding inflectional paradigm counts as “rich.”
The RAH is motivated by synchronic and diachronic variation among closely related languages (mostly of the Germanic family) suggesting a correspondence between verb movement and rich agreement. Research into this correspondence was initially marred by the absence of a fundamental definition of “richness” and by the observation of counterexamples, both synchronically (dialects not conforming to the pattern) and diachronically (a significant time gap between the erosion of verbal inflection and the disappearance of verb movement). Also, the research was based on a limited group of related languages and dialects. This led to the conclusion that there was at best a weak correlation between verb movement and richness of morphology.
Recently, the RAH has been revived in its strong form, proposing a fundamental definition of richness and testing the RAH against a typologically more diverse sample of the languages of the world. While this represents significant progress, several problems remain, with certain (current and past) varieties of North Germanic not conforming to the expected pattern, and the typological survey yielding mixed or unclear results. A further problem is that other Germanic languages (Dutch, German, Frisian) vary as to the richness of their morphology, but show identical verb placement patterns.
This state of affairs, especially in light of recent minimalist proposals relocating both inflectional morphology and verb movement outside syntax proper (to a component in the model of grammar interfacing between narrow syntax and phonetic realization), suggests that we need a more fundamental understanding of the relation between morphology and syntax before any relation between head movement and morphological strength can be reliably ascertained.