Mark de Vries
A relative clause is a clausal modifier that relates to a constituent of the sentence, typically a noun phrase. This constituent is the antecedent or “head” of the relative construction. What makes the configuration special is that the subordinate clause contains a variable that is bound by the head. For instance, in the English sentence Peter recited a poem that Anne liked, the object of the embedded verb liked is relativized. In this example, the relative clause is restrictive: the possible reference of a poem is narrowed to poems that Anne liked. However, it is also possible to construct a relative clause non-restrictively. If the example is changed to Peter recited this poem by Keats, which Anne likes, the relative clause provides additional information about the antecedent, and the internal variable, here spelled out by the relative pronoun which, is necessarily coreferential with the antecedent.
Almost all languages make use of (restrictive) relative constructions in one way or another. Various strategies of building relative clauses have been distinguished, which correlate at least partially with particular properties of languages, including word order patterns and the availability of certain pronouns. Relative clauses can follow or precede the head, or even include the head. Some languages make use of relative pronouns, while others use resumptive pronouns, or simply leave the relativized argument unpronounced in the subordinate clause. Furthermore, there is cross-linguistic variation in the range of syntactic functions that can be relativized. Notably, more than one type of relative clause can be present in one language. Special types of relative constructions include free relatives (with an implied pronominal antecedent), cleft constructions, and correlatives.
There is an extensive literature on the structural analysis of relative constructions. Questions that are debated include: How can different subtypes be distinguished? How does the internal variable relate to the antecedent? How can reconstruction and anti-reconstruction effects be explained? At what structural level is the relative clause attached to the antecedent or the matrix clause?
Veneeta Dayal and Deepak Alok
Natural language allows questioning into embedded clauses. One strategy for doing so involves structures like the following: [CP-1 whi [TP DP V [CP-2 … ti …]]], where a wh-phrase that thematically belongs to the embedded clause appears in the matrix scope position. A possible answer to such a question must specify values for the fronted wh-phrase. This is the extraction strategy seen in languages like English. An alternative strategy involves a structure in which there is a distinct wh-phrase in the matrix clause. It is manifested in two types of structures. One is a close analog of extraction, but for the extra wh-phrase: [CP-1 whi [TP DP V [CP-2 whj [TP…tj…]]]]. The other simply juxtaposes two questions, rather than syntactically subordinating the second one: [CP-3 [CP-1 whi [TP…]] [CP-2 whj [TP…]]]. In both versions of the second strategy, the wh-phrase in CP-1 is invariant, typically corresponding to the wh-phrase used to question propositional arguments. There is no restriction on the type or number of wh-phrases in CP-2. Possible answers must specify values for all the wh-phrases in CP-2. This strategy is variously known as scope marking, partial wh-movement, or expletive wh-questions. Both strategies can occur in the same language. German, for example, instantiates all three possibilities: extraction, subordinated, as well as sequential scope marking. The scope marking strategy is also manifested in in-situ languages. Scope marking has been studied for some 30 years, and much is now known about its syntactic and semantic properties. Its pragmatic properties, however, are relatively under-studied. The acquisition of scope marking, in relation to extraction, is another area of ongoing research. One of the reasons why scope marking has intrigued linguists is that it seems to defy central tenets about the nature of wh-scope taking.
For example, it presents an apparent mismatch between the number of wh expressions in the question and the number of expressions whose values are specified in the answer. It poses a challenge for our understanding of how syntactic structure feeds semantic interpretation and how alternative strategies with similar functions relate to each other.
Scrambling is one of the most widely discussed and prominent factors affecting word order variation in Korean. Scrambling in Korean exhibits various syntactic and semantic properties that cannot be subsumed under the standard A/A'-movement. Clause-external scrambling as well as clause-internal scrambling in Korean show mixed A/A'-effects in a range of tests such as anaphor binding, weak crossover, Condition C, negative polarity item licensing, wh-licensing, and scopal interpretation. VP-internal scrambling, by contrast, is known to lack reconstruction effects, conforming to the claim that short scrambling is A-movement. Clausal scrambling, on the other hand, shows total reconstruction effects, unlike phrasal scrambling. The diverse properties of Korean scrambling have received extensive attention in the literature. Some studies argue that scrambling is a type of feature-driven A-movement with special reconstruction effects. Others argue that scrambling can be A-movement or A'-movement depending on the landing site. Yet others claim that scrambling is not standard A/A'-movement, but must be treated as cost-free movement with optional reconstruction effects. Each approach, however, faces non-trivial empirical and theoretical challenges, and further study is needed to understand the complex nature of scrambling. As the theory has developed within the Minimalist Program, a variety of proposals have also been advanced to capture properties of scrambling without resorting to A/A'-distinctions.
Scrambling in Korean applies optionally but not randomly. It may be blocked due to various factors in syntax and its interfaces in the grammar. Within syntax proper, scrambling obeys general constraints on movement (e.g., island conditions, the left branch condition, the coordinate structure condition, the proper binding condition, and the ban on string-vacuous movement). Various semantic and pragmatic factors (e.g., specificity, presuppositionality, topic, focus) also play a crucial role in the acceptability of sentences with scrambling. Moreover, current studies show that certain instances of scrambling are filtered out at the interface due to cyclic Spell-out and linearization, which strengthens the claim that scrambling is not a free option. Data from Korean pose important challenges to base-generation approaches to scrambling, and lend further credence to the view that scrambling is an instance of movement. The exact nature of scrambling in Korean—whether it is cost-free or feature-driven—must be further investigated in future research, however. The research on Korean scrambling points toward a general theory that covers obligatory A/A'-movement as well as optional displacement with mixed semantic effects in languages with free word order.
Philippe Schlenker, Emmanuel Chemla, and Klaus Zuberbühler
Rich data gathered in experimental primatology in the last 40 years are beginning to benefit from analytical methods used in contemporary linguistics, especially in the area of semantics and pragmatics. These methods have started to clarify five questions: (i) What morphology and syntax, if any, do monkey calls have? (ii) What is the ‘lexical meaning’ of individual calls? (iii) How are the meanings of individual calls combined? (iv) How do calls or call sequences compete with each other when several are appropriate in a given situation? (v) How did the form and meaning of calls evolve? Four case studies from this emerging field of ‘primate linguistics’ provide initial answers, pertaining to Old World monkeys (putty-nosed monkeys, Campbell’s monkeys, and colobus monkeys) and New World monkeys (black-fronted Titi monkeys). The morphology mostly involves simple calls, but in at least one case (Campbell’s -oo) one finds a root–suffix structure, possibly with a compositional semantics. The syntax is in all clear cases simple and finite-state. With respect to meaning, nearly all cases of call concatenation can be analyzed as being semantically conjunctive. But a key question concerns the division of labor between semantics, pragmatics, and the environmental context (‘world’ knowledge and context change). An apparent case of dialectal variation in the semantics (Campbell’s krak) can arguably be analyzed away if one posits sufficiently powerful mechanisms of competition among calls, akin to scalar implicatures. An apparent case of noncompositionality (putty-nosed pyow–hack sequences) can be analyzed away if one further posits a pragmatic principle of ‘urgency’. Finally, rich Titi sequences in which two calls are re-arranged in complex ways so as to reflect information about both predator identity and location are argued not to involve a complex syntax/semantics interface, but rather a fine-grained interaction between simple call meanings and the environmental context. 
With respect to call evolution, the remarkable preservation of call form and function over millions of years should make it possible to lay the groundwork for an evolutionary monkey linguistics, illustrated with cercopithecine booms.
Beata Moskal and Peter W. Smith
Headedness is a pervasive phenomenon throughout different components of the grammar, which fundamentally encodes an asymmetry between two or more items, such that one is in some sense more important than the other(s). In phonology, for instance, the nucleus is the head of the syllable, and not the onset or the coda, whereas in syntax, the verb is the head of a verb phrase, rather than any complements or specifiers that it combines with. It makes sense, then, to ask whether the notion of headedness applies to the morphology as well; specifically, do words—complex or simplex—have heads that determine the properties of the word as a whole? Intuitively it makes sense that words have heads: a noun that is derived from an adjective, like redness, can function only as a noun, and the presence of red in the structure does not confer on the whole form the ability to function as an adjective as well.
However, this question is a complex one for a variety of reasons. While it seems clear for some phenomena, such as category determination, that words have heads, there is a lot of evidence to suggest that the properties of complex words are not all derived from one morpheme, but rather that the features are gathered from potentially numerous morphemes within the same word. Furthermore, properties that characterize heads as opposed to dependents, particularly those based on syntactic behavior, do not unambiguously pick out a single element: the tests applied to morphology at times pick out affixes, and at times pick out bases, as the head of the whole word.
Ljuba N. Veselinova
The term suppletion is used to indicate the unpredictable encoding of otherwise regular semantic or grammatical relations. Standard examples in English include the present and past tense of the verb go, cf. go vs. went, or the comparative and superlative forms of adjectives such as good or bad, cf. good vs. better vs. best, or bad vs. worse vs. worst.
The complementary distribution of different forms to express a paradigmatic contrast was noticed already in early grammatical traditions. However, the idea that a special form would supply missing forms in a paradigm was first introduced by the neogrammarian Hermann Osthoff, in his work of 1899. The concept of suppletion was consolidated in modern linguistics by Leonard Bloomfield, in 1926. Since then, the notion has been applied to both affixes and stems. In addition to the application of the concept to linguistic units of varying morpho-syntactic status, such as affixes, or stems of different lexical classes such as, for instance, verbs, adjectives, or nouns, the student should also be prepared to encounter frequent discrepancies between uses of the concept in the theoretical literature and its application in more descriptively oriented work. There are models in which the term suppletion is restricted to exceptions to inflectional patterns only; consequently, exceptions to derivational patterns are not accepted as instantiations of the phenomenon. On such a view, the comparative degrees of adjectives count, at best, as less prototypical examples of suppletion.
Treatments of the phenomenon vary widely, to the point of being complete opposites. A strong tendency exists to regard suppletion as an anomaly, a historical artifact, and generally of little theoretical interest. A countertendency is to view the phenomenon as challenging, but nonetheless very important for adequate theory formation. Finally, there are scholars who view suppletion as a functionally motivated result of language change.
For a long time, the database on suppletion, similarly to many other phenomena, was restricted to Indo-European languages. With the solidifying of wider cross-linguistic research and linguistic typology since the 1990s, the database on suppletion has been substantially extended. Large-scale cross-linguistic studies have shown that the phenomenon is observed in many different languages around the globe. In addition, it appears as a systematic cross-linguistic phenomenon in that it can be correlated with well-defined language areas, language families, specific lexemic groups, and specific slots in paradigms. The latter can be shown to follow general markedness universals. Finally, the lexemes that show suppletion tend to have special functions in both lexicon and grammar.
Ur Shlonsky and Giuliano Bocci
Syntactic cartography emerged in the 1990s as a result of the growing consensus in the field about the central role played by functional elements and by morphosyntactic features in syntax. The declared aim of this research direction is to draw maps of the structures of syntactic constituents, characterize their functional structure, and study the array and hierarchy of syntactically relevant features. Syntactic cartography has made significant empirical discoveries, and its methodology has been very influential in research in comparative syntax and morphosyntax. A central theme in current cartographic research concerns the source of the emerging featural/structural hierarchies. The idea that the functional hierarchy is not a primitive of Universal Grammar but derives from other principles does not undermine the scientific relevance of the study of the cartographic structures. On the contrary, the cartographic research aims at providing empirical evidence that may help answer these questions about the source of the hierarchy and shed light on how the computational principles and requirements of the interface with sound and meaning interact.
Syntactic features are formal properties of syntactic objects which determine how they behave with respect to syntactic constraints and operations (such as selection, licensing, agreement, and movement). Syntactic features can be contrasted with properties which are purely phonological, morphological, or semantic, but many features are relevant both to syntax and morphology, or to syntax and semantics, or to all three components.
The formal theory of syntactic features builds on the theory of phonological features, and normally takes morphosyntactic features (those expressed in morphology) to be the central case, with other, possibly more abstract features being modeled on the morphosyntactic ones.
Many aspects of the formal nature of syntactic features are currently unresolved. Some traditions (such as HPSG) make use of rich feature structures as an analytic tool, while others (such as Minimalism) pursue simplicity in feature structures in the interest of descriptive restrictiveness. Nevertheless, features are essential to all explicit analyses.
Heidi Harley and Shigeru Miyagawa
Ditransitive predicates select for two internal arguments, and hence minimally entail the participation of three entities in the event described by the verb. Canonical ditransitive verbs include give, show, and teach; in each case, the verb requires an agent (a giver, shower, or teacher, respectively), a theme (the thing given, shown, or taught), and a goal (the recipient, viewer, or student). The property of requiring two internal arguments makes ditransitive verbs syntactically unique. Selection in generative grammar is often modeled as syntactic sisterhood, so ditransitive verbs immediately raise the question of whether a verb may have two sisters, requiring a ternary-branching structure, or whether one of the two internal arguments is not in a sisterhood relation with the verb.
Another important property of English ditransitive constructions is the two syntactic structures associated with them. In the so-called “double object construction,” or DOC, the goal and theme both are simple NPs and appear following the verb in the order V-goal-theme. In the “dative construction,” the goal is a PP rather than an NP and follows the theme in the order V-theme-to-goal. Many ditransitive verbs allow both structures (e.g., give John a book/give a book to John). Some verbs are restricted to appear only in one or the other (e.g., demonstrate a technique to the class/*demonstrate the class a technique; cost John $20/*cost $20 to John). For verbs which allow both structures, there can be slightly different interpretations available for each. Cross-linguistic results reveal that the underlying structural distinctions and their interpretive correlates are pervasive, even in the face of significant surface differences between languages. The detailed analysis of these questions has led to considerable progress in generative syntax. For example, the discovery of the hierarchical relationship between the first and second arguments of a ditransitive has been key in motivating the adoption of binary branching and the vP hypothesis. Many outstanding questions remain, however, and the syntactic encoding of ditransitivity continues to inform the development of grammatical theory.
Sónia Frota and Marina Vigário
The syntax–phonology interface refers to the way syntax and phonology are interconnected. Although syntax and phonology constitute different language domains, it seems undisputed that they relate to each other in nontrivial ways. There are different theories about the syntax–phonology interface. They differ in how far each domain is seen as relevant to generalizations in the other domain, and in the types of information from each domain that are available to the other.
Some theories see the interface as unlimited in the direction and types of syntax–phonology connections, with syntax impacting on phonology and phonology impacting on syntax. Other theories constrain mutual interaction to a set of specific syntactic phenomena (i.e., discourse-related) that may be influenced by a limited set of phonological phenomena (namely, heaviness and rhythm). In most theories, there is an asymmetrical relationship: specific types of syntactic information are available to phonology, whereas syntax is phonology-free.
The role that syntax plays in phonology, as well as the types of syntactic information that are relevant to phonology, is also a matter of debate. At one extreme, Direct Reference Theories claim that phonological phenomena, such as external sandhi processes, refer directly to syntactic information. However, approaches arguing for a direct influence of syntax differ on the types of syntactic information needed to account for phonological phenomena, from syntactic heads and structural configurations (like c-command and government) to feature checking relationships and phase units. The precise syntactic information that is relevant to phonology may depend on (the particular version of) the theory of syntax assumed to account for syntax–phonology mapping. At the other extreme, Prosodic Hierarchy Theories propose that syntactic and phonological representations are fundamentally distinct and that the output of the syntax–phonology interface is prosodic structure. Under this view, phonological phenomena refer to the phonological domains defined in prosodic structure. The structure of phonological domains is built from the interaction of a limited set of syntactic information with phonological principles related to constituent size, weight, and eurhythmic effects, among others. The kind of syntactic information used in the computation of prosodic structure distinguishes between different Prosodic Hierarchy Theories: the relation-based approach makes reference to notions like head-complement, modifier-head relations, and syntactic branching, while the end-based approach focuses on edges of syntactic heads and maximal projections. Common to both approaches is the distinction between lexical and functional categories, with the latter being invisible to the syntax–phonology mapping. Besides accounting for external sandhi phenomena, prosodic structure interacts with other phonological representations, such as metrical structure and intonational structure.
As shown by the theoretical diversity, the study of the syntax–phonology interface raises many fundamental questions. A systematic comparison among proposals with reference to empirical evidence is lacking. In addition, findings from language acquisition and development and language processing constitute novel sources of evidence that need to be taken into account. The syntax–phonology interface thus remains a challenging research field in the years to come.
Erich R. Round
The non–Pama-Nyungan, Tangkic languages were spoken until recently in the southern Gulf of Carpentaria, Australia. The most extensively documented are Lardil, Kayardild, and Yukulta. Their phonology is notable for its opaque, word-final deletion rules and extensive word-internal sandhi processes. The morphology contains complex relationships between sets of forms and sets of functions, due in part to major historical refunctionalizations, which have converted case markers into markers of tense and complementization and verbal suffixes into case markers. Syntactic constituency is often marked by inflectional concord, resulting frequently in affix stacking. Yukulta in particular possesses a rich set of inflection-marking possibilities for core arguments, including detransitivized configurations and an inverse system. These relate in interesting ways historically to argument marking in Lardil and Kayardild. Subordinate clauses are marked for tense across most constituents other than the subject, and such tense marking is also found in main clauses in Lardil and Kayardild, which have lost the agreement and tense-marking second-position clitic of Yukulta. Under specific conditions of co-reference between matrix and subordinate arguments, and under certain discourse conditions, clauses may be marked, on all or almost all words, by complementization markers, in addition to inflection for case and tense.
In the linguistic literature, the term theme has several interpretations, one of which relates to discourse analysis and two others to sentence structure. In a more general (or global) sense, one may speak about the theme or topic (or topics) of a text (or discourse), that is, analyze relations that go beyond the sentence boundary and try to identify some characteristic subject(s) for the text (discourse) as a whole. This kind of analysis falls mostly within the domain of information retrieval and only partially takes into account linguistically based considerations. The main linguistically based usage of the term theme concerns relations within the sentence. Theme is understood to be one of the (syntactico-) semantic relations and is used as the label of one of the arguments of the verb; the whole network of these relations is called thematic relations or roles (or, in the terminology of Chomskyan generative theory, theta roles and theta grids). Alternatively, from the point of view of the communicative function of the language reflected in the information structure of the sentence, the theme (or topic) of a sentence is distinguished from the rest of it (rheme, or focus, as the case may be) and attention is paid to the semantic consequences of the dichotomy (especially in relation to presuppositions and negation) and its realization (morphological, syntactic, prosodic) in the surface shape of the sentence. In some approaches to morphosyntactic analysis, the term theme is also used to refer to the part of the word to which inflections are added, typically composed of the root and an added vowel.
Throughout the 20th century, structuralist and generative linguists argued that the study of the language system (langue, competence) must be separated from the study of language use (parole, performance), but this view of language has been called into question by usage-based linguists, who have argued that the structure and organization of a speaker’s linguistic knowledge is the product of language use or performance. On this account, language is seen as a dynamic system of fluid categories and flexible constraints that are constantly restructured and reorganized under the pressure of domain-general cognitive processes that are involved not only in the use of language but also in other cognitive phenomena such as vision and (joint) attention. The general goal of usage-based linguistics is to develop a framework for the analysis of the emergence of linguistic structure and meaning.
In order to understand the dynamics of the language system, usage-based linguists study how languages evolve, both in history and language acquisition. One aspect that plays an important role in this approach is frequency of occurrence. As frequency strengthens the representation of linguistic elements in memory, it facilitates the activation and processing of words, categories, and constructions, which in turn can have long-lasting effects on the development and organization of the linguistic system. A second aspect that has been very prominent in the usage-based study of grammar concerns the relationship between lexical and structural knowledge. Since abstract representations of linguistic structure are derived from language users’ experience with concrete linguistic tokens, grammatical patterns are generally associated with particular lexical expressions.
Language is a system that maps meanings to forms, but the mapping is not always one-to-one. Variation means that one meaning corresponds to multiple forms, for example faster ~ more fast. The choice is not uniquely determined by the rules of the language, but is made by the individual at the time of performance (speaking, writing). Such choices abound in human language. They are usually not just a matter of free will, but involve preferences that depend on the context, including the phonological context. Phonological variation is a situation where the choice among expressions is phonologically conditioned, sometimes statistically, sometimes categorically. In this overview, we take a look at three studies of variable vowel harmony in three languages (Finnish, Hungarian, and Tommo So) formulated in three frameworks (Partial Order Optimality Theory, Stochastic Optimality Theory, and Maximum Entropy Grammar). For example, both Finnish and Hungarian have Backness Harmony: vowels must be all [+back] or all [−back] within a single word, with the exception of neutral vowels that are compatible with either. Surprisingly, some stems allow both [+back] and [−back] suffixes in free variation, for example, analyysi-na ~ analyysi-nä ‘analysis-ESS’.
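The Maximum Entropy Grammar framework mentioned above has a simple formal core that is worth making explicit: each candidate form receives a harmony score equal to the negative weighted sum of its constraint violations, and its probability is proportional to the exponential of that score. The sketch below illustrates the computation for the free variation between a [+back] and a [−back] suffix on a disharmonic stem; the constraint names, weights, and violation profiles are hypothetical, chosen purely for illustration, and are not the values from the studies discussed.

```python
import math

def maxent_probs(candidates, weights):
    """Probability of each candidate under a Maximum Entropy grammar.

    candidates: dict mapping candidate -> {constraint: violation count}
    weights:    dict mapping constraint -> nonnegative weight
    Harmony(c) = -(sum of weight * violations); P(c) is proportional to exp(Harmony(c)).
    """
    harmonies = {
        cand: -sum(weights.get(con, 0.0) * v for con, v in viols.items())
        for cand, viols in candidates.items()
    }
    z = sum(math.exp(h) for h in harmonies.values())  # normalizing constant
    return {cand: math.exp(h) / z for cand, h in harmonies.items()}

# Hypothetical weights: agreeing with the nearest vowel matters more
# than agreeing with the back vowels of the root.
weights = {"Harmony-local": 2.0, "Harmony-root": 1.0}
candidates = {
    "analyysi-na": {"Harmony-local": 1},  # back suffix clashes with nearest (front) vowel
    "analyysi-nä": {"Harmony-root": 1},   # front suffix clashes with the back root vowels
}
probs = maxent_probs(candidates, weights)
# With these toy weights, the front-suffixed form is preferred (about 73%)
# but the back-suffixed variant keeps nonzero probability: free variation.
```

Because probabilities fall out of the weighted violations rather than a strict ranking, the model captures statistical (rather than categorical) phonological conditioning directly, which is what distinguishes it from classical Optimality Theory.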
Eystein Dahl and Antonio Fábregas
Zero or null morphology refers to morphological units that are devoid of phonological content. Whether such entities should be postulated is one of the most controversial issues in morphological theory, with disagreements in how the concept should be delimited, what would count as an instance of zero morphology inside a particular theory, and whether such objects should be allowed even as mere analytical instruments.
With respect to the first problem, given that zero morphology is a hypothesis that comes from certain analyses, delimiting what counts as a zero morpheme is not a trivial matter. The concept must be carefully differentiated from others that intuitively also involve situations where there is no overt morphological marking: cumulative morphology, phonological deletion, etc.
Regarding the second issue, what counts as null can also depend on the specific theories within which the proposal is made. In the strict sense, zero morphology involves a complete morphosyntactic representation that is associated with zero phonological content, but there are other notions of zero morphology that differ from the one discussed here, such as absolute absence of morphological expression, in addition to specific theory-internal interpretations of what counts as null. Thus, it is also important to consider the different ways in which something can be morphologically silent.
Finally, with respect to the third side of the debate, arguments are made for and against zero morphology, notably from the perspectives of falsifiability, acquisition, and psycholinguistics. Of particular impact is the question of which properties a theory should have in order to block the possibility that zero morphology exists, and conversely, which properties theories that accept zero morphology associate with null morphemes.
An important ingredient in this debate has to do with two empirical domains: zero derivation and paradigmatic uniformity. Ultimately, the plausibility of zero morphemes depends on whether theories that posit them account for these two empirical patterns better than theories that ban zero morphology.