Katie Wagner and David Barner
Human experience of color results from a complex interplay of perceptual and linguistic systems. At the lowest level of perception, the human visual system transforms the visible light portion of the electromagnetic spectrum into a rich, continuous three-dimensional experience of color. Despite our ability to perceptually discriminate millions of different color shades, most languages divide color into a limited number of discrete categories. While the meanings of color words are constrained by perception, perception does not fully define them. Once color words are acquired, they may in turn influence our memory and processing speed for color, although it is unlikely that language influences the lowest levels of color perception.
One approach to examining the relationship between perception and language in forming our experience of color is to study children as they acquire color language. Children produce color words in speech for many months before acquiring adult meanings for color words. Research in this area has focused on whether children’s difficulties stem from (a) an inability to identify color properties as a likely candidate for word meanings, or alternatively (b) inductive learning of language-specific color word boundaries. Lending plausibility to the first account, there is evidence that children more readily attend to object traits like shape, rather than color, as likely candidates for word meanings. However, recent studies have found that children have meanings for some color words before they begin to produce them in speech, indicating that they may in fact be able to successfully identify color as a candidate for word meaning early in the color word learning process. There is also evidence that prelinguistic infants, like adults, perceive color categorically. While these perceptual categories likely constrain the meanings that children consider, they cannot fully define color word meanings because languages vary in both the number and location of color word boundaries. Recent evidence suggests that the delay in color word acquisition primarily stems from an inductive process of refining these boundaries.
Japanese is a language in which the grammatical status of arguments and adjuncts is marked exclusively by postnominal case markers, and various argument realization patterns can be assessed by their case marking. Since Japanese is typologically categorized as a nominative-accusative language, the unmarked case-marking frame for transitive predicates of the non-stative (or eventive) type is ‘nominative-accusative’. Nevertheless, transitive predicates falling into the stative class often show other case-marking alignments, such as ‘nominative-nominative’ and ‘dative-nominative’. Consequently, Japanese displays a much wider variety of argument realization patterns than its typological character as a nominative-accusative language would lead one to expect.
In fact, argument marking can be far more elastic and variable, with the variations motivated by several linguistic factors. Arguments often have the option of receiving either syntactic or semantic case, with no difference in logical or cognitive meaning (as in plural agent and source agent alternations) or depending on the meanings their predicates carry (as in the locative alternation). A type of case marking that is not normally available in main clauses can sometimes be obtained in embedded contexts (i.e., in exceptional case marking and small-clause constructions). In complex predicates, including causative and indirect passive predicates, arguments are case-marked differently from their base clauses by virtue of suffixation, and their case patterns follow the mono-clausal case array, despite the fact that they have multi-clausal structures.
Various case marking options are also made available for arguments by grammatical operations. Some processes effect a change in the grammatical relations and case marking of arguments with no affixation or embedding. Japanese has the grammatical process of subjectivization, which creates extra (non-thematic) major subjects, many of which are identified as instances of ‘possessor raising’ (or argument ascension). Another type of grammatical process reduces the number of arguments by incorporating a noun into the predicate, as found in light verb constructions with suru ‘do’ and complex adjective constructions formed on the negative adjective nai ‘non-existent.’
Malka Rappaport Hovav
Words are sensitive to syntactic context. Argument realization is the study of the relation between argument-taking words, the syntactic contexts they appear in and the interpretive properties that constrain the relation between them.
Blending is a type of word formation in which two or more words are merged into one so that the blended constituents are either clipped or partially overlap. An example of a typical blend is brunch, in which the beginning of the word breakfast is joined with the ending of the word lunch. In many cases, such as motel (motor + hotel) or blizzaster (blizzard + disaster), the constituents of a blend overlap at segments that are phonologically or graphically identical. In some blends, both constituents retain their form as a result of overlap, for example, stoption (stop + option). These examples illustrate only a handful of the variety of forms blends may take; more exotic examples include formations like Thankshallowistmas (Thanksgiving + Halloween + Christmas). The visual and auditory amalgamation in blends is reflected on the semantic level. It is common to form blends meaning a combination or a product of two objects or phenomena, such as an animal breed (e.g., zorse, a cross between a zebra and a horse), an interlanguage variety (e.g., franglais, a French blend of français and anglais denoting a mixture of the French and English languages), or another type of mix (e.g., a shress, an item of clothing with features of both a shirt and a dress).
Blending as a word formation process can be regarded as a subtype of compounding because, like compounds, blends are formed of two (or sometimes more) content words and are semantically either hyponyms of one of their constituents or exhibit some kind of paradigmatic relationship between the constituents. In contrast to compounds, however, the formation of blends is restricted by a number of phonological constraints, given that the resulting formation is a single word. In particular, blends tend to be of the same length as the longest of their constituent words and to preserve the main stress of one of their constituents. Certain regularities are also observed in the ordering of the words in a blend (e.g., shorter first, more frequent first) and in the position of the switch point, that is, where one blended word is cut off and switched to another (typically at a syllable boundary or at the onset/rime boundary). These regularities of blend formation can be related to the recognizability of the blended words.
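The onset/rime switch point described above can be made concrete with a deliberately naive sketch. This is an illustrative toy rule, not a published algorithm: it keeps the initial consonant cluster of the first word and attaches the second word from its first vowel onward, which happens to derive several classic blends.

```python
VOWELS = set("aeiou")

def split_onset(word):
    """Split a word into its initial consonant cluster (onset) and the rest."""
    i = 0
    while i < len(word) and word[i] not in VOWELS:
        i += 1
    return word[:i], word[i:]

def blend(w1, w2):
    """Naive blend: onset of the first word + the second word from its
    first vowel onward. Real blends also weigh stress, overall length,
    and the recognizability of both source words."""
    onset1, _ = split_onset(w1)
    _, rest2 = split_onset(w2)
    return onset1 + rest2

print(blend("breakfast", "lunch"))  # brunch
print(blend("motor", "hotel"))      # motel (the 'ot' overlap falls out naturally)
print(blend("smoke", "fog"))        # smog
```

The sketch handles only blends whose switch point falls at the very first onset/rime boundary; cases like stoption or Thankshallowistmas would require modeling overlap and multiple switch points.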
Bracketing paradoxes—constructions whose morphosyntactic and morpho-phonological structures appear to be irreconcilably at odds (e.g., unhappier)—are unanimously taken to point to truths about the derivational system that we have not yet grasped. Consider that the prefix un- must be structurally separate in some way from happier both for its own reasons (its [n] surprisingly does not assimilate in Place to a following consonant (e.g., u[n]popular)), and for reasons external to the prefix (the suffix -er must be insensitive to the presence of un-, as the comparative cannot attach to bases of three syllables or longer (e.g., *intelligenter)). But, un- must simultaneously be present in the derivation before -er is merged, so that unhappier can have the proper semantic reading (‘more unhappy’, and not ‘not happier’). Bracketing paradoxes emerged as a problem for generative accounts of both morphosyntax and morphophonology only in the 1970s. With the rise of restrictions on and technology used to describe and represent the behavior of affixes (e.g., the Affix-Ordering Generalization, Lexical Phonology and Morphology, the Prosodic Hierarchy), morphosyntacticians and phonologists were confronted with this type of inconsistent derivation in many unrelated languages.
Haihua Pan and Yuli Feng
Cross-linguistic data can add new insights to the development of semantic theories or even induce a shift of research paradigm. The major topics in semantic studies, such as bare noun denotation, quantification, degree semantics, polarity items, donkey anaphora and binding principles, long-distance reflexives, negation, tense and aspect, and eventuality, are all discussed by semanticists working on the Chinese language. The issues of particular interest include, but are not limited to: (i) the denotation of Chinese bare nouns; (ii) categorization and quantificational mapping strategies of Chinese quantifier expressions (i.e., whether the behaviors of Chinese quantifier expressions fit into the dichotomy of A-quantification and D-quantification); (iii) multiple uses of quantifier expressions (e.g., dou) and their implications for the interrelation of semantic concepts like distributivity, scalarity, exclusiveness, exhaustivity, maximality, etc.; (iv) the interaction among universal adverbials and that between universal adverbials and various types of noun phrases, which may pose a challenge to the Principle of Compositionality; (v) the semantics of degree expressions in Chinese; (vi) the non-interrogative uses of wh-phrases in Chinese and their influence on the theories of polarity items, free choice items, and epistemic indefinites; (vii) how the concepts of E-type pronouns and D-type pronouns are manifested in the Chinese language and whether such pronoun interpretations correspond to specific sentence types; (viii) what devices Chinese adopts to locate time (i.e., whether tense interpretation corresponds to certain syntactic projections or is solely determined by semantic information and pragmatic reasoning); (ix) how the interpretation of Chinese aspect markers can be captured by event structures, possible world semantics, and quantification; (x) how the long-distance binding of Chinese ziji ‘self’ and the blocking effect induced by first and second person pronouns can be accounted for by existing theories of beliefs, attitude reports, and logophoricity; (xi) the distribution of various negation markers and their correspondence to the semantic properties of the predicates with which they combine; and (xii) whether Chinese topic-comment structures are constrained by both semantic and pragmatic factors or by syntactic factors only.
Clinical linguistics is the branch of linguistics that applies linguistic concepts and theories to the study of language disorders. As the name suggests, clinical linguistics is a dual-facing discipline. Although the conceptual roots of this field are in linguistics, its domain of application is the vast array of clinical disorders that may compromise the use and understanding of language. Both dimensions of clinical linguistics can be addressed through an examination of specific linguistic deficits in individuals with neurodevelopmental disorders, craniofacial anomalies, adult-onset neurological impairments, psychiatric disorders, and neurodegenerative disorders. Clinical linguists are interested in the full range of linguistic deficits in these conditions, including phonetic deficits of children with cleft lip and palate, morphosyntactic errors in children with specific language impairment, and pragmatic language impairments in adults with schizophrenia.
Like many applied disciplines in linguistics, clinical linguistics sits at the intersection of a number of areas. Its relationships to the study of communication disorders and to speech-language pathology (speech and language therapy in the United Kingdom) are two particularly important points of intersection. Speech-language pathology is the area of clinical practice that assesses and treats children and adults with communication disorders. All language disorders restrict an individual’s ability to communicate freely with others in a range of contexts and settings. So language disorders are first and foremost communication disorders. To understand language disorders, it is useful to think of them in terms of points of breakdown on a communication cycle that tracks the progress of a linguistic utterance from its conception in the mind of a speaker to its comprehension by a hearer. This cycle permits the introduction of a number of important distinctions in language pathology, such as the distinction between a receptive and an expressive language disorder, and between a developmental and an acquired language disorder. The cycle is also a useful model with which to conceptualize a range of communication disorders other than language disorders. These other disorders, which include hearing, voice, and fluency disorders, are also relevant to clinical linguistics.
Clinical linguistics draws on the conceptual resources of the full range of linguistic disciplines to describe and explain language disorders. These disciplines include phonetics, phonology, morphology, syntax, semantics, pragmatics, and discourse. Each of these linguistic disciplines contributes concepts and theories that can shed light on the nature of language disorder. A wide range of tools and approaches are used by clinical linguists and speech-language pathologists to assess, diagnose, and treat language disorders. They include the use of standardized and norm-referenced tests, communication checklists and profiles (some administered by clinicians, others by parents, teachers, and caregivers), and qualitative methods such as conversation analysis and discourse analysis. Finally, clinical linguists can contribute to debates about the nosology of language disorders. In order to do so, however, they must have an understanding of the place of language disorders in internationally recognized classification systems such as the 2013 Diagnostic and Statistical Manual of Mental Disorders (DSM-5) of the American Psychiatric Association.
There are two main theoretical traditions in semantics. One is based on realism, where meanings are described as relations between language and the world, often in terms of truth conditions. The other is cognitivistic, where meanings are identified with mental structures. This article presents some of the main ideas and theories within the cognitivist approach.
A central tenet of cognitively oriented theories of meaning is that there are close connections between the meaning structures and other cognitive processes. In particular, parallels between semantics and visual processes have been studied. As a complement, the theory of embodied cognition focuses on the relation between actions and components of meaning.
One of the main methods of representing cognitive meaning structures is to use image schemas and idealized cognitive models. Such schemas focus on spatial relations between various semantic elements. Image schemas are often constructed using Gestalt psychological notions, including those of trajector and landmark, corresponding to figure and ground. In this tradition, metaphors and metonymies are considered to be central meaning-transforming processes.
A related approach is force dynamics. Here, the semantic schemas are construed from forces and their relations rather than from spatial relations. Recent extensions involve cognitive representations of actions and events, which then form the basis for a semantics of verbs.
A third approach is the theory of conceptual spaces. In this theory, meanings are represented as regions of semantic domains such as space, time, color, weight, size, and shape. For example, strong evidence exists that color words in a large variety of languages correspond to such regions. This approach has been extended to a general account of the semantics of some of the main word classes, including adjectives, verbs, and prepositions. The theory of conceptual spaces shows similarities to the older frame semantics and feature analysis, but it puts more emphasis on geometric structures.
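The idea of meanings as regions of a semantic domain can be illustrated with a toy sketch. The prototype coordinates below are invented for illustration: each color term is modeled as the region of points closer to its prototype than to any other (a Voronoi cell, which is guaranteed to be convex), so categorization reduces to nearest-prototype classification.

```python
from math import dist

# Toy conceptual space for color: points in three-dimensional RGB space.
# Each color term denotes the Voronoi cell of a prototype point; the
# prototype coordinates here are invented for illustration.
prototypes = {
    "red":    (255, 0, 0),
    "green":  (0, 255, 0),
    "blue":   (0, 0, 255),
    "yellow": (255, 255, 0),
}

def categorize(point):
    """Assign a point in the space to the term with the nearest prototype."""
    return min(prototypes, key=lambda term: dist(prototypes[term], point))

print(categorize((200, 30, 40)))   # red
print(categorize((240, 220, 10)))  # yellow
```

A dimensionally structured space of this kind is what distinguishes the approach from flat feature lists: similarity and category membership fall out of the geometry.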
A general criticism against cognitive theories of semantics is that they only consider the meaning structures of individuals, but neglect the social aspects of semantics, that is, that meanings are shared within a community. Recent theoretical proposals counter this by suggesting that semantics should be seen as a meeting of minds, that is, communicative processes that lead to the alignment of meanings between individuals. On this approach, semantics is seen as a product of communication, constrained by the cognitive mechanisms of the individuals.
Modification is a combinatorial semantic operation between a modifier and a modifiee. Take, for example, vegetarian soup: the attributive adjective vegetarian modifies the nominal modifiee soup and thus constrains the range of potential referents of the complex expression to soups that are vegetarian. Similarly, in Ben is preparing a soup in the camper, the adverbial in the camper modifies the preparation by locating it. Notably, modifiers can have fairly drastic effects; in fake stove, the attribute fake has the effect that the complex expression singles out objects that seem to be stoves, but are not. Intuitively, modifiers contribute additional information that is not explicitly called for by the target the modifier relates to. Speaking in terms of logic, this roughly says that modification is an endotypical operation; that is, it does not change the arity, or logical type, of the modified target constituent. Speaking in terms of syntax, this predicts that modifiers are typically adjuncts and thus do not change the syntactic distribution of their respective target; therefore, modifiers can be easily iterated (see, for instance, spicy vegetarian soup or Ben prepared a soup in the camper yesterday). This initial characterization sets modification apart from other combinatorial operations such as argument satisfaction and quantification: combining a soup with prepare satisfies an argument slot of the verbal head and thus reduces its arity (see, for instance, *prepare a soup a quiche). Quantification, as for example in the combination of the quantifier every with the noun soup, maps a nominal property onto a quantifying expression with a different distribution (see, for instance, *a every soup). Their comparatively loose connection to their hosts renders modifiers a flexible, though certainly not random, means within combinatorial meaning constitution. The foundational question is how to work their being endotypical into a full-fledged compositional analysis.
On the one hand, modifiers can be considered endotypical functors by virtue of their lexical endowment; for instance, vegetarian would be born as a higher-order function from predicates to predicates. On the other hand, modification can be considered a rule-based operation; for instance, vegetarian would denote a simple predicate from entities to truth-values that receives its modifying endotypical function only by virtue of a separate modification rule. In order to assess this and related controversies empirically, research on modification pays particular attention to interface questions such as the following: how do structural conditions and the modifying function conspire in establishing complex interpretations? What roles do ontological information and fine-grained conceptual knowledge play in the course of concept combination?
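The type-theoretic contrast drawn above can be sketched in a toy extensional model, using Python functions as stand-ins for lambda terms. The domain and lexicon are invented for illustration: intersective modification is endotypical (predicate in, predicate out), while quantification changes the type of its nominal argument.

```python
# Toy extensional model: nouns and adjectives denote predicates
# (entity -> bool); determiners denote generalized quantifiers
# (predicate -> predicate -> bool). Domain and lexicon are invented.

domain = {"minestrone", "gazpacho", "quiche"}

soup = lambda x: x in {"minestrone", "gazpacho"}
vegetarian = lambda x: x in {"minestrone", "gazpacho", "quiche"}

# Quantification changes the logical type: 'every' maps a restrictor
# predicate to a function over scope predicates, yielding a truth value.
every = lambda p: lambda q: all(q(x) for x in domain if p(x))
some = lambda p: lambda q: any(q(x) for x in domain if p(x))

# Intersective modification is endotypical: predicate in, predicate out,
# so it can be iterated without changing the noun's distribution.
def modify(adj, noun):
    return lambda x: adj(x) and noun(x)

print(every(soup)(vegetarian))                                  # True
print(some(modify(vegetarian, soup))(lambda x: x == "quiche"))  # False
```

Under the first analysis mentioned above, vegetarian itself would denote something like `modify(vegetarian, ...)`; under the second, it denotes the simple predicate and the rule `modify` supplies the endotypical function. Non-intersective modifiers like fake, of course, resist this simple conjunction treatment.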
Compound and complex predicates—predicates that consist of two or more lexical items and function as the predicate of a single sentence—present an important class of linguistic objects that pertain to an enormously wide range of issues in the interactions of morphology, phonology, syntax, and semantics. Japanese makes extensive use of compounding to expand a single verb into a complex one. These compounding processes range over multiple modules of the grammatical system, thus straddling the borders between morphology, syntax, phonology, and semantics. In terms of degree of phonological integration, two types of compound predicates can be distinguished. In the first type, called tight compound predicates, two elements from the native lexical stratum are tightly fused and inflect as a whole for tense. In this group, Verb-Verb compound verbs such as arai-nagasu [wash-let.flow] ‘to wash away’ and hare-agaru [sky.be.clear-go.up] ‘for the sky to clear up entirely’ are preponderant in numbers and productivity over Noun-Verb compound verbs such as tema-doru [time-take] ‘to take a lot of time (to finish).’
The second type, called loose compound predicates, takes the form of “Noun + Predicate (Verbal Noun [VN] or Adjectival Noun [AN]),” as in post-syntactic compounds like [sinsya : koonyuu] no okyakusama ([new.car : purchase] GEN customers) ‘customer(s) who purchase(d) a new car,’ where the symbol “:” stands for a short phonological break. Remarkably, loose compounding allows combinations of a transitive VN with its agent subject (external argument), as in [Supirubaagu : seisaku] no eiga ([Spielberg : produce] GEN film) ‘a film/films that Spielberg produces/produced’—a pattern that is illegitimate in tight compounds and has in fact been considered universally impossible in the world’s languages in verbal compounding and noun incorporation.
In addition to a huge variety of tight and loose compound predicates, Japanese has a further class of syntactic constructions that as a whole function as complex predicates. Typical examples are the light verb construction, where a clause headed by a VN is followed by the light verb suru ‘do,’ as in Tomodati wa sinsya o koonyuu (sae) sita [friend TOP new.car ACC purchase (even) did] ‘My friend (even) bought a new car,’ and the human physical attribute construction, as in Sensei wa aoi me o site-iru [teacher TOP blue eye ACC do-ing] ‘My teacher has blue eyes.’ In these constructions, the nominal phrases immediately preceding the verb suru are semantically characterized as indefinite and non-referential and reject syntactic operations such as movement and deletion. The semantic indefiniteness and syntactic immobility of the NPs involved are also observed with a construction composed of a human subject and the verb aru ‘be,’ as in Gakkai ni wa oozei no sankasya ga atta ‘There was a large number of participants at the conference.’ The constellation of such “word-like” properties shared by these compound and complex predicates poses challenging problems for current theories of morphology-syntax-semantics interactions with regard to such topics as lexical integrity, morphological compounding, syntactic incorporation, semantic incorporation, pseudo-incorporation, and indefinite/non-referential NPs.
Computational semantics performs automatic meaning analysis of natural language. Research in computational semantics designs meaning representations and develops mechanisms for automatically assigning those representations and reasoning over them. Computational semantics is not a single monolithic task but consists of many subtasks, including word sense disambiguation, multi-word expression analysis, semantic role labeling, the construction of sentence semantic structure, coreference resolution, and the automatic induction of semantic information from data.
The development of manually constructed resources has been vastly important in driving the field forward. Examples include WordNet, PropBank, FrameNet, VerbNet, and TimeBank. These resources specify the linguistic structures to be targeted in automatic analysis, and they provide high-quality human-generated data that can be used to train machine learning systems. Supervised machine learning based on manually constructed resources is a widely used technique.
A second core strand has been the induction of lexical knowledge from text data. For example, words can be represented through the contexts in which they appear (called distributional vectors or embeddings), such that semantically similar words have similar representations. Or semantic relations between words can be inferred from patterns of words that link them. Wide-coverage semantic analysis always needs more data, both lexical knowledge and world knowledge, and automatic induction at least alleviates the problem.
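The distributional idea sketched above, that semantically similar words occur in similar contexts, can be illustrated with a minimal count-based sketch. The toy corpus and window size are illustrative; real systems use large corpora plus dimensionality reduction or neural training to produce dense embeddings.

```python
from collections import Counter
from math import sqrt

def context_vectors(sentences, window=2):
    """Represent each word by counts of the words occurring within
    `window` positions of it anywhere in the corpus."""
    vectors = {}
    for tokens in sentences:
        for i, word in enumerate(tokens):
            ctx = vectors.setdefault(word, Counter())
            for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
                if j != i:
                    ctx[tokens[j]] += 1
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v.get(k, 0) for k in u)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

corpus = [
    "the cat chased the mouse".split(),
    "the dog chased the ball".split(),
    "the cat ate the mouse".split(),
    "the dog ate the bone".split(),
]
vecs = context_vectors(corpus)
# 'cat' and 'dog' occur in near-identical contexts in this toy corpus,
# so they come out more similar to each other than either is to 'chased'.
print(cosine(vecs["cat"], vecs["dog"]) > cosine(vecs["cat"], vecs["chased"]))  # True
```

The same machinery, scaled up, underlies the lexical induction described above: similarity of context vectors serves as a proxy for semantic similarity.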
Compositionality is a third core theme: the systematic construction of structural meaning representations of larger expressions from the meaning representations of their parts. The representations typically use logics of varying expressivity, which makes them well suited to performing automatic inferences with theorem provers.
Manual specification and automatic acquisition of knowledge are closely intertwined. Manually created resources are automatically extended or merged. The automatic induction of semantic information is guided and constrained by manually specified information, which is much more reliable. And for restricted domains, the construction of logical representations is learned from data.
It is at the intersection of manual specification and machine learning that some of the current larger questions of computational semantics are located. For instance, should we build general-purpose semantic representations, or is lexical knowledge simply too domain-specific, and would we be better off learning task-specific representations every time? When performing inference, is it more beneficial to have the solid ground of a human-generated ontology, or is it better to reason directly with text snippets for more fine-grained and gradual inference? Do we obtain a better and deeper semantic analysis as we use better and deeper manually specified linguistic knowledge, or is the future in powerful learning paradigms that learn to carry out an entire task from natural language input and output alone, without pre-specified linguistic knowledge?
Connectionism is an important theoretical framework for the study of human cognition and behavior. Also known as Parallel Distributed Processing (PDP) or Artificial Neural Networks (ANN), connectionism advocates that learning, representation, and processing of information in the mind are parallel, distributed, and interactive in nature. It argues that human cognition emerges as the outcome of large networks of interactive processing units operating simultaneously. Inspired by findings from neuroscience and artificial intelligence, connectionism is a powerful computational tool, and it has had a profound impact on many areas of research, including linguistics. Since the beginning of connectionism, many connectionist models have been developed to account for a wide range of important linguistic phenomena observed in monolingual research, such as speech perception, speech production, semantic representation, and early lexical development in children. Recently, the application of connectionism to bilingual research has also gathered momentum. Connectionist models are often precise in the specification of modeling parameters and flexible in the manipulation of relevant variables, and they can therefore provide significant advantages in testing mechanisms underlying language processes.
Conversational implicatures (i) are implied by the speaker in making an utterance; (ii) are part of the content of the utterance, but (iii) do not contribute to direct (or explicit) utterance content; and (iv) are not encoded by the linguistic meaning of what has been uttered. In (1), Amelia asserts that she is on a diet, and implicates something different: that she is not having cake.
(1) Benjamin: Are you having some of this chocolate cake?
    Amelia: I’m on a diet.
Conversational implicatures are a subset of the implications of an utterance: namely those that are part of utterance content. Within the class of conversational implicatures, there are distinctions between particularized and generalized implicatures; implicated premises and implicated conclusions; and weak and strong implicatures.
An obvious question is how implicatures are possible: how can a speaker intentionally imply something that is not part of the linguistic meaning of the phrase she utters, and how can her addressee recover that utterance content? Working out what has been implicated is not a matter of deduction, but of inference to the best explanation. What is to be explained is why the speaker has uttered the words that she did, in the way and in the circumstances that she did.
Grice proposed that rational talk exchanges are cooperative and are therefore governed by a Cooperative Principle (CP) and conversational maxims: hearers can reasonably assume that rational speakers will attempt to cooperate and that rational cooperative speakers will try to make their contribution truthful, informative, relevant and clear, inter alia, and these expectations therefore guide the interpretation of utterances. On his view, since addressees can infer implicatures, speakers can take advantage of their ability, conveying implicatures by exploiting the maxims.
Grice’s theory aimed to show how implicatures could in principle arise. In contrast, work in linguistic pragmatics has attempted to model their actual derivation. Given the need for a cognitively tractable decision procedure, both the neo-Gricean school and work on communication in relevance theory propose a system with fewer principles than Grice’s. Neo-Gricean work attempts to reduce Grice’s array of maxims to just two (Horn) or three (Levinson), while Sperber and Wilson’s relevance theory rejects maxims and the CP and proposes that pragmatic inference hinges on a single communicative principle of relevance.
Conversational implicatures typically have a number of interesting properties, including calculability, cancelability, nondetachability, and indeterminacy. These properties can be used to investigate whether a putative implicature is correctly identified as such, although none of them provides a fail-safe test. A further test, embedding, has also been prominent in work on implicatures.
A number of phenomena that Grice treated as implicatures would now be treated by many as pragmatic enrichment contributing to the proposition expressed. But Grice’s postulation of implicatures was a crucial advance, both for its theoretical unification of apparently diverse types of utterance content and for the attention it drew to pragmatic inference and the division of labor between linguistic semantics and pragmatics in theorizing about verbal communication.
The term coordination refers to the juxtaposition of two or more conjuncts often linked by a conjunction such as and or or. The conjuncts (e.g., our friend and your teacher in Our friend and your teacher sent greetings) may be words or phrases of any type. They are a defining property of coordination, while the presence or absence of a conjunction depends on the specifics of the particular language. As a general phenomenon, coordination differs from subordination in that the conjuncts are typically symmetric in many ways: they often belong to like syntactic categories, and if nominal, each carries the same case. Additionally, if there is extraction, this must typically be out of all conjuncts in parallel, a phenomenon known as Across-the-Board extraction. Extraction of a single conjunct, or out of a single conjunct, is prohibited by the Coordinate Structure Constraint. Despite this overall symmetry, coordination does sometimes behave in an asymmetric fashion. Under certain circumstances, the conjuncts may be of unlike categories or extraction may occur out of one conjunct, but not another, thus yielding apparent violations of the Coordinate Structure Constraint. In addition, case and agreement show a wide range of complex and sometimes asymmetric behavior cross-linguistically. This tension between the symmetric and asymmetric properties of coordination is one of the reasons that coordination has remained an interesting analytical puzzle for many decades.
Within the general area of coordination, a number of specific sentence types have generated much interest. One is Gapping, in which two sentences are conjoined, but material (often the verb) is missing from the middle of the second conjunct, as in Mary ate beans and John _ potatoes. Another is Right Node Raising, in which shared material from the right edge of sentential conjuncts is placed in the right periphery of the entire sentence, as in The chefs prepared __ and the customers ate __ [a very elaborately constructed dessert]. Finally, some languages have a phenomenon known as comitative coordination, in which a verb has two arguments, one morphologically plural and the other comitative (e.g., with the preposition with), but the plural argument may be understood as singular. English does not have this phenomenon, but if it did, a sentence like We went to the movies with John could be understood as John and I went to the movies.
Marcel den Dikken and Teresa O’Neill
Copular sentences (sentences of the form A is B) have been prominent on the research agenda for linguists and philosophers of language since classical antiquity, and continue to be shrouded in considerable controversy. Central questions in the linguistic literature on copulas and copular sentences are (a) whether predicational, specificational, identificational, and equative copular sentences have a common underlying source; and, if so, (b) how the various surface types of copular sentences are derived from that underlier; (c) whether there is a typology of copulas; and (d) whether copulas are meaningful or meaningless.
The debate surrounding the postulation of multiple copular sentence types relies on criteria related to both meaning and form. Analyses based on meaning tend to focus on the question of whether or not one of the terms is a predicate of the other, whether or not the copula contributes meaning, and the information-structural properties of the construction. Analyses based on form focus on the flexibility of the linear ordering of the two terms of the construction, the surface distribution of the copular element, the restrictions imposed on the extraction of the two terms, the case and agreement properties of the construction, the omissibility of the copula or one of the two terms, and the connectivity effects exhibited by the construction.
Morphosyntactic variation in the domain of copular elements is an area of research with fruitful intersections between typological and generative approaches. A variety of criteria are presented in the literature to justify the postulation of multiple copulas or underlying representations for copular sentences. Another prolific body of research concerns the semantics of copular sentences. In the assessment of scholarship on copulas and copular sentences, the article critiques the ‘multiple copulas’ approach and examines ways in which the surface variety of copular sentence types can be accounted for in a ‘single copula’ analysis. The analysis of copular constructions continues to have far-reaching consequences in the context of linguistic theory construction, particularly the question of how a predicate combines with its subject in syntactic structure.
Denominal verbs are verbs formed from nouns by means of various word-formation processes such as derivation, conversion, or less common mechanisms like reduplication, change of pitch, or root and pattern. Because their well-formedness is determined by morphosyntactic, phonological, and semantic constraints, they have been analyzed from a variety of lexicalist and non-lexicalist perspectives, including Optimality Theory, Lexical Semantics, Cognitive Grammar, Onomasiology, and Neo-Construction Grammar. Independently of their structural shape, denominal verbs have in common that they denote events in which the referents of their base nouns (e.g., computer in the case of computerize) participate in a non-arbitrary way. While traditional labels like ‘ornative’, ‘privative’, ‘locative’, ‘instrumental’ and the like allow for a preliminary classification of denominal verbs, a more formal description has to account for at least three basic aspects, namely (1) competition among functionally similar word-formation patterns, (2) the polysemy of affixes, which precludes a neat one-to-one relation between derivatives displaying a particular affix and a particular semantic class, and (3) the relevance of generic knowledge and contextual information for the interpretation of (innovative) denominal verbs.
Željko Bošković and Troy Messick
Economy considerations have always played an important role in the generative theory of grammar. They are particularly prominent in the most recent instantiation of this approach, the Minimalist Program, which explores the possibility that Universal Grammar is an optimal way of satisfying requirements imposed on the language faculty by the external systems that interface with it, the language faculty itself being characterized by optimal, computationally efficient design. In this respect, the operations of the computational system that produce linguistic expressions must be optimal in that they must satisfy general considerations of simplicity and efficient design. Simply put, the guiding principles here are (a) do something only if you need to and (b) if you do need to, do it in the most economical/efficient way. These considerations ban superfluous steps in derivations and superfluous symbols in representations. Under economy guidelines, movement takes place only when there is a need for it (with both syntactic and semantic considerations playing a role here), and when it does take place, it takes place in the most economical way: it is as short as possible and carries as little material as possible. Furthermore, economy is evaluated locally, on the basis of immediately available structure. The locality of syntactic dependencies is also enforced by minimal search and by limiting the number of syntactic objects and the amount of structure accessible in the derivation. This is achieved by transferring parts of syntactic structure to the interfaces during the derivation, the transferred parts not being accessible for further syntactic operations.
Derivational morphology is a type of word formation that creates new lexemes, either by changing syntactic category or by adding substantial new meaning (or both) to a free or bound base. Derivation may be contrasted with inflection on the one hand or with compounding on the other. The distinctions between derivation and inflection and between derivation and compounding, however, are not always clear-cut. New words may be derived by a variety of formal means including affixation, reduplication, internal modification of various sorts, subtraction, and conversion. Affixation is best attested cross-linguistically, especially prefixation and suffixation. Reduplication is also widely found, with various internal changes like ablaut and root and pattern derivation less common. Derived words may fit into a number of semantic categories. For nouns, event and result, personal and participant, collective and abstract noun are frequent. For verbs, causative and applicative categories are well-attested, as are relational and qualitative derivations for adjectives. Languages frequently also have ways of deriving negatives, relational words, and evaluatives. Most languages have derivation of some sort, although there are languages that rely more heavily on compounding than on derivation to build their lexical stock. A number of topics have dominated the theoretical literature on derivation, including productivity (the extent to which new words can be created with a given affix or morphological process), the principles that determine the ordering of affixes, and the place of derivational morphology with respect to other components of the grammar. The study of derivation has also been important in a number of psycholinguistic debates concerning the perception and production of language.
Determiners are a nominal syntactic category distinct from both adjectives and nouns; they constitute a functional (aka closed or ‘minor’) category and they are typically located high inside the nominal phrasal structure.
From a syntactic point of view, the category of determiners is commonly understood to comprise the word classes of article, demonstrative, and quantifier, as well as non-adjectival possessives and some nominal agreement markers.
From a semantic point of view, determiners are assumed to function as quantifiers, especially within research informed by Generalized Quantifier Theory. However, this is a one-way entailment: although determiners in natural language are quantificational, their class contains only a subset of the logically possible quantifiers; this class is restricted by conservativity and other factors.
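As an illustration of the conservativity constraint just mentioned, Generalized Quantifier Theory standardly treats a determiner denotation D as a relation between two sets, and calls D conservative when its second argument matters only insofar as it intersects the first. The schematic definition below is the textbook formulation, not drawn from this entry:

```latex
% A determiner denotation D (a relation between a restrictor set A
% and a scope set B) is conservative iff:
\forall A\, \forall B\; \bigl[\, D(A)(B) \leftrightarrow D(A)(A \cap B) \,\bigr]

% Example: 'every' denotes the subset relation, which is conservative,
% since A \subseteq B holds exactly when A \subseteq (A \cap B):
\mathrm{every}(A)(B) \;\leftrightarrow\; A \subseteq B
  \;\leftrightarrow\; A \subseteq (A \cap B)
  \;\leftrightarrow\; \mathrm{every}(A)(A \cap B)
```

A classic example of a logically possible but non-conservative relation is the reverse inclusion B ⊆ A (sometimes associated with only), which is standardly argued not to be lexicalized as a determiner in natural language, in line with the restriction described above.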
The tension between the ‘syntactic’ and the ‘semantic’ perspective on determiners results in a degree of terminological confusion: it is not always clear which lexical items the Determiner category includes or what the function of determiners is; moreover, there is a tendency among syntacticians to view ‘Determiner’ as naming not a class, but a fixed position within a nominal phrasal template.
The study of determiners rose to prominence within grammatical theory during the 1980s, both due to advances in semantic theorizing, primarily Generalized Quantifier Theory, and due to the generalization of the X' phrasal schema to functional (minor) categories. Some issues in the nature and function of determiners that have been addressed in theoretical and typological work with considerable success include the categorial status of determiners, their (non-)universality, their structural position and feature makeup, their role in argumenthood and their interaction with nominal predicates, and their relation to pronouns. Unsurprisingly, issues in (in)definiteness, quantification, and specificity also figure prominently in research on determiners.
Displacement is a ubiquitous phenomenon in natural languages. Grammarians often speak of displacement in cases where the rules for the canonical word order of a language lead to the expectation of finding a word or phrase in a particular position in the sentence, but it instead surfaces in a different position and the canonical position remains empty: ‘Which book did you buy?’ is an example of displacement because the noun phrase ‘which book’, which acts as the grammatical object in the question, does not occur in the canonical object position, which in English is after the verb. Instead, it surfaces at the beginning of the sentence and the object position remains empty. Displacement is often used as a diagnostic for constituent structure because it affects only (but not all) constituents. In the clear cases, displaced constituents show properties associated with two distinct linear and hierarchical positions. Typically, one of these two positions c-commands the other and the displaced element is pronounced in the c-commanding position. Displacement also shows strong interactions with the path between the empty canonical position and the position where the element is pronounced: one often encounters morphological changes along this path and evidence for structural placement of the displaced constituent, as well as constraints on displacement induced by the path.
The exact scope of displacement as an analytically unified phenomenon varies from theory to theory. If more than one type of syntactic displacement is recognized, the question of the interaction between movement types arises. Displacement phenomena are extensively studied by syntacticians. Their enduring interest derives from the fact that the complex interactions between displacement and other aspects of syntax offer a powerful probe into the inner workings and architecture of the human syntactic faculty.