Article

This article is devoted to the description of perfect tenses in Romance. Perfects can be described as verbal forms which place events in the past with respect to some point of reference and indicate that the event has some special relevance at that point of reference; in this they are opposed to past tenses, which localize an event in the past with respect to the moment of utterance. Romance is an interesting language family with respect to perfect tenses, because it features a set of closely related constructions, almost all descending from the same diachronic source yet differing from each other in interesting ways. Romance also provides us with a lesson in the difficulty of pinning down and stating a single, obvious, and generally agreed-upon criterion for defining a perfect.

Article

The category of Personal/Participant/Inhabitant derived nouns comprises a conglomeration of derived nouns that denote, among others, agents, instruments, patients/themes, inhabitants, and followers of a person. Based on the thematic relations between the derived noun and its base lexeme, Personal/Participant/Inhabitant nouns can be classified into two subclasses. The first subclass comprises derived nouns that are deverbal and carry thematic readings (e.g., driver). The second subclass consists of derived nouns with athematic readings (e.g., Marxist). The examination of the category of Personal/Participant/Inhabitant nouns allows one to delve deeply into the study of multiplicity of meaning in word formation and the factors that bear on the readings of derived words. These factors range from the historical mechanisms that lead to multiplicity of meaning and the lexical-semantic properties of the bases from which these nouns are derived, to the syntactic context in which derived nouns occur and the pragmatic-encyclopedic facets of both the base and the derived lexeme.

Article

This paper provides an overview of polarity phenomena in human languages. There are three prominent paradigms of polarity items: negative polarity items (NPIs), positive polarity items (PPIs), and free choice items (FCIs). What they all have in common is that they have limited distribution: they cannot occur just anywhere, but only inside the scope of a licenser, which is negation or, more broadly, a nonveridical operator. PPIs, conversely, must appear outside the scope of negation. The need to be in the scope of a licenser creates a semantic and syntactic dependency, as the polarity item must be c-commanded by the licenser at some syntactic level. Polarity, therefore, is a true interface phenomenon, raising questions of well-formedness that depend on both semantics and syntax. Nonveridical polarity contexts can be negative, but also non-monotonic, such as modal contexts, questions, other non-assertive contexts (imperatives, subjunctives), generic and habitual sentences, and disjunction. Some NPIs and FCIs appear freely in these contexts in many languages, and some NPIs prefer negative contexts. Within negative licensers, a distinction can be made between classically and minimally negative contexts; there are no NPIs that appear only in minimally negative contexts. The distributions of NPIs and FCIs crosslinguistically can be understood in terms of general patterns, with individual differences due largely to the lexical semantic content of the polarity item paradigms. Three general patterns can be identified as possible lexical sources of polarity. The first is the presence of a dependent variable in the polarity item—a property characterizing NPIs and FCIs in many languages, including Greek, Mandarin, and Korean. Second, the polarity item may be scalar: English any and FCIs can be scalar, but Greek, Korean, and Mandarin NPIs are not.
Finally, it has been proposed that NPIs can be exhaustive, but exhaustivity is hard to precisely identify in a non-stipulative way, and does not characterize all NPIs. NPIs that are not exhaustive tend to be referentially vague, which means that the speaker uses them only if she is unable to identify a specific referent for them.

Article

Agustín Vicente and Ingrid L. Falkum

Polysemy is the phenomenon whereby a single word form is associated with two or more related senses. It is distinguished from monosemy, where one word form is associated with a single meaning, and from homonymy, where a single word form is associated with two or more unrelated meanings. Although the distinctions between polysemy, monosemy, and homonymy may seem clear at an intuitive level, they have proven difficult to draw in practice. Polysemy proliferates in natural language: virtually every word is polysemous to some extent. Still, the phenomenon has been largely ignored in the mainstream linguistics literature and in related disciplines such as philosophy of language. Yet polysemy is of direct relevance to linguistic and philosophical debates regarding lexical meaning representation, compositional semantics, and the semantics–pragmatics divide. Early accounts treated polysemy in terms of sense enumeration: each sense of a polysemous expression was represented individually in the lexicon, so that polysemy and homonymy were treated on a par. This approach has been strongly criticized on both theoretical and empirical grounds. Since at least the 1990s, most researchers have converged on the hypothesis that the senses of at least many polysemous expressions derive from a single meaning representation, though the status of this representation is a matter of vivid debate: are the lexical representations of polysemous expressions informationally poor and underspecified with respect to their different senses, or do they have to be informationally rich in order to store and generate all these polysemous senses? Alternatively, senses might be computed from a literal, primary meaning via semantic or pragmatic mechanisms such as coercion, modulation, or ad hoc concept construction (including metaphorical and metonymic extension)—mechanisms that also appear to play a role in explaining how polysemy arises and how it feeds lexical semantic change.

Article

Salvador Valera

Polysemy and homonymy are traditionally described in the context of paradigmatic lexical relations. Unlike monosemy, in which one meaning is associated with one form, and unlike synonymy, in which one meaning is associated with several forms, in polysemy and homonymy several meanings are associated with one form. The classical view treats polysemy and homonymy as a binary opposition whereby the various meanings of one form are described either as belonging to one word (polysemy) or to as many words as there are meanings (homonymy). In this approach, the decision is made according to whether the meanings can be traced to one source or to two different sources. This classical view no longer prevails in the literature as it did in the past. The most extreme revisions have questioned the descriptive synchronic difference between polysemy and homonymy, or have subsumed the separation under a general use of one term (homophony) and then drawn distinctions within it according to meaning and distribution. A more widespread reinterpretation of the classical opposition is in terms of a gradient along which polysemy and homonymy arrange themselves on a continuum. Such a gradient places formally identical units at different points according to their degree of semantic proximity and degree of entrenchment (the latter understood as the degree to which a form recalls a semantic content and is activated in a speaker’s mind). The granularity of this type of gradient varies across specific proposals, but essentially the representation ranges from the closest semantic proximity and the highest degree of entrenchment (polysemy) to the most obscure proximity and the lowest degree of entrenchment (homonymy).

Article

Gianina Iordăchioaia

In linguistics, the study of quantity is concerned with the behavior of expressions that refer to amounts in terms of the internal structure of objects and events, their spatial or temporal extension (as duration and boundedness), and their qualifying properties, as well as how these aspects interact with each other and with other linguistic phenomena. Quantity is primarily manifest in language in the lexical categories of noun, verb, and adjective/adverb. For instance, the distinction between mass and count nouns is essentially quantitative: it indicates how nominal denotation is quantized—as substance (e.g., water, sand) or as an atomic individual (e.g., book, boy). Similarly, the aspectual classes of verbs, such as states (know), activities (run), accomplishments (drown), achievements (notice), and semelfactives (knock), represent quantitatively different types of events. Adjectives and adverbs may lexically express quantities in relation to individuals and events, respectively (e.g., little, enough, much, often), and one might argue that numerals (two, twenty) are intrinsically quantitative expressions. Quantitative derivation refers to the use of derivational affixes to encode quantity in language. For instance, the English suffix -ful attaches to a noun N1 to derive another noun N2, such that N2 denotes the quantity that fits in the container denoted by N1. N2 also has a special use in quantitative constructions: compare hand with a handful of berries. The challenge for the linguistic description of quantity is that it often combines with other linguistic notions such as evaluation, intensification, and quality, and it does not have a specific unitary realization—it usually rides on other, more established notions. Quantitative affixes either have limited productivity or serve primarily to express other semantic notions.
For instance, the German suffix ‑schaft typically forms abstract nouns as in Vaterschaft ‘fatherhood’, but has a (quantity-related) collective meaning in Lehrerschaft ‘lecturer staff’; compare English -hood in childhood and the collective neighborhood. This diversity makes quantity difficult to capture systematically, in spite of its pervasiveness as a semantic notion.

Article

Deirdre Wilson

Relevance theory is a cognitive approach to pragmatics which starts from two broadly Gricean assumptions: (a) that much human communication, both verbal and non-verbal, involves the overt expression and inferential recognition of intentions, and (b) that in inferring these intentions, the addressee presumes that the communicator’s behavior will meet certain standards, which for Grice are based on a Cooperative Principle and maxims, and for relevance theory are derived from the assumption that, as a result of constant selection pressures in the course of human evolution, both cognition and communication are relevance-oriented. Relevance is defined in terms of cognitive (or contextual) effects and processing effort: other things being equal, the greater the cognitive effects and the smaller the processing effort, the greater the relevance. A long-standing aim of relevance theory has been to show that building an adequate theory of communication involves going beyond Grice’s notion of speaker’s meaning. Another is to provide a conceptually unified account of how a much broader variety of communicative acts than Grice was concerned with—including cases of both showing that and telling that—are understood. The resulting pragmatic theory differs from Grice’s in several respects. It sees explicit communication as much richer and more inferential than Grice thought, with encoded sentence meanings providing no more than clues to the speaker’s intentions. It rejects the close link that Grice saw between implicit communication and (real or apparent) maxim violation, showing in particular how figurative utterances might arise naturally and spontaneously in the course of communication. It offers an account of vagueness or indeterminacy in communication, which is often abstracted away from in more formally oriented frameworks. 
It investigates the role of context in comprehension, and shows how tentative hypotheses about the intended combination of explicit content, contextual assumptions, and implicatures might be refined and mutually adjusted in the course of the comprehension process in order to satisfy expectations of relevance. Relevance theory treats the borderline between semantics and pragmatics as co-extensive with the borderline between (linguistic) decoding and (pragmatic) inference. It sees encoded sentence meanings as typically fragmentary and incomplete, and as having to undergo inferential enrichment or elaboration in order to yield fully propositional forms. It reanalyzes Grice’s conventional implicatures—which he saw as semantic but non-truth-conditional aspects of the meaning of words like but and so—as encoding procedural information with dedicated pragmatic or more broadly cognitive functions, and extends the notion of procedural meaning to a range of further items such as pronouns, discourse particles, mood indicators, and affective intonation.

Article

Veneeta Dayal and Deepak Alok

Natural language allows questioning into embedded clauses. One strategy for doing so involves structures like the following: [CP-1 whi [TP DP V [CP-2 … ti …]]], where a wh-phrase that thematically belongs to the embedded clause appears in the matrix scope position. A possible answer to such a question must specify values for the fronted wh-phrase. This is the extraction strategy seen in languages like English. An alternative strategy involves a structure with a distinct wh-phrase in the matrix clause. It is manifested in two types of structures. One is a close analog of extraction, except for the extra wh-phrase: [CP-1 whi [TP DP V [CP-2 whj [TP … tj …]]]]. The other simply juxtaposes two questions, rather than syntactically subordinating the second one: [CP-3 [CP-1 whi [TP …]] [CP-2 whj [TP …]]]. In both versions of the second strategy, the wh-phrase in CP-1 is invariant, typically corresponding to the wh-phrase used to question propositional arguments. There is no restriction on the type or number of wh-phrases in CP-2. Possible answers must specify values for all the wh-phrases in CP-2. This strategy is variously known as scope marking, partial wh-movement, or expletive wh questions. Both strategies can occur in the same language. German, for example, instantiates all three possibilities: extraction, subordinated scope marking, and sequential scope marking. The scope marking strategy is also manifested in in-situ languages. Scope marking has been the subject of 30 years of research, and much is known at this time about its syntactic and semantic properties. Its pragmatic properties, however, are relatively under-studied. The acquisition of scope marking, in relation to extraction, is another area of ongoing research. One reason scope marking has intrigued linguists is that it seems to defy central tenets about the nature of wh scope taking.
For example, it presents an apparent mismatch between the number of wh expressions in the question and the number of expressions whose values are specified in the answer. It poses a challenge for our understanding of how syntactic structure feeds semantic interpretation and how alternative strategies with similar functions relate to each other.

Article

A secondary predicate is a nonverbal predicate which is typically optional and which shares its argument with the sentence’s main verb (e.g., cansada ‘tired’ in Portuguese Ela chega cansada ‘She arrives tired’). A basic distinction within the class of adjunct secondary predicates is that between depictives and resultatives. Depictives, such as cansada in the Portuguese example, describe the state of an argument during the event denoted by the verb. Typically, Romance depictives morphologically agree with their argument in gender and number (as in the case of cansada). Resultatives, such as flat in John hammered the metal flat, describe the state of an argument which results from the event denoted by the verb. Resultatives come in different types, and strong resultatives, such as flat in the English example, are missing in Romance languages. Although strong resultatives are absent, Romance languages possess other constructions which express a sense of resultativity: spurious resultatives, where the verb and the resultative predicate are linked because the manner of carrying out the action denoted by the verb leads to a particular resultant state (e.g., Italian Mia figlia ha cucito la gonna troppo stretta ‘My daughter sewed the skirt too tight’), and, to a much lesser extent, weak resultatives, where the meaning of the verb and the meaning of the resultative predicate are related (the resultative predicate specifies a state that is already contained in the verb’s meaning, e.g., French Marie s’est teint les cheveux noirs ‘Marie dyed her hair black’). In Romance languages the distinction between participant-oriented secondary predicates and event-oriented adjectival adverbs is not always clear. On the formal side, the distinction is blurred when (a) adjectival adverbs exhibit morphological agreement (despite their event orientation) or (b) secondary predicates do not agree with the argument they predicate over.
On the semantic side, one and the same string may be open to interpretation as a secondary predicate or as an adjectival adverb (e.g., Spanish Pedro gritó colérico ‘Pedro screamed furious/furiously’).

Article

Elizabeth Closs Traugott

Traditional approaches to semantic change typically focus on outcomes of meaning change and list types of change such as metaphoric and metonymic extension, broadening and narrowing, and the development of positive and negative meanings. Examples are usually considered out of context, and are lexical members of nominal and adjectival word classes. However, language is a communicative activity that is highly dependent on context, whether that of the ongoing discourse or of social and ideological changes. Much recent work on semantic change has focused, not on results of change, but on pragmatic enabling factors for change in the flow of speech. Attention has been paid to the contributions of cognitive processes, such as analogical thinking, production of cues as to how a message is to be interpreted, and perception or interpretation of meaning, especially in grammaticalization. Mechanisms of change such as metaphorization, metonymization, and subjectification have been among topics of special interest and debate. The work has been enabled by the fine-grained approach to contextual data that electronic corpora allow.

Article

Francis Jeffry Pelletier

Most linguists have heard of semantic compositionality. Some will have heard that it is the fundamental truth of semantics. Others will have been told that it is so thoroughly and completely wrong that it is astonishing that it is still being taught. The present article attempts to explain all this. Much of the discussion of semantic compositionality takes place in three arenas that are rather insulated from one another: (a) philosophy of mind and language, (b) formal semantics, and (c) cognitive linguistics and cognitive psychology. A truly comprehensive overview of the writings in all these areas is not possible here. However, this article does discuss some of the work that occurs in each of these areas. A bibliography of general works, and some Internet resources, will help guide the reader to some further, undiscussed works (including further material in all three categories).

Article

Philippe Schlenker, Emmanuel Chemla, and Klaus Zuberbühler

Rich data gathered in experimental primatology in the last 40 years are beginning to benefit from analytical methods used in contemporary linguistics, especially in the area of semantics and pragmatics. These methods have started to clarify five questions: (i) What morphology and syntax, if any, do monkey calls have? (ii) What is the ‘lexical meaning’ of individual calls? (iii) How are the meanings of individual calls combined? (iv) How do calls or call sequences compete with each other when several are appropriate in a given situation? (v) How did the form and meaning of calls evolve? Four case studies from this emerging field of ‘primate linguistics’ provide initial answers, pertaining to Old World monkeys (putty-nosed monkeys, Campbell’s monkeys, and colobus monkeys) and New World monkeys (black-fronted Titi monkeys). The morphology mostly involves simple calls, but in at least one case (Campbell’s -oo) one finds a root–suffix structure, possibly with a compositional semantics. The syntax is in all clear cases simple and finite-state. With respect to meaning, nearly all cases of call concatenation can be analyzed as being semantically conjunctive. But a key question concerns the division of labor between semantics, pragmatics, and the environmental context (‘world’ knowledge and context change). An apparent case of dialectal variation in the semantics (Campbell’s krak) can arguably be analyzed away if one posits sufficiently powerful mechanisms of competition among calls, akin to scalar implicatures. An apparent case of noncompositionality (putty-nosed pyow–hack sequences) can be analyzed away if one further posits a pragmatic principle of ‘urgency’. Finally, rich Titi sequences in which two calls are re-arranged in complex ways so as to reflect information about both predator identity and location are argued not to involve a complex syntax/semantics interface, but rather a fine-grained interaction between simple call meanings and the environmental context. 
With respect to call evolution, the remarkable preservation of call form and function over millions of years should make it possible to lay the groundwork for an evolutionary monkey linguistics, illustrated with cercopithecine booms.

Article

Floris Roelofsen

This survey article discusses two basic issues that semantic theories of questions face. The first is how to conceptualize and formally represent the semantic content of questions. This issue arises in particular because the standard truth-conditional notion of meaning, which has been fruitful in the analysis of declarative statements, is not applicable to questions. This is because questions are not naturally construed as being true or false. Instead, it has been proposed that the semantic content of a question must be characterized in terms of its answerhood or resolution conditions. This article surveys a number of theories which develop this basic idea in different ways, focusing on so-called proposition-set theories (alternative semantics, partition semantics, and inquisitive semantics). The second issue that will be considered here concerns questions that are embedded within larger sentences. Within this domain, one important puzzle is why certain predicates can take both declarative and interrogative complements (e.g., Bill knows that Mary called / Bill knows who called), while others take only declarative complements (e.g., Bill thinks that Mary called / *Bill thinks who called) or only interrogative complements (e.g., Bill wonders who called / *Bill wonders that Mary called). We compare two general approaches that have been pursued in the literature. One assumes that declarative and interrogative complements differ in semantic type. On this approach, the fact that predicates like think do not take interrogative complements can be accounted for by assuming that such complements do not have the semantic type that think selects for. The other approach treats the two kinds of complement as having the same semantic type, and seeks to connect the selectional restrictions of predicates like think to other semantic properties (e.g., the fact that think is neg-raising).

Article

Miguel Casas Gómez and Martin Hummel

Structural semantics is a primarily European structural linguistic approach to the content level of language which basically derives from two historical sources. The main inspiration stems from Ferdinand de Saussure’s Cours de linguistique générale (1916), where the Genevan linguist also formulates the fundamental principles of semantic analysis: the twofold character of the linguistic sign, the inner determination of its content by the—allegedly autonomous—linguistic system, the consequent exclusion of the extralinguistic reality, the notion of opposition inside the system, and the concept of “associative relations” in the domain of semantics. This tradition was later refined by Hjelmslev and Coseriu, who introduced theoretical and methodological strength and rigor, suggesting systematic analyses in terms of semantic features linked by (binary) opposition. The second source of inspiration was the more holistic concept elaborated by Wilhelm von Humboldt, who saw language as a means of structuring the world. In the second half of the 20th century, structural semantics was mainstream semantics (to the extent that semantic analysis was accepted at all). A long series of authors deepened these historical traditions in theoretical and empirical studies, some of them suggesting secondary and/or partial models. Finally, prototype semantics and cognitive semantics strove to downgrade structural semantics by turning back to a more holistic conception of meaning including the speakers’ knowledge of the world, although not without introducing the alternative structural notion of “network.”

Article

Peter Svenonius

Syntactic features are formal properties of syntactic objects which determine how they behave with respect to syntactic constraints and operations (such as selection, licensing, agreement, and movement). Syntactic features can be contrasted with properties which are purely phonological, morphological, or semantic, but many features are relevant both to syntax and morphology, or to syntax and semantics, or to all three components. The formal theory of syntactic features builds on the theory of phonological features, and normally takes morphosyntactic features (those expressed in morphology) to be the central case, with other, possibly more abstract features being modeled on the morphosyntactic ones. Many aspects of the formal nature of syntactic features are currently unresolved. Some traditions (such as HPSG) make use of rich feature structures as an analytic tool, while others (such as Minimalism) pursue simplicity in feature structures in the interest of descriptive restrictiveness. Nevertheless, features are essential to all explicit analyses.

Article

Chinese has been known as a language without grammaticalized tense. This does not mean, however, that native speakers of Chinese cannot tell the temporal location of a Chinese sentence. In addition to temporal phrases, which are often taken to identify the temporal location of a Chinese sentence, the aspectual value of a sentence, whether situation aspect or viewpoint aspect, plays a significant role in this issue. Basically, a telic event receives a past interpretation unless explicitly specified otherwise, whereas an atelic event gets a present reading until overridden. In addition, based on examples where the past affects the whole discourse rather than single sentences, it is possible that tense is a discourse-level feature in Chinese. On the other hand, Chinese has a rich aspectual system, including four aspect markers: perfective le, experiential guò, durative zhe, and progressive zài. The former two are perfective markers and the latter two are imperfective markers. Perfective le interacts with eventualities of different situation types to yield different readings. With an accomplishment, perfective le receives a completive or terminative reading. With an achievement, perfective le gets a completive reading. Perfective le plus an activity forms an incomplete sentence. Together with a state, perfective le induces an inchoative reading. Most theories that explain the diverse readings of perfective le resort to an obvious, differentiating point in the temporal schema of a situation: either the (natural) final endpoint or the initial point. Experiential guò has several semantic properties, including at least one occurrence, a class meaning, discontinuity, compatibility only with recurrable situations, and temporal independence. There are two major accounts of the semantics of experiential guò: the temporal quantification account and the account on which terminability is its sole inherent feature. On the former, experiential guò is considered a temporal quantifier, and all the properties follow from the semantic properties of a quantifier. On the latter, terminability is argued to be the sole inherent semantic property of experiential guò, and all the properties are derived from discontinuity. The semantics of the two imperfective markers is less complicated. It is generally accepted that progressive zài presents an unbounded, ongoing event, whereas durative zhe introduces a resultative state. One possible further semantic distinction between progressive zài and durative zhe is that the former has an instant reading whereas the latter has an interval reading. Moreover, the rhetorical function of durative zhe needs to be considered so that a satisfactory explanation of the V1 zhe V2 construction can be achieved.

Article

Annie Zaenen

Hearers and readers make inferences on the basis of what they hear or read. These inferences are partly determined by the linguistic form that the writer or speaker chooses to give to her utterance. The inferences can be about the state of the world that the speaker or writer wants the hearer or reader to take to be pertinent, or they can be about the attitude of the speaker or writer toward this state of affairs. The attention here goes to inferences of the first type. Research in semantics and pragmatics has isolated a number of linguistic phenomena that make specific contributions to the process of inference. Broadly, entailments of asserted material, presuppositions (e.g., factive constructions), and invited inferences (especially scalar implicatures) can be distinguished. While we make these inferences all the time, they have been studied only piecemeal in theoretical linguistics. When attempts are made to build natural language understanding systems, the need for a more systematic and wholesale approach to the problem is felt. Some of the approaches developed in Natural Language Processing are based on linguistic insights, whereas others use methods that do not require (full) semantic analysis. In this article, I give an overview of the main linguistic issues and of a variety of computational approaches, especially those stimulated by the RTE challenges first proposed in 2004.

Article

Theme  

Eva Hajičová

In the linguistic literature, the term theme has several interpretations, one of which relates to discourse analysis and two others to sentence structure. In a more general (or global) sense, one may speak about the theme or topic (or topics) of a text (or discourse), that is, analyze relations going beyond the sentence boundary and try to identify some characteristic subject(s) for the text (discourse) as a whole. This analysis belongs mostly to the domain of information retrieval and only partially takes linguistically based considerations into account. The main linguistically based usage of the term theme concerns relations within the sentence. Theme is understood to be one of the (syntactico-)semantic relations and is used as the label of one of the arguments of the verb; the whole network of these relations is called thematic relations or roles (or, in the terminology of Chomskyan generative theory, theta roles and theta grids). Alternatively, from the point of view of the communicative function of language as reflected in the information structure of the sentence, the theme (or topic) of a sentence is distinguished from the rest of it (the rheme, or focus, as the case may be), and attention is paid to the semantic consequences of the dichotomy (especially in relation to presuppositions and negation) and to its realization (morphological, syntactic, prosodic) in the surface shape of the sentence. In some approaches to morphosyntactic analysis, the term theme is also used to refer to the part of the word to which inflections are added, typically composed of the root and an added vowel.

Article

Chinese nominal phrases are typologically distinct from their English counterparts in many respects. Most strikingly, Chinese features a general classifier system, which not only helps to categorize nouns but is also bound up with quantification. Moreover, it has neither noncontroversial plural markers nor (in)definite markers, and its bare nouns are allowed in various argument positions. As a consequence, Chinese is sometimes characterized as a classifier language, as an argumental language, or as an article-less language. One question that arises is whether these apparently different but related properties point to a single underlying issue: that it is the semantics of nouns that is responsible for all these peculiarities of Mandarin nominal phrases. It has been claimed that Chinese nouns are born as kind terms, from which object-level readings can be derived, being either existential or definite. Nevertheless, the existence of classifiers in Chinese is claimed to be independent of the kind denotation of its bare nouns. Within the general area of noun semantics, a number of other semantic issues have generated much interest. One concerns the availability of the mass/count distinction in Mandarin nominal phrases. Another has to do with the semantics of classifiers: are classifiers required by the noun semantics or by the numeral semantics when they occur in the syntactic context Numeral/Quantifier-Classifier-Noun? Finally, how is the semantic notion of definiteness understood in article-less languages like Mandarin Chinese? Should its denotation be characterized in terms of uniqueness or familiarity?

Article

Stergios Chatzikyriakidis and Robin Cooper

Type theory is a regime for classifying objects (including events) into categories called types. It was originally designed to overcome problems in the foundations of mathematics arising from Russell’s paradox. It has made an immense contribution to the study of logic and computer science and has also played a central role in formal semantics for natural languages since the initial work of Richard Montague, which built on the typed λ-calculus. More recently, type theories in the tradition created by Per Martin-Löf have presented an important alternative to Montague’s type theory for semantic analysis. These more modern type theories yield a rich collection of types, which take on the role of representing semantic content rather than simply structuring the universe in order to avoid paradoxes.