
Article

Iconicity  

Irit Meir and Oksana Tkachman

Iconicity is a relationship of resemblance or similarity between the two aspects of a sign: its form and its meaning. An iconic sign is one whose form resembles its meaning in some way. The opposite of iconicity is arbitrariness. In an arbitrary sign, the association between form and meaning is based solely on convention; there is nothing in the form of the sign that resembles aspects of its meaning. The Hindu-Arabic numerals 1, 2, 3 are arbitrary, because their current form does not correlate with any aspect of their meaning. In contrast, the Roman numerals I, II, III are iconic, because the number of occurrences of the sign I correlates with the quantity that the numerals represent. Because iconicity has to do with the properties of signs in general and not only those of linguistic signs, it plays an important role in the field of semiotics—the study of signs and signaling. However, language is the most pervasive symbolic communicative system used by humans, and the notion of iconicity plays an important role in characterizing the linguistic sign and linguistic systems. Iconicity is also central to the study of literary uses of language, such as prose and poetry. There are various types of iconicity, since the form of a sign may resemble aspects of its meaning in several ways: it may create a mental image of the concept (imagic iconicity), or its structure and the arrangement of its elements may resemble the structural relationship between components of the concept represented (diagrammatic iconicity). An example of the first type is the word cuckoo, whose sounds resemble the call of the bird, or a sign such as RABBIT in Israeli Sign Language, whose form—the hands representing the rabbit's long ears—resembles a visual property of that animal. An example of diagrammatic iconicity is vēnī, vīdī, vīcī, where the order of clauses in a discourse is understood as reflecting the sequence of events in the world. Iconicity is found on all linguistic levels: phonology, morphology, syntax, semantics, and discourse. It is found both in spoken languages and in sign languages. However, sign languages, because of the visual-gestural modality through which they are transmitted, are much richer in iconic devices, and therefore offer a rich array of topics and perspectives for investigating iconicity and the interaction between iconicity and language structure.

Article

The Compositional Semantics of Modification  

Sebastian Bücking

Modification is a combinatorial semantic operation between a modifier and a modifiee. Take, for example, vegetarian soup: the attributive adjective vegetarian modifies the nominal modifiee soup and thus constrains the range of potential referents of the complex expression to soups that are vegetarian. Similarly, in Ben is preparing a soup in the camper, the adverbial in the camper modifies the preparation by locating it. Notably, modifiers can have fairly drastic effects; in fake stove, the attribute fake has the effect that the complex expression singles out objects that seem to be stoves, but are not. Intuitively, modifiers contribute additional information that is not explicitly called for by the target the modifier relates to. Speaking in terms of logic, this roughly says that modification is an endotypical operation; that is, it does not change the arity, or logical type, of the modified target constituent. Speaking in terms of syntax, this predicts that modifiers are typically adjuncts and thus do not change the syntactic distribution of their respective target; therefore, modifiers can be easily iterated (see, for instance, spicy vegetarian soup or Ben prepared a soup in the camper yesterday). This initial characterization sets modification apart from other combinatorial operations such as argument satisfaction and quantification: combining a soup with prepare satisfies an argument slot of the verbal head and thus reduces its arity (see, for instance, *prepare a soup a quiche). Quantification, as for example in the combination of the quantifier every with the noun soup, maps a nominal property onto a quantifying expression with a different distribution (see, for instance, *a every soup). Their comparatively loose connection to their hosts renders modifiers a flexible, though certainly not random, means within combinatorial meaning constitution. The foundational question is how to work their being endotypical into a full-fledged compositional analysis. On the one hand, modifiers can be considered endotypical functors by virtue of their lexical endowment; for instance, vegetarian would be born a higher-order function from predicates to predicates. On the other hand, modification can be considered a rule-based operation; for instance, vegetarian would denote a simple predicate from entities to truth values that receives its modifying endotypical function only by virtue of a separate modification rule. In order to assess this and related controversies empirically, research on modification pays particular attention to interface questions such as the following: How do structural conditions and the modifying function conspire in establishing complex interpretations? What roles do ontological information and fine-grained conceptual knowledge play in the course of concept combination?
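To make the contrast between the two analyses concrete, here is a minimal sketch in Python, with functions standing in for typed λ-terms; the example entities and the names vegetarian_as_functor and predicate_modification are illustrative assumptions, not part of the article.

    # Predicates of type <e,t>: functions from entities to truth values.
    soup = lambda x: x in {"lentil_soup", "goulash"}
    vegetarian_things = {"lentil_soup", "falafel"}

    # Option 1: the modifier is lexically a higher-order functor of type
    # <<e,t>,<e,t>>, mapping the head predicate to a new predicate.
    def vegetarian_as_functor(pred):
        return lambda x: pred(x) and x in vegetarian_things

    # Option 2: the modifier denotes a simple <e,t> predicate, and a separate
    # rule of predicate modification intersects modifier and modifiee.
    vegetarian_as_predicate = lambda x: x in vegetarian_things

    def predicate_modification(mod, head):
        return lambda x: mod(x) and head(x)

    vegetarian_soup_1 = vegetarian_as_functor(soup)
    vegetarian_soup_2 = predicate_modification(vegetarian_as_predicate, soup)

    # For intersective cases the two routes agree; a non-intersective modifier
    # like 'fake' fits Option 1 directly but not plain intersection.
    assert vegetarian_soup_1("lentil_soup") and vegetarian_soup_2("lentil_soup")
    assert not vegetarian_soup_1("goulash") and not vegetarian_soup_2("goulash")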

Article

Computational Semantics  

Katrin Erk

Computational semantics performs automatic meaning analysis of natural language. Research in computational semantics designs meaning representations and develops mechanisms for automatically assigning those representations and reasoning over them. Computational semantics is not a single monolithic task but consists of many subtasks, including word sense disambiguation, multi-word expression analysis, semantic role labeling, the construction of sentence semantic structure, coreference resolution, and the automatic induction of semantic information from data. The development of manually constructed resources has been vastly important in driving the field forward. Examples include WordNet, PropBank, FrameNet, VerbNet, and TimeBank. These resources specify the linguistic structures to be targeted in automatic analysis, and they provide high-quality human-generated data that can be used to train machine learning systems. Supervised machine learning based on manually constructed resources is a widely used technique. A second core strand has been the induction of lexical knowledge from text data. For example, words can be represented through the contexts in which they appear (called distributional vectors or embeddings), such that semantically similar words have similar representations. Or semantic relations between words can be inferred from patterns of words that link them. Wide-coverage semantic analysis always needs more data, both lexical knowledge and world knowledge, and automatic induction at least alleviates the problem. Compositionality is a third core theme: the systematic construction of structural meaning representations of larger expressions from the meaning representations of their parts. The representations typically use logics of varying expressivity, which makes them well suited to performing automatic inferences with theorem provers. Manual specification and automatic acquisition of knowledge are closely intertwined. Manually created resources are automatically extended or merged. The automatic induction of semantic information is guided and constrained by manually specified information, which is much more reliable. And for restricted domains, the construction of logical representations is learned from data. It is at the intersection of manual specification and machine learning that some of the current larger questions of computational semantics are located. For instance, should we build general-purpose semantic representations, or is lexical knowledge simply too domain-specific, and would we be better off learning task-specific representations every time? When performing inference, is it more beneficial to have the solid ground of a human-generated ontology, or is it better to reason directly with text snippets for more fine-grained and gradual inference? Do we obtain a better and deeper semantic analysis as we use better and deeper manually specified linguistic knowledge, or is the future in powerful learning paradigms that learn to carry out an entire task from natural language input and output alone, without pre-specified linguistic knowledge?
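As a concrete illustration of the distributional idea, here is a minimal Python sketch with invented co-occurrence counts; the vocabulary, the vectors, and the cosine helper are assumptions for exposition, not data from any of the resources named above.

    import math

    # Toy distributional representations: each word is a vector of counts of
    # the context words it co-occurs with (the counts are invented).
    contexts = ["eat", "bowl", "bank", "money"]
    vectors = {
        "soup":    [9, 6, 0, 0],
        "stew":    [8, 5, 0, 1],
        "finance": [0, 0, 7, 9],
    }

    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norm if norm else 0.0

    # Words that occur in similar contexts receive similar vectors.
    print(cosine(vectors["soup"], vectors["stew"]))     # high (semantically close)
    print(cosine(vectors["soup"], vectors["finance"]))  # low  (semantically distant)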

Article

Conversational Implicature  

Nicholas Allott

Conversational implicatures (i) are implied by the speaker in making an utterance; (ii) are part of the content of the utterance, but (iii) do not contribute to direct (or explicit) utterance content; and (iv) are not encoded by the linguistic meaning of what has been uttered. In (1), Amelia asserts that she is on a diet, and implicates something different: that she is not having cake.

(1) Benjamin: Are you having some of this chocolate cake?
    Amelia: I’m on a diet.

Conversational implicatures are a subset of the implications of an utterance: namely those that are part of utterance content. Within the class of conversational implicatures, there are distinctions between particularized and generalized implicatures; implicated premises and implicated conclusions; and weak and strong implicatures. An obvious question is how implicatures are possible: how can a speaker intentionally imply something that is not part of the linguistic meaning of the phrase she utters, and how can her addressee recover that utterance content? Working out what has been implicated is not a matter of deduction, but of inference to the best explanation. What is to be explained is why the speaker has uttered the words that she did, in the way and in the circumstances that she did. Grice proposed that rational talk exchanges are cooperative and are therefore governed by a Cooperative Principle (CP) and conversational maxims: hearers can reasonably assume that rational speakers will attempt to cooperate and that rational cooperative speakers will try to make their contribution truthful, informative, relevant and clear, inter alia, and these expectations therefore guide the interpretation of utterances. On his view, since addressees can infer implicatures, speakers can take advantage of their ability, conveying implicatures by exploiting the maxims. Grice’s theory aimed to show how implicatures could in principle arise. In contrast, work in linguistic pragmatics has attempted to model their actual derivation. Given the need for a cognitively tractable decision procedure, both the neo-Gricean school and work on communication in relevance theory propose a system with fewer principles than Grice’s. Neo-Gricean work attempts to reduce Grice’s array of maxims to just two (Horn) or three (Levinson), while Sperber and Wilson’s relevance theory rejects maxims and the CP and proposes that pragmatic inference hinges on a single communicative principle of relevance. Conversational implicatures typically have a number of interesting properties, including calculability, cancelability, nondetachability, and indeterminacy. These properties can be used to investigate whether a putative implicature is correctly identified as such, although none of them provides a fail-safe test. A further test, embedding, has also been prominent in work on implicatures. A number of phenomena that Grice treated as implicatures would now be treated by many as pragmatic enrichment contributing to the proposition expressed. But Grice’s postulation of implicatures was a crucial advance, both for its theoretical unification of apparently diverse types of utterance content and for the attention it drew to pragmatic inference and the division of labor between linguistic semantics and pragmatics in theorizing about verbal communication.

Article

Nominal Reference  

Donka Farkas

Nominal reference is central to both linguistic semantics and philosophy of language. On the theoretical side, both philosophers and linguists wrestle with the problem of how the link between nominal expressions and their referents is to be characterized, and what formal tools are most appropriate to deal with this issue. The problem is complex because nominal expressions come in a large variety of forms, from simple proper names, pronouns, or bare nouns (Jennifer, they, books) to complex expressions involving determiners and various quantifiers (the/every/no/their answer). While the reference of such expressions is varied, their basic syntactic distribution as subjects or objects of various types, for instance, is homogeneous. Important advances in understanding this tension were made with the advent of the work of R. Montague and that of his successors. The problems involved in understanding the relationship between pronouns and their antecedents in discourse have led to another fundamental theoretical development, namely that of dynamic semantics. On the empirical side, issues at the center of both linguistic and philosophical investigations concern how best to characterize the difference between definite and indefinite nominals, and, more generally, how to understand the large variety of determiner types found both within a language and cross-linguistically. These considerations have led to refining the definite/indefinite contrast to include fine-grained specificity distinctions that have been shown to be relevant to various morphosyntactic phenomena across the world’s languages. Considerations concerning nominal reference are thus relevant not only to semantics but also to morphology and syntax. Some questions within the domain of nominal reference have grown into rich subfields of inquiry. This is the case with generic reference, the study of pronominal reference, the study of quantifiers, and the study of the semantics of nominal number marking.
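In the Montagovian tradition mentioned above, determiners are standardly analyzed as relations between sets (generalized quantifiers); the following minimal Python sketch illustrates that idea with invented sets, and is not drawn from the article itself.

    # Determiners as relations between a restrictor set and a scope set.
    students = {"ana", "bo", "chen"}
    smokers = {"bo"}

    def every(restrictor, scope):
        return restrictor <= scope          # every A is B: A is a subset of B

    def some(restrictor, scope):
        return bool(restrictor & scope)     # some A is B: non-empty intersection

    def no(restrictor, scope):
        return not (restrictor & scope)     # no A is B: empty intersection

    print(every(students, smokers))  # False: not every student smokes
    print(some(students, smokers))   # True
    print(no(students, smokers))     # False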

Article

Type Theory for Natural Language Semantics  

Stergios Chatzikyriakidis and Robin Cooper

Type theory is a regime for classifying objects (including events) into categories called types. It was originally designed to overcome problems in the foundations of mathematics relating to Russell’s paradox. It has made an immense contribution to the study of logic and computer science and has also played a central role in formal semantics for natural languages since the initial work of Richard Montague building on the typed λ-calculus. More recently, type theories following in the tradition created by Per Martin-Löf have presented an important alternative to Montague’s type theory for semantic analysis. These more modern type theories yield a rich collection of types, which take on the role of representing semantic content rather than simply structuring the universe in order to avoid paradoxes.
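As a rough illustration of the Montague-style typing that the article contrasts with Martin-Löf-style theories, here is a minimal Python sketch of the basic types e and t, function types, and well-typed application; the encoding is an assumption chosen for exposition.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Basic:
        name: str          # "e" (entities) or "t" (truth values)

    @dataclass(frozen=True)
    class Func:
        dom: object        # a function type <dom, ran>
        ran: object

    E, T = Basic("e"), Basic("t")

    def apply_type(fun, arg):
        """Result type of applying fun to arg; raise if the application is ill-typed."""
        if isinstance(fun, Func) and fun.dom == arg:
            return fun.ran
        raise TypeError(f"cannot apply {fun} to {arg}")

    # An intransitive verb is <e,t>: applied to an entity it yields a truth value.
    print(apply_type(Func(E, T), E))                    # Basic(name='t')
    # A quantified subject like 'every student' is <<e,t>,t>.
    print(apply_type(Func(Func(E, T), T), Func(E, T)))  # Basic(name='t')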

Article

Noun-Modifying Clause Construction in Japanese  

Yoshiko Matsumoto

The noun-modifying clause construction (NMCC) in Japanese is a complex noun phrase in which a prenominal clause is dependent on the head noun. Naturally occurring instances of the construction demonstrate that a single structure, schematized as [[… predicate (finite/adnominal)] Noun], represents a wide range of semantic relations between the head noun and the dependent clause, encompassing some that would be expressed by structurally distinct constructions such as relative clauses, noun complement clauses, and other types of complex noun phrases in other languages, such as English. In that way, the Japanese NMCC demonstrates a clear case of the general noun-modifying construction (GNMCC), that is, an NMCC that has structural uniformity across interpretations that extend beyond the range of relative clauses. One of the notable properties of the Japanese NMCC is that the modifying clause may consist only of the predicate, reflecting the fact that referential density is moderate in Japanese—arguments of a predicate are not required to be overtly expressed either in the main clause or in the modifying clause. Another property of the Japanese NMCC is that there is no explicit marking in the construction that indicates the grammatical or semantic relation between the head noun and the modifying clause. The two major constituents are simply juxtaposed to each other. Successful construal of the intended interpretations of instances of such a construction, in the absence of explicit markings, likely relies on an aggregate of structural, semantic, and pragmatic factors, including the semantic content of the linguistic elements, verb valence information, and the interpreter’s real-world knowledge, in addition to the basic structural information. Researchers with different theoretical approaches have studied Japanese NMCCs or subsets thereof. Syntactic approaches, inspired by generative grammar, have focused mostly on relative clauses and aimed to identify universally recognized syntactic principles. Studies that take the descriptive approach have focused on detailed descriptions and the classification of a wide spectrum of naturally occurring instances of the construction in Japanese. The third and most recent group of studies has emphasized the importance of semantics and pragmatics in accounting for a wide variety of naturally occurring instances. The examination of Japanese NMCCs provides information about the nature of clausal noun modification and affords insights into languages beyond Japanese, as similar phenomena have reportedly been observed crosslinguistically to varying degrees.

Article

Polarity in the Semantics of Natural Language  

Anastasia Giannakidou

This article provides an overview of polarity phenomena in human languages. There are three prominent paradigms of polarity items: negative polarity items (NPIs), positive polarity items (PPIs), and free choice items (FCIs). What they all have in common is their limited distribution: they cannot occur just anywhere, but only inside the scope of a licenser, which for NPIs and FCIs is negation or, more broadly, a nonveridical operator; PPIs, conversely, must appear outside the scope of negation. The need to be in the scope of a licenser creates a semantic and syntactic dependency, as the polarity item must be c-commanded by the licenser at some syntactic level. Polarity, therefore, is a true interface phenomenon and raises questions of well-formedness that depend on both semantics and syntax. Nonveridical polarity contexts can be negative, but also non-monotonic, such as modal contexts, questions, other non-assertive contexts (imperatives, subjunctives), generic and habitual sentences, and disjunction. Some NPIs and FCIs appear freely in these contexts in many languages, and some NPIs prefer negative contexts. Within negative licensers, we make a distinction between classically and minimally negative contexts. There are no NPIs that appear only in minimally negative contexts. The distributions of NPIs and FCIs crosslinguistically can be understood in terms of general patterns, and there are individual differences due largely to the lexical semantic content of the polarity item paradigms. Three general patterns can be identified as possible lexical sources of polarity. The first is the presence of a dependent variable in the polarity item—a property characterizing NPIs and FCIs in many languages, including Greek, Mandarin, and Korean. Second, the polarity item may be scalar: English any and FCIs can be scalar, but Greek, Korean, and Mandarin NPIs are not. Finally, it has been proposed that NPIs can be exhaustive, but exhaustivity is hard to identify precisely in a non-stipulative way and does not characterize all NPIs. NPIs that are not exhaustive tend to be referentially vague, which means that the speaker uses them only if she is unable to identify a specific referent for them.
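The c-command requirement on licensing can be pictured with a small sketch. The Python code below checks whether a licenser leaf c-commands an NPI leaf in a toy binary tree; the tree encoding and the example sentences are illustrative assumptions, not the article's formalism.

    # Trees are either leaf strings or (left, right) pairs of subtrees.
    def leaves(tree):
        if isinstance(tree, str):
            return [tree]
        left, right = tree
        return leaves(left) + leaves(right)

    def c_commands(tree, a, b):
        """True if leaf a c-commands leaf b (b lies inside a's sister subtree)."""
        if isinstance(tree, str):
            return False
        left, right = tree
        if leaves(left) == [a]:             # a is the left daughter
            return b in leaves(right)
        if leaves(right) == [a]:            # a is the right daughter
            return b in leaves(left)
        return c_commands(left, a, b) or c_commands(right, a, b)

    # 'Mary did not see anyone': negation c-commands the NPI, so it is licensed.
    licensed = ("Mary", ("not", ("see", "anyone")))
    # '*Anyone did not see Mary': the NPI sits outside the scope of negation.
    unlicensed = ("anyone", ("not", ("see", "Mary")))

    print(c_commands(licensed, "not", "anyone"))    # True
    print(c_commands(unlicensed, "not", "anyone"))  # False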

Article

Polysemy  

Agustín Vicente and Ingrid L. Falkum

Polysemy is characterized as the phenomenon whereby a single word form is associated with two or more related senses. It is distinguished from monosemy, where one word form is associated with a single meaning, and homonymy, where a single word form is associated with two or more unrelated meanings. Although the distinctions between polysemy, monosemy, and homonymy may seem clear at an intuitive level, they have proven difficult to draw in practice. Polysemy proliferates in natural language: Virtually every word is polysemous to some extent. Still, the phenomenon has been largely ignored in the mainstream linguistics literature and in related disciplines such as philosophy of language. However, polysemy is a topic of relevance to linguistic and philosophical debates regarding lexical meaning representation, compositional semantics, and the semantics–pragmatics divide. Early accounts treated polysemy in terms of sense enumeration: each sense of a polysemous expression was represented individually in the lexicon, such that polysemy and homonymy were treated on a par. This approach has been strongly criticized on both theoretical and empirical grounds. Since at least the 1990s, most researchers have converged on the hypothesis that the senses of at least many polysemous expressions derive from a single meaning representation, though the status of this representation is a matter of lively debate: Are the lexical representations of polysemous expressions informationally poor and underspecified with respect to their different senses? Or do they have to be informationally rich in order to store and be able to generate all these polysemous senses? Alternatively, senses might be computed from a literal, primary meaning via semantic or pragmatic mechanisms such as coercion, modulation, or ad hoc concept construction (including metaphorical and metonymic extension), mechanisms that apparently also play a role in explaining how polysemy arises and how it is implicated in lexical semantic change.

Article

Semantics and Pragmatics of Monkey Communication  

Philippe Schlenker, Emmanuel Chemla, and Klaus Zuberbühler

Rich data gathered in experimental primatology in the last 40 years are beginning to benefit from analytical methods used in contemporary linguistics, especially in the area of semantics and pragmatics. These methods have started to clarify five questions: (i) What morphology and syntax, if any, do monkey calls have? (ii) What is the ‘lexical meaning’ of individual calls? (iii) How are the meanings of individual calls combined? (iv) How do calls or call sequences compete with each other when several are appropriate in a given situation? (v) How did the form and meaning of calls evolve? Four case studies from this emerging field of ‘primate linguistics’ provide initial answers, pertaining to Old World monkeys (putty-nosed monkeys, Campbell’s monkeys, and colobus monkeys) and New World monkeys (black-fronted Titi monkeys). The morphology mostly involves simple calls, but in at least one case (Campbell’s -oo) one finds a root–suffix structure, possibly with a compositional semantics. The syntax is in all clear cases simple and finite-state. With respect to meaning, nearly all cases of call concatenation can be analyzed as being semantically conjunctive. But a key question concerns the division of labor between semantics, pragmatics, and the environmental context (‘world’ knowledge and context change). An apparent case of dialectal variation in the semantics (Campbell’s krak) can arguably be analyzed away if one posits sufficiently powerful mechanisms of competition among calls, akin to scalar implicatures. An apparent case of noncompositionality (putty-nosed pyow–hack sequences) can be analyzed away if one further posits a pragmatic principle of ‘urgency’. Finally, rich Titi sequences in which two calls are re-arranged in complex ways so as to reflect information about both predator identity and location are argued not to involve a complex syntax/semantics interface, but rather a fine-grained interaction between simple call meanings and the environmental context. With respect to call evolution, the remarkable preservation of call form and function over millions of years should make it possible to lay the groundwork for an evolutionary monkey linguistics, illustrated with cercopithecine booms.
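The 'semantically conjunctive' analysis of call concatenation can be sketched in a few lines of Python: each call denotes a predicate over situations, and a sequence denotes the conjunction of its calls' meanings. The call names and situation features below are invented stand-ins, not the field data discussed in the article.

    # A situation is modeled as the set of features that hold in it.
    def call_meaning(required_features):
        return lambda situation: required_features <= situation

    alert_call  = call_meaning({"serious_alert"})    # hypothetical general alarm call
    raptor_call = call_meaning({"aerial_threat"})    # hypothetical raptor-related call

    def sequence_meaning(calls):
        """Conjunctive combination: a sequence is true of a situation iff every call is."""
        return lambda situation: all(call(situation) for call in calls)

    eagle_situation  = {"serious_alert", "aerial_threat", "eagle_nearby"}
    ground_situation = {"serious_alert", "ground_disturbance"}

    seq = sequence_meaning([alert_call, raptor_call])
    print(seq(eagle_situation))   # True: both conjuncts hold
    print(seq(ground_situation))  # False: no aerial threat in this situation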