1-7 of 7 Results

Keywords: inference

Article

Game theory provides formal means of representing and explaining action choices in social decision situations where the choices of one participant depend on the choices of another. Game theoretic pragmatics approaches language production and interpretation as a game in this sense. Patterns in language use are explained as optimal, rational, or at least nearly optimal or rational solutions to a communication problem. Three intimately related perspectives on game theoretic pragmatics are sketched here: (i) the evolutionary perspective explains language use as the outcome of some optimization process, (ii) the rationalistic perspective pictures language use as a form of rational decision-making, and (iii) the probabilistic reasoning perspective considers specifically speakers’ and listeners’ beliefs about each other. There are clear commonalities behind these three perspectives, and they may in practice blend into each other. At the heart of game theoretic pragmatics lies the idea that speaker and listener behavior, when it comes to using a language with a given semantic meaning, are attuned to each other. By focusing on the evolutionary or rationalistic perspective, we can then give a functional account of general patterns in our pragmatic language use. The probabilistic reasoning perspective invites modeling actual speaker and listener behavior, for example, as it shows in quantitative aspects of experimental data.
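
One widely used instance of the probabilistic reasoning perspective is the Rational Speech Act (RSA) family of models, in which a pragmatic listener reasons about a speaker who in turn reasons about a literal listener. The following is a minimal illustrative sketch of that recursion, not the model of any particular article; the utterances, meanings, uniform prior, and rationality parameter alpha are assumptions chosen only to make the example concrete.

```python
# Minimal Rational Speech Act (RSA) style sketch: a pragmatic listener
# reasons about a speaker, who reasons about a literal listener.
# All utterances, meanings, and parameters here are illustrative assumptions.
from math import exp

UTTERANCES = ["some", "all"]
MEANINGS = ["some-but-not-all", "all"]

# Literal semantics: the set of meanings each utterance is true of.
TRUE_OF = {
    "some": {"some-but-not-all", "all"},  # "some" is literally compatible with "all"
    "all": {"all"},
}

def normalize(d):
    total = sum(d.values())
    return {k: v / total for k, v in d.items()} if total else d

def literal_listener(u):
    # L0: condition a uniform prior over meanings on the literal truth of u.
    return normalize({m: 1.0 if m in TRUE_OF[u] else 0.0 for m in MEANINGS})

def speaker(m, alpha=4.0):
    # S1: prefer utterances under which the literal listener is more likely
    # to recover the intended meaning (soft-max with rationality alpha).
    scores = {u: literal_listener(u)[m] for u in UTTERANCES}
    return normalize({u: exp(alpha * s) if s > 0 else 0.0 for u, s in scores.items()})

def pragmatic_listener(u):
    # L1: infer which meaning would have led the speaker to choose u.
    return normalize({m: speaker(m)[u] for m in MEANINGS})

print(pragmatic_listener("some"))
```

Run on this toy lexicon, the pragmatic listener puts most of its probability on the some-but-not-all meaning after hearing some, so a scalar implicature falls out of the mutual attunement of speaker and listener behavior that the abstract describes.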

Article

Walter Bisang

Linguistic change not only affects the lexicon and the phonology of words but also operates on the grammar of a language. In this context, grammaticalization is concerned with the development of lexical items into markers of grammatical categories or, more generally, with the development of markers used for procedural cueing of abstract relationships out of linguistic items with concrete referential meaning. A well-known example is the English verb go in its function as a future marker, as in She is going to visit her friend. Phenomena like these are very frequent across the world’s languages and across many different domains of grammatical categories. In the last 50 years, research on grammaticalization has come up with a plethora of (a) generalizations, (b) models of how grammaticalization works, and (c) methodological refinements. On (a): Processes of grammaticalization develop gradually, step by step, and the sequence of the individual stages follows certain clines as they have been generalized from cross-linguistic comparison (unidirectionality). Even though there are counterexamples that go against the directionality of various clines, their number seems smaller than assumed in the late 1990s. On (b): Models or scenarios of grammaticalization integrate various factors. Depending on the theoretical background, grammaticalization and its results are motivated either by the competing motivations of economy vs. iconicity/explicitness in functional typology or by a change from movement to merger in the minimalist program. Pragmatic inference is of central importance for initiating processes of grammaticalization (and maybe also at later stages), and it activates mechanisms like reanalysis and analogy, whose status is controversial in the literature. Finally, grammaticalization not only works within individual languages/varieties but also operates across languages. In situations of contact, the existence of a certain grammatical category in one language may induce grammaticalization in another language. On (c): Even though it is hard to measure degrees of grammaticalization in terms of absolute and exact figures, it is possible to determine relative degrees of grammaticalization in terms of the autonomy of linguistic signs. Moreover, more recent research has come up with criteria for distinguishing grammaticalization and lexicalization (defined as the loss of productivity, transparency, and/or compositionality of formerly productive, transparent, and compositional structures). In spite of these findings, there are still quite a number of questions that need further research. Two questions to be discussed address basic issues concerning the overall properties of grammaticalization. (1) What is the relation between constructions and grammaticalization? In the more traditional view, constructions are seen as the syntactic framework within which linguistic items are grammaticalized. In more recent approaches based on construction grammar, constructions are defined as combinations of form and meaning. Thus, grammaticalization can be seen in the light of constructionalization, i.e., the creation of new combinations of form and meaning. Even though constructionalization covers many aspects of grammaticalization, it does not exhaustively cover the domain of grammaticalization. (2) Is grammaticalization cross-linguistically homogeneous, or is there a certain range of variation? There is evidence from East and mainland Southeast Asia that there is cross-linguistic variation to some extent.

Article

Annie Zaenen

Hearers and readers make inferences on the basis of what they hear or read. These inferences are partly determined by the linguistic form that the writer or speaker chooses to give to her utterance. The inferences can be about the state of the world that the speaker or writer wants the hearer or reader to conclude is pertinent, or they can be about the attitude of the speaker or writer vis-à-vis this state of affairs. The focus here is on inferences of the first type. Research in semantics and pragmatics has isolated a number of linguistic phenomena that make specific contributions to the process of inference. Broadly, entailments of asserted material, presuppositions (e.g., factive constructions), and invited inferences (especially scalar implicatures) can be distinguished. While we make these inferences all the time, they have been studied only piecemeal in theoretical linguistics. When attempts are made to build natural language understanding systems, the need for a more systematic and wholesale approach to the problem is felt. Some of the approaches developed in Natural Language Processing are based on linguistic insights, whereas others use methods that do not require (full) semantic analysis. In this article, I give an overview of the main linguistic issues and of a variety of computational approaches, especially those stimulated by the Recognizing Textual Entailment (RTE) challenges first proposed in 2004.
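
To make the contrast between linguistically informed and shallower computational approaches concrete, here is a minimal sketch of a word-overlap baseline of the kind sometimes used as a starting point in textual entailment work; the sentence pairs, the threshold, and the helper functions are invented for this illustration and are not taken from the article.

```python
# Illustrative shallow baseline for textual entailment: predict "entailed"
# when most of the hypothesis's words already occur in the text.
# The threshold and example pairs are assumptions made for this sketch.
import re

def words(s: str) -> set:
    return set(re.findall(r"[a-z']+", s.lower()))

def overlap(text: str, hypothesis: str) -> float:
    h = words(hypothesis)
    return len(h & words(text)) / len(h) if h else 0.0

def predict_entailed(text: str, hypothesis: str, threshold: float = 0.75) -> bool:
    return overlap(text, hypothesis) >= threshold

pairs = [
    ("The problem was solved by Kim on Tuesday.", "Kim solved the problem."),
    ("Kim doubted that the problem was ever solved.", "Kim solved the problem."),
]

for text, hyp in pairs:
    print(f"{overlap(text, hyp):.2f} entailed={predict_entailed(text, hyp)}  {text!r} => {hyp!r}")
```

The baseline gets the first pair right but is fooled by the second, where the embedding verb doubted blocks the inference despite complete lexical overlap; cases like this are exactly where the entailment, presupposition, and implicature phenomena surveyed above have to be brought into the computational picture.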

Article

Elizabeth Closs Traugott

Traditional approaches to semantic change typically focus on outcomes of meaning change and list types of change such as metaphoric and metonymic extension, broadening and narrowing, and the development of positive and negative meanings. Examples are usually considered out of context, and are lexical members of nominal and adjectival word classes. However, language is a communicative activity that is highly dependent on context, whether that of the ongoing discourse or of social and ideological changes. Much recent work on semantic change has focused, not on results of change, but on pragmatic enabling factors for change in the flow of speech. Attention has been paid to the contributions of cognitive processes, such as analogical thinking, production of cues as to how a message is to be interpreted, and perception or interpretation of meaning, especially in grammaticalization. Mechanisms of change such as metaphorization, metonymization, and subjectification have been among topics of special interest and debate. The work has been enabled by the fine-grained approach to contextual data that electronic corpora allow.

Article

Nicholas Allott

Conversational implicatures (i) are implied by the speaker in making an utterance; (ii) are part of the content of the utterance, but (iii) do not contribute to direct (or explicit) utterance content; and (iv) are not encoded by the linguistic meaning of what has been uttered. In (1), Amelia asserts that she is on a diet, and implicates something different: that she is not having cake. (1) Benjamin: Are you having some of this chocolate cake? Amelia: I’m on a diet. Conversational implicatures are a subset of the implications of an utterance: namely those that are part of utterance content. Within the class of conversational implicatures, there are distinctions between particularized and generalized implicatures; implicated premises and implicated conclusions; and weak and strong implicatures. An obvious question is how implicatures are possible: how can a speaker intentionally imply something that is not part of the linguistic meaning of the phrase she utters, and how can her addressee recover that utterance content? Working out what has been implicated is not a matter of deduction, but of inference to the best explanation. What is to be explained is why the speaker has uttered the words that she did, in the way and in the circumstances that she did. Grice proposed that rational talk exchanges are cooperative and are therefore governed by a Cooperative Principle (CP) and conversational maxims: hearers can reasonably assume that rational speakers will attempt to cooperate and that rational cooperative speakers will try to make their contribution truthful, informative, relevant and clear, inter alia, and these expectations therefore guide the interpretation of utterances. On his view, since addressees can infer implicatures, speakers can take advantage of this ability, conveying implicatures by exploiting the maxims. Grice’s theory aimed to show how implicatures could in principle arise. In contrast, work in linguistic pragmatics has attempted to model their actual derivation. Given the need for a cognitively tractable decision procedure, both the neo-Gricean school and work on communication in relevance theory propose a system with fewer principles than Grice’s. Neo-Gricean work attempts to reduce Grice’s array of maxims to just two (Horn) or three (Levinson), while Sperber and Wilson’s relevance theory rejects maxims and the CP and proposes that pragmatic inference hinges on a single communicative principle of relevance. Conversational implicatures typically have a number of interesting properties, including calculability, cancelability, nondetachability, and indeterminacy. These properties can be used to investigate whether a putative implicature is correctly identified as such, although none of them provides a fail-safe test. A further test, embedding, has also been prominent in work on implicatures. A number of phenomena that Grice treated as implicatures would now be treated by many as pragmatic enrichment contributing to the proposition expressed. But Grice’s postulation of implicatures was a crucial advance, both for its theoretical unification of apparently diverse types of utterance content and for the attention it drew to pragmatic inference and the division of labor between linguistic semantics and pragmatics in theorizing about verbal communication.

Article

Deirdre Wilson

Relevance theory is a cognitive approach to pragmatics which starts from two broadly Gricean assumptions: (a) that much human communication, both verbal and non-verbal, involves the overt expression and inferential recognition of intentions, and (b) that in inferring these intentions, the addressee presumes that the communicator’s behavior will meet certain standards, which for Grice are based on a Cooperative Principle and maxims, and for relevance theory are derived from the assumption that, as a result of constant selection pressures in the course of human evolution, both cognition and communication are relevance-oriented. Relevance is defined in terms of cognitive (or contextual) effects and processing effort: other things being equal, the greater the cognitive effects and the smaller the processing effort, the greater the relevance. A long-standing aim of relevance theory has been to show that building an adequate theory of communication involves going beyond Grice’s notion of speaker’s meaning. Another is to provide a conceptually unified account of how a much broader variety of communicative acts than Grice was concerned with—including cases of both showing that and telling that—are understood. The resulting pragmatic theory differs from Grice’s in several respects. It sees explicit communication as much richer and more inferential than Grice thought, with encoded sentence meanings providing no more than clues to the speaker’s intentions. It rejects the close link that Grice saw between implicit communication and (real or apparent) maxim violation, showing in particular how figurative utterances might arise naturally and spontaneously in the course of communication. It offers an account of vagueness or indeterminacy in communication, which is often abstracted away from in more formally oriented frameworks. It investigates the role of context in comprehension, and shows how tentative hypotheses about the intended combination of explicit content, contextual assumptions, and implicatures might be refined and mutually adjusted in the course of the comprehension process in order to satisfy expectations of relevance. Relevance theory treats the borderline between semantics and pragmatics as co-extensive with the borderline between (linguistic) decoding and (pragmatic) inference. It sees encoded sentence meanings as typically fragmentary and incomplete, and as having to undergo inferential enrichment or elaboration in order to yield fully propositional forms. It reanalyzes Grice’s conventional implicatures—which he saw as semantic but non-truth-conditional aspects of the meaning of words like but and so—as encoding procedural information with dedicated pragmatic or more broadly cognitive functions, and extends the notion of procedural meaning to a range of further items such as pronouns, discourse particles, mood indicators, and affective intonation.

Article

Andrew Hippisley

The morphological machinery of a language is at the service of syntax, but the service can be poor. A request may result in the wrong item (deponency), or in an item the syntax already has (syncretism), or in an abundance of choices (inflectional classes or morphological allomorphy). Network Morphology regulates the service by recreating the morphosyntactic space as a network of information sharing nodes, where sharing is through inheritance, and inheritance can be overridden to allow for the regular, irregular, and, crucially, the semiregular. The network expresses the system; the way the network can be accessed expresses possible deviations from the systematic. And so Network Morphology captures the semi-systematic nature of morphology. The key data used to illustrate Network Morphology are noun inflections in the West Slavonic language Lower Sorbian, which has three genders, a rich case system and three numbers. These data allow us to observe how Network Morphology handles inflectional allomorphy, syncretism, feature neutralization, and irregularity. Latin deponent verbs are used to illustrate a Network Morphology account of morphological mismatch, where morphosyntactic features used in the syntax are expressed by morphology regularly used for different features. The analysis points to a separation of syntax and morphology in the architecture of the grammar. An account is given of Russian nominal derivation which assumes such a separation, and is based on viewing derivational morphology as lexical relatedness. Areas of the framework receiving special focus include default inheritance, global and local inheritance, default inference, and orthogonal multiple inheritance. The various accounts presented are expressed in the lexical knowledge representation language DATR, due to Roger Evans and Gerald Gazdar.
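
As a rough illustration of how default inheritance with overrides works, the sketch below encodes a tiny inheritance network in Python rather than in DATR; the node names and endings are invented for the example and are not the Lower Sorbian or Latin data discussed in the article.

```python
# Toy default-inheritance network in the spirit of Network Morphology,
# written in Python rather than DATR. Node names and endings are invented.

NETWORK = {
    "NOUN": {                    # default node supplying general endings
        "sg nom": "-o",
        "sg acc": "-u",
        "pl nom": "-y",
        "pl acc": "-y",          # nominative/accusative syncretism in the plural
    },
    "N_IRREG": {                 # a subclass that overrides one default
        "inherits": "NOUN",
        "sg acc": "-o",          # local override: accusative falls together with nominative
    },
}

def lookup(node, feature):
    """Return the exponent for a feature, walking up the inheritance chain."""
    while node is not None:
        entry = NETWORK[node]
        if feature in entry:
            return entry[feature]
        node = entry.get("inherits")
    raise KeyError(feature)

print(lookup("N_IRREG", "sg acc"))  # "-o": the override wins over the default
print(lookup("N_IRREG", "pl nom"))  # "-y": everything else is inherited from NOUN
```

The only point of the sketch is the mechanism itself: a node states just what is exceptional about it and inherits everything else, which is how such a network accommodates the regular, the irregular, and the semiregular alike.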