1-5 of 5 Results

  • Keywords: context

Article

Francis Jeffry Pelletier

Most linguists have heard of semantic compositionality. Some will have heard that it is the fundamental truth of semantics. Others will have been told that it is so thoroughly and completely wrong that it is astonishing that it is still being taught. The present article attempts to explain all this. Much of the discussion of semantic compositionality takes place in three arenas that are rather insulated from one another: (a) philosophy of mind and language, (b) formal semantics, and (c) cognitive linguistics and cognitive psychology. A truly comprehensive overview of the writings in all these areas is not possible here. However, this article does discuss some of the work that occurs in each of these areas. A bibliography of general works, and some Internet resources, will help guide the reader to some further, undiscussed works (including further material in all three categories).

Article

Game theory provides formal means of representing and explaining action choices in social decision situations where the choices of one participant depend on the choices of another. Game theoretic pragmatics approaches language production and interpretation as a game in this sense. Patterns in language use are explained as optimal, rational, or at least nearly optimal or rational solutions to a communication problem. Three intimately related perspectives on game theoretic pragmatics are sketched here: (i) the evolutionary perspective explains language use as the outcome of some optimization process, (ii) the rationalistic perspective pictures language use as a form of rational decision-making, and (iii) the probabilistic reasoning perspective considers specifically speakers’ and listeners’ beliefs about each other. There are clear commonalities behind these three perspectives, and they may in practice blend into each other. At the heart of game theoretic pragmatics lies the idea that speaker behavior and listener behavior, when it comes to using a language with a given semantic meaning, are attuned to each other. By focusing on the evolutionary or rationalistic perspective, we can then give a functional account of general patterns in our pragmatic language use. The probabilistic reasoning perspective invites modeling actual speaker and listener behavior, for example, as it is reflected in quantitative aspects of experimental data.
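
The probabilistic reasoning perspective mentioned in this abstract is often cashed out in a Rational Speech Act style of model: a pragmatic listener reasons about a speaker who reasons about a literal listener. The following is a minimal sketch, not taken from the article; the lexicon, states, and numerical parameters are invented purely for illustration.

```python
import math

# Toy states and messages (hypothetical scalar-implicature example).
MEANINGS = ["some-not-all", "all"]
UTTERANCES = ["some", "all"]
# Literal semantics: which utterances are true of which states.
TRUE_IN = {
    ("some", "some-not-all"): True,  ("some", "all"): True,
    ("all",  "some-not-all"): False, ("all",  "all"): True,
}

def literal_listener(utt):
    """P(state | utt), proportional to literal truth with a uniform prior."""
    weights = {s: 1.0 if TRUE_IN[(utt, s)] else 0.0 for s in MEANINGS}
    total = sum(weights.values())
    return {s: w / total for s, w in weights.items()}

def speaker(state, rationality=4.0):
    """P(utt | state): soft-max over how informative each true utterance is."""
    scores = {u: math.exp(rationality * math.log(literal_listener(u)[state]))
                 if TRUE_IN[(u, state)] else 0.0
              for u in UTTERANCES}
    total = sum(scores.values())
    return {u: sc / total for u, sc in scores.items()}

def pragmatic_listener(utt):
    """P(state | utt), proportional to the speaker's probability of choosing utt."""
    weights = {s: speaker(s)[utt] for s in MEANINGS}
    total = sum(weights.values())
    return {s: w / total for s, w in weights.items()}

# Hearing "some", the pragmatic listener infers "some but not all" is likely.
print(pragmatic_listener("some"))
```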

Article

Klaus Beyer and Henning Schreiber

The Social Network Analysis approach (SNA), also known as sociometrics or actor-network analysis, investigates social structure on the basis of empirically recorded social ties between actors. It aims to explain, for example, the flow of information, the spread of innovations, or even of pathogens through the network in terms of actor roles and their relative positions, on the basis of quantitative and qualitative analyses. While the approach has a strong mathematical and statistical component, the identification of pertinent social ties also requires a solid ethnographic background. With regard to social categorization, SNA is well suited as a bootstrapping technique for highly dynamic communities and under-documented contexts. SNA is currently applied in a wide range of academic fields. For sociolinguists, it offers a framework for explaining the patterning of linguistic variation and the mechanisms of language change in a given speech community. The social tie perspective developed around 1940 in sociology and social anthropology, building on the ideas of Simmel, and was later applied in fields such as innovation theory. In sociolinguistics, it is strongly connected to the seminal work of Lesley and James Milroy and their Belfast studies (1978, 1985). These authors demonstrate that synchronic speaker variation is not only governed by broad societal categories but is also a function of communicative interaction between speakers. They argue that the high level of resistance to linguistic change in the studied community results from strong and multiplex ties between the actors. Their approach has been taken up by various authors, including Gal, Lippi-Green, and Labov, and discussed for a variety of settings, most of which, however, are located in the Western world. Its methodological advantages could make SNA the preferred framework for variation studies in Africa, given the prevailing dynamic multilingual conditions, often against the backdrop of less standardized languages. So far, however, rather few studies have used SNA as a framework, possibly because of its quite demanding methodological requirements, the overall effort involved, and the often highly complex linguistic backgrounds. A further potential obstacle is the pace of theoretical development in SNA: since its introduction to sociolinguistics, the fast-growing SNA community has developed various new measures and statistical techniques, and keeping up with this vast amount of recent literature and testing new concepts is itself a challenge for the application of SNA in sociolinguistics. Nevertheless, the overall methodological effort of SNA has been greatly reduced by advances in recording technology and data processing and by the introduction of SNA software (UCINET) and packages for network statistics in R (‘sna’). In the field of African sociolinguistics, a more recent version of SNA has been implemented in a study on contact-induced variation and change in Pana and Samo, two speech communities in northwestern Burkina Faso. Moreover, further enhanced applications are under way for Senegal and Cameroon, and more applications in the field of African languages are to be expected.
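
The kinds of measures this abstract alludes to (tie strength, multiplexity, actor position) can be computed with standard network libraries. The article itself refers to UCINET and the R ‘sna’ package; the sketch below uses Python's networkx only as a convenient stand-in, and the speakers and ties are invented for illustration.

```python
import networkx as nx

g = nx.Graph()
# Each edge records how many kinds of relation link two speakers
# (kin, neighbor, workmate, ...), a rough proxy for multiplexity.
ties = [
    ("A", "B", {"kin", "neighbor", "work"}),
    ("A", "C", {"neighbor"}),
    ("B", "C", {"work"}),
    ("C", "D", {"kin", "work"}),
]
for u, v, relations in ties:
    g.add_edge(u, v, multiplexity=len(relations))

# Actor position: who is central, and who brokers between otherwise
# weakly connected parts of the network.
print(nx.degree_centrality(g))
print(nx.betweenness_centrality(g))

# Dense, multiplex neighborhoods are the configurations the Milroys
# associate with resistance to linguistic change.
for node in g.nodes:
    strong = [n for n in g.neighbors(node) if g[node][n]["multiplexity"] > 1]
    print(node, "multiplex ties:", strong)
```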

Article

Deirdre Wilson

Relevance theory is a cognitive approach to pragmatics which starts from two broadly Gricean assumptions: (a) that much human communication, both verbal and non-verbal, involves the overt expression and inferential recognition of intentions, and (b) that in inferring these intentions, the addressee presumes that the communicator’s behavior will meet certain standards, which for Grice are based on a Cooperative Principle and maxims, and for relevance theory are derived from the assumption that, as a result of constant selection pressures in the course of human evolution, both cognition and communication are relevance-oriented. Relevance is defined in terms of cognitive (or contextual) effects and processing effort: other things being equal, the greater the cognitive effects and the smaller the processing effort, the greater the relevance. A long-standing aim of relevance theory has been to show that building an adequate theory of communication involves going beyond Grice’s notion of speaker’s meaning. Another is to provide a conceptually unified account of how a much broader variety of communicative acts than Grice was concerned with—including cases of both showing that and telling that—are understood. The resulting pragmatic theory differs from Grice’s in several respects. It sees explicit communication as much richer and more inferential than Grice thought, with encoded sentence meanings providing no more than clues to the speaker’s intentions. It rejects the close link that Grice saw between implicit communication and (real or apparent) maxim violation, showing in particular how figurative utterances might arise naturally and spontaneously in the course of communication. It offers an account of vagueness or indeterminacy in communication, which is often abstracted away from in more formally oriented frameworks. It investigates the role of context in comprehension, and shows how tentative hypotheses about the intended combination of explicit content, contextual assumptions, and implicatures might be refined and mutually adjusted in the course of the comprehension process in order to satisfy expectations of relevance. Relevance theory treats the borderline between semantics and pragmatics as co-extensive with the borderline between (linguistic) decoding and (pragmatic) inference. It sees encoded sentence meanings as typically fragmentary and incomplete, and as having to undergo inferential enrichment or elaboration in order to yield fully propositional forms. It reanalyzes Grice’s conventional implicatures—which he saw as semantic but non-truth-conditional aspects of the meaning of words like but and so—as encoding procedural information with dedicated pragmatic or more broadly cognitive functions, and extends the notion of procedural meaning to a range of further items such as pronouns, discourse particles, mood indicators, and affective intonation.
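
Relevance theory does not define relevance numerically, but the comparative definition and the comprehension procedure summarized above can be caricatured in a short toy model, given here only as an illustration and not as the theory's own formalism: interpretations are tested in order of accessibility (roughly, processing effort), and the hearer stops at the first one whose cognitive effects satisfy the expectation of relevance. All glosses and numbers below are invented.

```python
# Toy caricature of the relevance-theoretic comprehension heuristic
# (invented values): test interpretations in order of effort and stop
# at the first that satisfies the expectation of relevance.

# Candidate interpretations of an utterance, each with a rough
# processing-effort cost and a rough cognitive-effects score.
candidates = [
    {"gloss": "bare literal reading",          "effort": 1, "effects": 0},
    {"gloss": "enriched, contextual reading",  "effort": 2, "effects": 5},
    {"gloss": "far-fetched reading",           "effort": 6, "effects": 7},
]

def comprehend(interpretations, expected_effects=3):
    """Follow a path of least effort; stop once expectations of relevance are met."""
    for reading in sorted(interpretations, key=lambda r: r["effort"]):
        if reading["effects"] >= expected_effects:
            return reading["gloss"]
    return None  # no sufficiently relevant interpretation found

print(comprehend(candidates))   # -> "enriched, contextual reading"
```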

Article

Béatrice Godart-Wendling

The term “philosophy of language” is intrinsically paradoxical: it denominates the main philosophical current of the 20th century yet has no univocal definition. While the emergence of this current rested on the idea that philosophical questions were only language problems that could be elucidated through logico-linguistic analysis, interest in this approach gave rise to philosophical theories that, although some of them share points of convergence, developed very different philosophical conceptions. The only constant across all these theories is the recognition that this current of thought originated in the work of Gottlob Frege (1848–1925), thus marking what was to be called “the linguistic turn.” Despite the theoretical diversity within the philosophy of language, the history of this current can nevertheless be traced through four stages. The first began in 1892 with Frege’s paper “Über Sinn und Bedeutung” and aimed to clarify language by using the rules of logic. The Fregean principle underpinning this program was that psychological considerations must be banished from linguistic analysis in order to avoid associating the meaning of words with mental pictures or states. The work of Frege, Bertrand Russell (1872–1970), G. E. Moore (1873–1958), Ludwig Wittgenstein (in the Tractatus, 1921), Rudolf Carnap (1891–1970), and Willard Van Orman Quine (1908–2000) is representative of this period. In this logicist point of view, the questions raised mainly concerned syntax and semantics, since the goal was to define a formalism able to represent the structure of propositions and to explain how language can describe the world by mirroring it. The problem specific to this period was therefore the representation of the world by language, which placed the notions of reference, meaning, and truth at the heart of the philosophical debate. The second phase of the philosophy of language was adumbrated in the 1930s with the courses given by Wittgenstein (1889–1951) in Cambridge (The Blue and Brown Books), but it did not really take off until the 1950s and 1960s with the work of Peter Strawson (1919–2006), the later Wittgenstein (Philosophical Investigations, 1953), John Austin (1911–1960), and John Searle (1932–). In spite of the very different approaches developed by these theorists, two main ideas characterized this period: first, that only the examination of natural (also called “ordinary”) language can give access to an understanding of how language functions, and second, that the specificity of this language resides in its ability to perform actions. It was therefore no longer a question of analyzing language in logical terms, but rather of considering it in itself, by examining the meaning of statements as they are used in given contexts. In this perspective, the pivotal concepts explored by philosophers became those of (situated) meaning, felicity conditions, use, and context. The beginning of the 1970s initiated the third phase of this movement by orienting research in two quite distinct directions. The first, resulting from the work on proper names, natural-kind words, and indexicals undertaken by the logician philosophers Saul Kripke (1940–), David Lewis (1941–2001), Hilary Putnam (1926–2016), and David Kaplan (1933–), brought credibility to possible-worlds semantics. The second, conducted by Paul Grice (1913–1988) on human communicational rationality, harked back to the psychologism dismissed by Frege and conceived of the functioning of language as highly dependent on a theory of mind.
The focus was then put on the inferences that the different protagonists in a linguistic exchange construct from the recognition of hidden intentions in the discourse of others. In this perspective, the concepts of implicitness, relevance, and cognitive efficiency became central and required a greater number of contextual parameters to account for them. In the wake of this research, many theorists turned to the philosophy of mind, as evidenced in the late 1980s by the work on relevance by Dan Sperber (1942–) and Deirdre Wilson (1941–). The contemporary period, marked by the thinking of Robert Brandom (1950–) and Charles Travis (1943–), is characterized by its orientation toward a radical contextualism and a return of inferentialism that draws strongly on Frege. Within these theoretical frameworks, the notions of truth and reference no longer fall within the field of semantics but rather of pragmatics. The emphasis is placed on the commitment that speakers make when they speak, as well as on their responsibility with respect to their utterances.