Katie Wagner and David Barner
Human experience of color results from a complex interplay of perceptual and linguistic systems. At the lowest level of perception, the human visual system transforms the visible light portion of the electromagnetic spectrum into a rich, continuous three-dimensional experience of color. Despite our ability to perceptually discriminate millions of different color shades, most languages carve color into a limited number of discrete categories. While the meanings of color words are constrained by perception, perception does not fully define them. Once color words are acquired, they may in turn influence our memory and processing speed for color, although it is unlikely that language influences the lowest levels of color perception.
One approach to examining the relationship between perception and language in forming our experience of color is to study children as they acquire color language. Children produce color words in speech for many months before acquiring adult meanings for color words. Research in this area has focused on whether children’s difficulties stem from (a) an inability to identify color properties as a likely candidate for word meanings, or alternatively (b) inductive learning of language-specific color word boundaries. Lending plausibility to the first account, there is evidence that children more readily attend to object traits like shape, rather than color, as likely candidates for word meanings. However, recent evidence has found that children have meanings for some color words before they begin to produce them in speech, indicating that they may in fact be able to successfully identify color as a candidate for word meaning early in the color word learning process. There is also evidence that prelinguistic infants, like adults, perceive color categorically. While these perceptual categories likely constrain the meanings that children consider, they cannot fully define color word meanings because languages vary in both the number and location of color word boundaries. Recent evidence suggests that the delay in color word acquisition primarily stems from an inductive process of refining these boundaries.
Myrto Grigoroglou and Anna Papafragou
To become competent communicators, children need to learn that what a speaker means often goes beyond the literal meaning of what the speaker says. The field of pragmatic acquisition studies how children learn to bridge the gap between the semantic meaning of words and structures and the intended meaning of an utterance. Of interest is whether young children are capable of reasoning about others’ intentions and how this ability develops over time.
For a long period, estimates of children’s pragmatic sophistication were mostly pessimistic: early work on a number of phenomena showed that very young communicators were egocentric, oblivious to other interlocutors’ intentions, and overall insensitive to subtle pragmatic aspects of interpretation. Recent years have seen major shifts in the study of children’s pragmatic development. Novel methods and more fine-grained theoretical approaches have led to a reconsideration of older findings on how children acquire pragmatics across a number of phenomena and have produced a wealth of new evidence and theories.
Three areas that have generated a considerable body of developmental work on pragmatics include reference (the relation between words or phrases and entities in the world), implicature (a type of inferred meaning that arises when a speaker violates conversational rules), and metaphor (a case of figurative language). Findings from these three domains suggest that children actively use pragmatic reasoning to delimit potential referents for newly encountered words, can take into account the perspective of a communicative partner, and are sensitive to some aspects of implicated and metaphorical meaning. Nevertheless, children’s success with pragmatic communication is fragile and task-dependent.
Lawrence D. Rosenblum
Research on visual and audiovisual speech information has profoundly influenced the fields of psycholinguistics, perception psychology, and cognitive neuroscience. Visual speech findings have provided some of the most important human demonstrations of our new conception of the perceptual brain as being supremely multimodal. This “multisensory revolution” has seen tremendous growth in research on how the senses integrate, cross-facilitate, and share their experience with one another.
The ubiquity and apparent automaticity of multisensory speech has led many theorists to propose that the speech brain is agnostic with regard to sense modality: it might not know or care from which modality speech information comes. Instead, the speech function may act to extract supramodal informational patterns that are common in form across energy streams. Alternatively, other theorists have argued that any common information existent across the modalities is minimal and rudimentary, so that multisensory perception largely depends on the observer’s associative experience between the streams. From this perspective, the auditory stream is typically considered primary for the speech brain, with visual speech simply appended to its processing. If the utility of multisensory speech is a consequence of a supramodal informational coherence, then cross-sensory “integration” may be primarily a consequence of the informational input itself. If true, then one would expect to see evidence for integration occurring early in the perceptual process, as well as in a largely complete and automatic/impenetrable manner. Alternatively, if multisensory speech perception is based on associative experience between the modal streams, then no constraints are dictated on how completely or automatically the senses integrate. There is behavioral and neurophysiological research supporting both perspectives.
Much of this research is based on testing the well-known McGurk effect, in which audiovisual speech information is thought to integrate to the extent that visual information can affect what listeners report hearing. However, there is now good reason to believe that the McGurk effect is not a valid test of multisensory integration. For example, there are clear cases in which responses indicate that the effect fails, while other measures suggest that integration is actually occurring. By mistakenly conflating the McGurk effect with speech integration itself, interpretations of the completeness and automaticity of multisensory integration may be incorrect. Future research should use more sensitive behavioral and neurophysiological measures of cross-modal influence to examine these issues.
There are two main theoretical traditions in semantics. One is based on realism, where meanings are described as relations between language and the world, often in terms of truth conditions. The other is cognitivistic, where meanings are identified with mental structures. This article presents some of the main ideas and theories within the cognitivist approach.
A central tenet of cognitively oriented theories of meaning is that there are close connections between the meaning structures and other cognitive processes. In particular, parallels between semantics and visual processes have been studied. As a complement, the theory of embodied cognition focuses on the relation between actions and components of meaning.
One of the main methods of representing cognitive meaning structures is to use image schemas and idealized cognitive models. Such schemas focus on spatial relations between various semantic elements. Image schemas are often constructed using Gestalt psychological notions, including those of trajector and landmark, corresponding to figure and ground. In this tradition, metaphors and metonymies are considered to be central meaning-transforming processes.
A related approach is force dynamics. Here, the semantic schemas are construed from forces and their relations rather than from spatial relations. Recent extensions involve cognitive representations of actions and events, which then form the basis for a semantics of verbs.
A third approach is the theory of conceptual spaces. In this theory, meanings are represented as regions of semantic domains such as space, time, color, weight, size, and shape. For example, strong evidence exists that color words in a large variety of languages correspond to such regions. This approach has been extended to a general account of the semantics of some of the main word classes, including adjectives, verbs, and prepositions. The theory of conceptual spaces shows similarities to the older frame semantics and feature analysis, but it puts more emphasis on geometric structures.
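The idea of color terms as regions of a conceptual space can be made concrete with a small sketch: each term's extension is the Voronoi cell around a category prototype, so classification reduces to finding the nearest prototype. The term inventory and the prototype coordinates below are invented for illustration and are not drawn from the article or from empirical data.

```python
# Sketch: color terms as regions of a conceptual space, modeled as the
# Voronoi tessellation around category prototypes. Coordinates are
# illustrative placeholders in a CIELAB-like (L, a, b) space.
import math

PROTOTYPES = {
    "red":    (53, 80, 67),
    "green":  (88, -86, 83),
    "blue":   (32, 79, -108),
    "yellow": (97, -22, 94),
}

def nearest_term(point):
    """Assign a color term by nearest prototype (Euclidean distance),
    so each term's extension is a convex region of the space."""
    return min(PROTOTYPES, key=lambda t: math.dist(point, PROTOTYPES[t]))

print(nearest_term((50, 70, 50)))  # a point near the 'red' prototype
```

The nearest-prototype rule partitions the space into convex cells, in line with the theory's emphasis on geometric structure rather than feature lists.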
A general criticism against cognitive theories of semantics is that they only consider the meaning structures of individuals, but neglect the social aspects of semantics, that is, that meanings are shared within a community. Recent theoretical proposals counter this by suggesting that semantics should be seen as a meeting of minds, that is, communicative processes that lead to the alignment of meanings between individuals. On this approach, semantics is seen as a product of communication, constrained by the cognitive mechanisms of the individuals.
Morphological defectiveness refers to situations where one or more paradigmatic forms of a lexeme are not realized, without plausible syntactic, semantic, or phonological causes. The phenomenon tends to be associated with low-frequency lexemes and loanwords. Typically, defectiveness is gradient, lexeme-specific, and sensitive to the internal structure of paradigms.
The existence of defectiveness is a challenge to acquisition models and to morphological theories that posit elsewhere (default) operations for materializing items. For this reason, defectiveness has become a rich field of research in recent years, with distinct approaches that view it as an item-specific idiosyncrasy, as an epiphenomenal result of rule competition, or as a normal morphological alternation within a paradigmatic space.
William F. Hanks
Deictic expressions, like English ‘this’, ‘that’, ‘here’, and ‘there’, occur in all known human languages. They are typically used to individuate objects in the immediate context in which they are uttered, by pointing at them so as to direct attention to them. The object, or demonstratum, is singled out as a focus, and a successful act of deictic reference is one that results in the Speaker (Spr) and Addressee (Adr) attending to the same referential object. Thus,
(1) A: Oh, there’s that guy again (pointing)
    B: Oh yeah, now I see him (fixing gaze on the guy)

(2) A: I’ll have that one over there (pointing to a dessert on a tray)
    B: This? (touching pastry with tongs)
    A: Yeah, that looks great
    B: Here ya’ go (handing pastry to customer)
In an exchange like (1), A’s utterance spotlights the individual guy, directing B’s attention to him, and B’s response (both verbal and ocular) displays that he has recognized him. In (2) A’s utterance individuates one pastry among several, B’s response makes sure he’s attending to the right one, A reconfirms, and B completes the exchange by presenting the pastry to him. If we compare the two examples, it is clear that the deictics (‘that’, ‘there’, ‘this’, ‘here’) can pick out or present individuals without describing them. In a similar way, “I, you, he/she, we, now, (back) then,” and their analogues are all used to pick out individuals (persons, objects, or time frames), apparently without describing them. As a corollary of this semantic paucity, individual deictics vary extremely widely in the kinds of object they may properly denote: ‘here’ can denote anything from the tip of your nose to planet Earth, and ‘this’ can denote anything from a pastry to an upcoming day (this Tuesday). Under the same circumstance, ‘this’ and ‘that’ can refer appropriately to the same object, depending upon who is speaking, as in (2). How can forms that are so abstract and variable over contexts be so specific and rigid in a given context? On what parameters do deictics and deictic systems in human languages vary, and how do they relate to grammar and semantics more generally?
Carol A. Fowler
The theory of speech perception as direct derives from a general direct-realist account of perception. A realist stance on perception is that perceiving enables occupants of an ecological niche to know its component layouts, objects, animals, and events. “Direct” perception means that perceivers are in unmediated contact with their niche (mediated neither by internally generated representations of the environment nor by inferences made on the basis of fragmentary input to the perceptual systems). Direct perception is possible because energy arrays that have been causally structured by niche components and that are available to perceivers specify (i.e., stand in 1:1 relation to) components of the niche. Typically, perception is multimodal; that is, perception of the environment depends on specifying information present in, or even spanning, multiple energy arrays.
Applied to speech perception, the theory begins with the observation that speech perception involves the same perceptual systems that, in a direct-realist theory, enable direct perception of the environment. Most notably, the auditory system supports speech perception, but also the visual system, and sometimes other perceptual systems. Perception of language forms (consonants, vowels, word forms) can be direct if the forms lawfully cause specifying patterning in the energy arrays available to perceivers. In Articulatory Phonology, the primitive language forms (constituting consonants and vowels) are linguistically significant gestures of the vocal tract, which cause patterning in air and on the face. Descriptions are provided of informational patterning in acoustic and other energy arrays. Evidence is next reviewed that speech perceivers make use of acoustic and cross-modal information about the phonetic gestures constituting consonants and vowels to perceive the gestures.
Significant problems arise for the viability of a theory of direct perception of speech. One is the “inverse problem,” the difficulty of recovering vocal tract shapes or actions from acoustic input. Two other problems arise because speakers coarticulate when they speak. That is, they temporally overlap production of serially nearby consonants and vowels so that there are no discrete segments in the acoustic signal corresponding to the discrete consonants and vowels that talkers intend to convey (the “segmentation problem”), and there is massive context-sensitivity in acoustic (and optical and other modalities) patterning (the “invariance problem”). The present article suggests solutions to these problems.
The article also reviews signatures of a direct mode of speech perception, including that perceivers use cross-modal speech information when it is available and exhibit various indications of perception-production linkages, such as rapid imitation and a disposition to converge in dialect with interlocutors.
An underdeveloped domain within the theory concerns the very important role of longer- and shorter-term learning in speech perception. Infants develop language-specific modes of attention to acoustic speech signals (and optical information for speech), and adult listeners attune to novel dialects or foreign accents. Moreover, listeners make use of lexical knowledge and statistical properties of the language in speech perception. Some progress has been made in incorporating infant learning into a theory of direct perception of speech, but much less progress has been made in the other areas.
While both pragmatic theory and experimental investigations of language using psycholinguistic methods have been well-established subfields in the language sciences for a long time, the field of Experimental Pragmatics, where such methods are applied to pragmatic phenomena, has only fully taken shape since the early 2000s. By now, however, it has become a major and lively area of ongoing research, with dedicated conferences, workshops, and collaborative grant projects, bringing together researchers with linguistic, psychological, and computational approaches across disciplines. Its scope includes virtually all meaning-related phenomena in natural language comprehension and production, with a particular focus on what inferences utterances give rise to that go beyond what is literally expressed by the linguistic material.
One general area that has been explored in great depth consists of investigations of various ‘ingredients’ of meaning. A major aim has been to develop experimental methodologies to help classify various aspects of meaning, such as implicatures and presuppositions as compared to basic truth-conditional meaning, and to capture their properties more thoroughly using more extensive empirical data. The study of scalar implicatures (e.g., the inference that some but not all students left based on the sentence Some students left) has served as a catalyst of sorts in this area, and they constitute one of the most well-studied phenomena in Experimental Pragmatics to date. But much recent work has expanded the general approach to other aspects of meaning, including presuppositions and conventional implicatures, as well as nonliteral meaning such as irony, metonymy, and metaphor.
The study of reference constitutes another core area of research in Experimental Pragmatics, and has a more extensive history of precursors in psycholinguistics proper. Reference resolution commonly requires drawing inferences beyond what is conventionally conveyed by the linguistic material at issue as well; the key concern is how comprehenders grasp the referential intentions of a speaker based on the referential expressions used in a given context, as well as how the speaker chooses an appropriate expression in the first place. Pronouns, demonstratives, and definite descriptions are crucial expressions of interest, with special attention to their relation to both intra- and extralinguistic context. Furthermore, one key line of research is concerned with speakers’ and listeners’ capacity to keep track of both their own private perspective and the shared perspective of the interlocutors in actual interaction.
Given the rapid ongoing growth in the field, there is a large number of additional topical areas that cannot all be mentioned here, but the final section of the article briefly mentions further current and future areas of research.
Experimental Semiotics (ES) is a burgeoning new discipline aimed at investigating in the laboratory the development of novel forms of human communication. Conceptually connected to experimental research on language use, ES provides a scientific complement to field studies of spontaneously emerging new languages and studies on the emergence of communication systems among artificial agents.
ES researchers have created quite a few research paradigms to investigate the development of novel forms of human communication. Despite their diversity, these paradigms all rely on the use of semiotic games, that is, games in which people can succeed reliably only after they have developed novel communication systems. Some of these games involve creating novel signs for pre-specified meanings. These games are particularly suitable for studying relatively large communication systems and their structural properties. Other semiotic games involve establishing shared meanings as well as novel signs to communicate about them. These games are typically rather challenging and are particularly suitable for investigating the processes through which novel forms of communication are created.
Considering that ES is a methodological stance rather than a well-defined research theme, researchers have used it to address a highly heterogeneous set of research questions. Despite this, and despite the recent origins of ES, two of these questions have begun to coalesce into relatively coherent research themes.
The first theme originates from the observation that novel communication systems developed in the laboratory tend to acquire features that are similar to key features of natural language. Most notably, they tend (a) to rely on the use of symbols—that is, purely conventional signs—and (b) to adopt a combinatorial design, using a few basic units to express a large number of meanings. ES researchers have begun investigating some of the factors that lead to the acquisition of such features. These investigations suggest two conclusions. The first is that the emergence of symbols depends on the fact that, when repeatedly using non-symbolic signs, people tend to progressively abstract them. The second conclusion is that novel communication systems tend to adopt a combinatorial design more readily when their signs have low degrees of motivation and fade rapidly.
The second research theme originates from the observation that novel communication systems developed in the laboratory tend to begin systematically with motivated—that is, non-symbolic—signs. ES investigations of this tendency suggest that it occurs because motivation helps people bootstrap novel forms of communication. Put another way, these investigations show that it is very difficult for people to bootstrap communication through arbitrary signs.
Game theory provides formal means of representing and explaining action choices in social decision situations where the choices of one participant depend on the choices of another. Game theoretic pragmatics approaches language production and interpretation as a game in this sense. Patterns in language use are explained as optimal, rational, or at least nearly optimal or rational solutions to a communication problem. Three intimately related perspectives on game theoretic pragmatics are sketched here: (i) the evolutionary perspective explains language use as the outcome of some optimization process, (ii) the rationalistic perspective pictures language use as a form of rational decision-making, and (iii) the probabilistic reasoning perspective considers specifically speakers’ and listeners’ beliefs about each other. There are clear commonalities behind these three perspectives, and they may in practice blend into each other.
At the heart of game theoretic pragmatics lies the idea that speaker and listener behavior, when it comes to using a language with a given semantic meaning, are attuned to each other. By focusing on the evolutionary or rationalistic perspective, we can then give a functional account of general patterns in our pragmatic language use. The probabilistic reasoning perspective invites modeling actual speaker and listener behavior, for example, as it shows in quantitative aspects of experimental data.
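The probabilistic reasoning perspective can be illustrated with a minimal Rational Speech Act (RSA)-style sketch, in which a pragmatic listener reasons about a speaker who in turn reasons about a literal listener. The two-state scalar scenario and the rationality parameter below are illustrative assumptions, not a specific model endorsed by the article.

```python
# Minimal RSA-style sketch: listener and speaker model each other's
# choices over a toy scalar scenario ('some' vs. 'all').
import math

STATES = ["some_not_all", "all"]
UTTS = ["some", "all"]
# Literal semantics: 'some' is true in both states; 'all' only when all left.
TRUE = {("some", "some_not_all"): 1, ("some", "all"): 1,
        ("all", "some_not_all"): 0, ("all", "all"): 1}
ALPHA = 4.0  # speaker rationality parameter (assumed value)

def literal_listener(utt):
    """P(state | utt): literal truth conditions with a uniform prior."""
    scores = [TRUE[(utt, s)] for s in STATES]
    total = sum(scores)
    return {s: sc / total for s, sc in zip(STATES, scores)}

def speaker(state):
    """P(utt | state): softmax of informativity to the literal listener."""
    utils = [ALPHA * math.log(literal_listener(u)[state] or 1e-10) for u in UTTS]
    total = sum(math.exp(u) for u in utils)
    return {u: math.exp(ut) / total for u, ut in zip(UTTS, utils)}

def pragmatic_listener(utt):
    """P(state | utt): proportional to the speaker's chance of choosing utt."""
    scores = [speaker(s)[utt] for s in STATES]
    total = sum(scores)
    return {s: sc / total for s, sc in zip(STATES, scores)}

# Hearing 'some', the pragmatic listener favors 'some but not all'.
print(pragmatic_listener("some"))
```

Even this toy version derives a scalar implicature: the pragmatic listener concentrates probability on the ‘some but not all’ state, because a rational speaker in the ‘all’ state would have preferred the stronger utterance.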
Interest in the linguistics of humor is widespread and dates back to classical times. Several theoretical models have been proposed to describe and explain the function of humor in language. The most widely adopted one, the semantic-script theory of humor, was presented by Victor Raskin in 1985. Its expansion, which incorporates a broader gamut of information, is known as the General Theory of Verbal Humor. Other approaches are emerging, especially in cognitive and corpus linguistics. Within applied linguistics, the predominant approach is analysis of conversation and discourse, with a focus on the disparate functions of humor in conversation. Speakers may use humor pro-socially, to build in-group solidarity, or anti-socially, to exclude and denigrate the targets of the humor. Most of the research has focused on how humor is co-constructed and used among friends, and how speakers support it. Increasingly, corpus-supported research is beginning to reshape the field, introducing quantitative concerns, as well as multimodal data and analyses. Overall, the linguistics of humor is a dynamic and rapidly changing field.
Kimi Akita and Mark Dingemanse
Ideophones, also termed mimetics or expressives, are marked words that depict sensory imagery. They are found in many of the world’s languages, and sizable lexical classes of ideophones are particularly well-documented in the languages of Asia, Africa, and the Americas. Ideophones are not limited to onomatopoeia like meow and smack but cover a wide range of sensory domains, such as manner of motion (e.g., plisti plasta ‘splish-splash’ in Basque), texture (e.g., tsaklii ‘rough’ in Ewe), and psychological states (e.g., wakuwaku ‘excited’ in Japanese). Across languages, ideophones stand out as marked words due to special phonotactics, expressive morphology including certain types of reduplication, and relative syntactic independence, in addition to production features like prosodic foregrounding and common co-occurrence with iconic gestures.
Three intertwined issues have been repeatedly debated in the century-long literature on ideophones. (a) Definition: Isolated descriptive traditions and cross-linguistic variation have sometimes obscured a typologically unified view of ideophones, but recent advances show the promise of a prototype definition of ideophones as conventionalized depictions in speech, with room for language-specific nuances. (b) Integration: The variable integration of ideophones across linguistic levels reveals an interaction between expressiveness and grammatical integration, and has important implications for how to conceive of dependencies between linguistic systems. (c) Iconicity: Ideophones form a natural laboratory for the study of iconic form-meaning associations in natural languages, and converging evidence from corpus and experimental studies suggests important developmental, evolutionary, and communicative advantages of ideophones.
A fundamental question in epistemological philosophy is whether reason may be based on a priori knowledge—that is, knowledge that precedes and is independent of experience. In modern science, the concept of innateness has been associated with particular behaviors and types of knowledge, which supposedly have been present in the organism since birth (in fact, since fertilization)—prior to any sensory experience with the environment.
This line of investigation has been traditionally linked to two general types of qualities: the first consists of instinctive and inflexible reflexes, traits, and behaviors, which are apparent in survival, mating, and rearing activities. The other relates to language and cognition, with certain concepts, ideas, propositions, and particular ways of mental computation suggested to be part of one’s biological make-up. While both these types of innatism have a long history (e.g., debated by Plato and Descartes), some bias appears to exist in favor of claims for inherent behavioral traits, which are typically accepted when satisfactory empirical evidence is provided. One famous example is Lorenz’s demonstration of imprinting, a natural phenomenon that obeys a predetermined mechanism and schedule (incubator-hatched goslings imprinted on Lorenz’s boots, the first moving object they encountered). Likewise, there seems to be little controversy in regard to predetermined ways of organizing sensory information, as is the case with the detection and classification of shapes and colors by the mind.
In contrast, the idea that certain types of abstract knowledge may be part of an organism’s biological endowment (i.e., not learned) is typically met with a greater sense of skepticism. The most influential and controversial claim for such innate knowledge in modern science is Chomsky’s nativist theory of Universal Grammar in language, which aims to define the extent to which human languages can vary, together with the famous Argument from the Poverty of the Stimulus. The main Chomskyan hypothesis is that all human beings share a preprogrammed linguistic infrastructure consisting of a finite set of general principles, which can generate (through combination or transformation) an infinite number of (only) grammatical sentences. Thus, the innate grammatical system constrains and structures the acquisition and use of all natural languages.
Laurie Beth Feldman and Judith F. Kroll
We summarize findings from across a range of methods, including behavioral measures of overall processing speed and accuracy, electrophysiological indices that tap into the early time course of language processing, and neural measures using structural and functional imaging. We argue that traditional claims about rigid constraints on the ability of late bilinguals to exploit the meaning and form of the morphology and morphosyntax in a second language should be revised, moving away from all-or-none command of structures motivated by strict dichotomies among linguistic categories of morphology. We describe how the dynamics of morphological processing, in monolingual and bilingual speakers alike, are not easily characterized in terms of the potential to decompose words into their constituent morphemes, and how morphosyntactic processing is not easily characterized in terms of categories of structures that are learnable versus unlearnable by bilingual and nonnative speakers. Instead, we emphasize the high degree of variability across individuals, and plasticity within individuals, in the ability to successfully learn and use even subtle aspects of a second language. Further, both of a bilingual’s languages become active when even one language is engaged, and this parallel activation has consequences that shape both languages; the influence is thus not unidirectional, as was traditionally assumed. We briefly discuss the nature of possible constraints and directions for future research.
Computational models of human sentence comprehension help researchers reason about how grammar might actually be used in the understanding process. Taking a cognitivist approach, this article relates computational psycholinguistics to neighboring fields (such as linguistics), surveys important precedents, and catalogs open problems.
Petar Milin and James P. Blevins
Studies of the structure and function of paradigms are as old as the Western grammatical tradition. The central role accorded to paradigms in traditional approaches largely reflects the fact that paradigms exhibit systematic patterns of interdependence that facilitate processes of analogical generalization. The recent resurgence of interest in word-based models of morphological processing and morphological structure more generally has provoked a renewed interest in paradigmatic dimensions of linguistic structure. Current methods for operationalizing paradigmatic relations and determining the behavioral correlates of these relations extend paradigmatic models beyond their traditional boundaries. The integrated perspective that emerges from this work is one in which variation at the level of individual words is not meaningful in isolation, but rather guides the association of words to paradigmatic contexts that play a role in their interpretation.
Marieke Woensdregt and Kenny Smith
Pragmatics is the branch of linguistics that deals with language use in context. It looks at the meaning linguistic utterances can have beyond their literal meaning (implicature), and also at presupposition and turn-taking in conversation. Thus, pragmatics lies on the interface between language and social cognition.
From the point of view of both speaker and listener, doing pragmatics requires reasoning about the minds of others. For instance, a speaker has to think about what knowledge they share with the listener to choose what information to explicitly encode in their utterance and what to leave implicit. A listener has to make inferences about what the speaker meant based on the context, their knowledge about the speaker, and their knowledge of general conventions in language use. This ability to reason about the minds of others (usually referred to as “mindreading” or “theory of mind”) is a cognitive capacity that is uniquely developed in humans compared to other animals.
How did pragmatics (and the underlying ability to make inferences about the minds of others) evolve? Biological evolution and cultural evolution are the two main processes that can lead to the development of a complex behavior over generations, and we can explore to what extent each accounts for what we know about pragmatics.
In biological evolution, changes happen as a result of natural selection on genetically transmitted traits. In cultural evolution, on the other hand, selection happens on skills that are transmitted through social learning. Many hypotheses have been put forward about the role that natural selection may have played in the evolution of social and communicative skills in humans (for example, as a result of changes in food sources, foraging strategy, or group size). The role of social learning and cumulative culture, however, has often been overlooked. This omission is particularly striking in the case of pragmatics: language itself is a prime example of a culturally transmitted skill, and there is solid evidence that the pragmatic capacities so central to language use may themselves be partially shaped by social learning.
In light of empirical findings from comparative, developmental, and experimental research, we can consider the potential contributions of both biological and cultural evolutionary mechanisms to the evolution of pragmatics. The dynamics of these two types of evolutionary processes can also be explored using experiments and computational models.
Daniel Schmidtke and Victor Kuperman
Lexical representations in an individual mind are not open to direct scrutiny. Thus, in theorizing about mental representations, researchers must rely on observable and measurable outcomes of language processing, that is, perception, production, storage, access, and retrieval of lexical information. Morphological research pursues these questions using the full arsenal of analytical tools and experimental techniques that are at the disposal of psycholinguistics. This article outlines the most popular approaches, and aims to provide, for each technique, a brief overview of its procedure in experimental practice. Additionally, the article describes the link between the processing effect(s) that the tool can elicit and the representational phenomena that it may shed light on. The article discusses methods of morphological research in the two major human linguistic faculties—production and comprehension—and provides a separate treatment of spoken, written, and sign language.
Corpora are an all-important resource in linguistics, as they constitute the primary source for large-scale examples of language usage. This has become even more evident in recent years, as the increasing availability of texts in digital format pushes corpus linguistics ever further toward a “big data” approach. As a consequence, the quantitative methods adopted in the field are becoming more sophisticated and varied.
When it comes to morphology, corpora represent a primary source of evidence to describe morpheme usage, and in particular how often a particular morphological pattern is attested in a given language. There is thus a tight relation between corpus linguistics and the study of morphology and the lexicon. This relation, however, can be considered bidirectional. On the one hand, corpora are used as a source of evidence to develop metrics and train computational models of morphology: by means of corpus data it is possible to quantitatively characterize morphological notions such as productivity, and corpus data are fed to computational models to capture morphological phenomena at different levels of description. On the other hand, morphology has also been applied as an organizing principle for corpora. Annotations of linguistic data often adopt morphological notions as guidelines. The resulting information, whether obtained from human annotators or from automatic systems, makes corpora easier to analyze and more convenient to use in a number of applications.
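One concrete illustration of how corpus counts can quantify productivity is Baayen's category-conditioned productivity measure, P = V1/N, where V1 is the number of hapax legomena (types occurring exactly once) among the tokens of a morphological category and N is the total number of tokens in that category. The sketch below uses hypothetical token counts purely for illustration:

```python
from collections import Counter

def baayen_productivity(tokens):
    """Baayen's category-conditioned productivity P = V1 / N:
    the proportion of hapax legomena (types seen exactly once)
    among all tokens of a morphological category."""
    counts = Counter(tokens)
    n_total = sum(counts.values())          # N: total tokens in the category
    v1 = sum(1 for c in counts.values() if c == 1)  # V1: hapax count
    return v1 / n_total if n_total else 0.0

# Hypothetical sample of English "-ness" tokens from a small corpus:
ness_tokens = (["happiness"] * 5 + ["darkness"] * 3
               + ["mindfulness", "crispness", "greenness"])  # 3 hapaxes, 11 tokens
print(baayen_productivity(ness_tokens))  # 3/11 ≈ 0.2727
```

A high P indicates that new formations with the affix are still being coined (many hapaxes relative to total usage), whereas a low P suggests a largely lexicalized, closed category.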
Relevance theory is a cognitive approach to pragmatics which starts from two broadly Gricean assumptions: (a) that much human communication, both verbal and non-verbal, involves the overt expression and inferential recognition of intentions, and (b) that in inferring these intentions, the addressee presumes that the communicator’s behavior will meet certain standards, which for Grice are based on a Cooperative Principle and maxims, and for relevance theory are derived from the assumption that, as a result of constant selection pressures in the course of human evolution, both cognition and communication are relevance-oriented. Relevance is defined in terms of cognitive (or contextual) effects and processing effort: other things being equal, the greater the cognitive effects and the smaller the processing effort, the greater the relevance.
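The comparative definition just given is deliberately non-quantitative in relevance theory itself, but the trade-off it states can be sketched schematically (this formalization is an illustrative assumption, not part of the theory's own apparatus):

```latex
% Schematic only: for an input I processed in a context C,
% relevance increases with cognitive effects E and decreases
% with processing effort F, other things being equal.
R(I, C) \;\uparrow\; \text{as } E \uparrow,
\qquad
R(I, C) \;\downarrow\; \text{as } F \uparrow,
% sometimes compressed into the informal ratio
\qquad
R(I, C) \;\propto\; \frac{E}{F}.
```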
A long-standing aim of relevance theory has been to show that building an adequate theory of communication involves going beyond Grice’s notion of speaker’s meaning. Another is to provide a conceptually unified account of how a much broader variety of communicative acts than Grice was concerned with—including cases of both “showing that” and “telling that”—are understood. The resulting pragmatic theory differs from Grice’s in several respects. It sees explicit communication as much richer and more inferential than Grice thought, with encoded sentence meanings providing no more than clues to the speaker’s intentions. It rejects the close link that Grice saw between implicit communication and (real or apparent) maxim violation, showing in particular how figurative utterances might arise naturally and spontaneously in the course of communication. It offers an account of vagueness or indeterminacy in communication, which is often abstracted away from in more formally oriented frameworks. It investigates the role of context in comprehension, and shows how tentative hypotheses about the intended combination of explicit content, contextual assumptions, and implicatures might be refined and mutually adjusted in the course of the comprehension process in order to satisfy expectations of relevance.
Relevance theory treats the borderline between semantics and pragmatics as co-extensive with the borderline between (linguistic) decoding and (pragmatic) inference. It sees encoded sentence meanings as typically fragmentary and incomplete, and as having to undergo inferential enrichment or elaboration in order to yield fully propositional forms. It reanalyzes Grice’s conventional implicatures—which he saw as semantic but non-truth-conditional aspects of the meaning of words like “but” and “so”—as encoding procedural information with dedicated pragmatic or more broadly cognitive functions, and extends the notion of procedural meaning to a range of further items such as pronouns, discourse particles, mood indicators, and affective intonation.