Interest in the linguistics of humor is widespread and dates back to classical times. Several theoretical models have been proposed to describe and explain the function of humor in language. The most widely adopted one, the Semantic Script Theory of Humor, was presented by Victor Raskin in 1985. Its expansion, which incorporates a broader gamut of information, is known as the General Theory of Verbal Humor. Other approaches are emerging, especially in cognitive and corpus linguistics. Within applied linguistics, the predominant approach is the analysis of conversation and discourse, with a focus on the disparate functions of humor in conversation. Speakers may use humor pro-socially, to build in-group solidarity, or anti-socially, to exclude and denigrate the targets of the humor. Most of the research has focused on how humor is co-constructed and used among friends, and how speakers support it. Increasingly, corpus-supported research is beginning to reshape the field, introducing quantitative methods as well as multimodal data and analyses. Overall, the linguistics of humor is a dynamic and rapidly changing field.
M. Teresa Espinal and Jaume Mateu
Idioms, conceived as fixed multi-word expressions that conceptually encode non-compositional meaning, are linguistic units that raise a number of questions relevant in the study of language and mind (e.g., whether they are stored in the lexicon or in memory, whether they have internal or external syntax similar to other expressions of the language, whether their conventional use is parallel to their non-compositional meaning, whether they are processed in similar ways to regular compositional expressions of the language, etc.). Idioms show similarities to and differences from other sorts of formulaic expressions; the linguistic literature has characterized the main types of idioms and the dimensions along which idiomaticity lies. Syntactically, idioms manifest a set of syntactic properties, as well as a number of constraints that account for their internal and external structure. Semantically, idioms present an interesting behavior with respect to a set of semantic properties that account for their meaning (i.e., conventionality, compositionality, and transparency, as well as aspectuality, referentiality, thematic roles, etc.). The study of idioms has been approached from lexicographic and computational, as well as from psycholinguistic and neurolinguistic perspectives.
Bo Xue and Haihua Pan
Negative polarity items (NPIs) are well known for their limited distribution, that is, their restriction to negation-implicating contexts, a phenomenon that has attracted much attention in generative linguistics since Klima's seminal work. There is a large amount of research on NPI licensing that aims to (a) identify the range of potential licensors of NPIs, also known as Ladusaw's licensor question; (b) ascertain the semantic/logical properties shared by these licensors; (c) elucidate the licensing dependency, for example, whether the dependency between an NPI and its licensor involves a structural requirement like c-command; and (d) shed light on the nature of polarity-sensitive items in natural languages and, more generally, the architectural organization of the syntax–semantics and semantics–pragmatics interfaces. Theories of NPI licensing abound, ranging from Klima's affectivity to the influential Fauconnier–Ladusaw downward-entailingness (DE) as well as some weakened versions of Ladusaw's licensing condition like (non-)veridicality and Strawson downward-entailingness. These theories are primarily concerned with pinpointing the logical properties of NPI licensors and elucidating the dependency between a licensor and its licensee. Broadly speaking, NPIs are assumed to be in the scope of some negative element. On the licensor side, various logical properties have been identified, resulting in a more fine-grained distinction between different negative strengths, including downward-entailing, anti-additive, and anti-morphic. Moreover, a diverse class of NPIs has been uncovered and differentiated, including English weak NPIs like any/ever, for which simple DE would suffice, and stronger NPIs like in years/the minimizer sleep a wink, which are more selective and correlate with a stronger negative strength, namely, anti-additivity.
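The negative strengths mentioned above form a standard hierarchy (due to Zwarts). As a sketch, their defining properties can be stated as follows, where f is a (Boolean) function over propositions or predicates:

```latex
% Downward-entailing (DE): f reverses entailment.
A \subseteq B \;\Rightarrow\; f(B) \subseteq f(A)

% Anti-additive: DE plus one half of De Morgan's laws.
f(A \vee B) \;\Leftrightarrow\; f(A) \wedge f(B)

% Anti-morphic: anti-additive plus the other De Morgan law.
f(A \wedge B) \;\Leftrightarrow\; f(A) \vee f(B)
```

For illustration, no student is anti-additive (No student smoked or drank is equivalent to No student smoked and no student drank), whereas at most three students is merely DE: the right-to-left direction of the biconditional fails, since at most three may have smoked and at most three may have drunk while more than three smoked or drank. On the account summarized in this abstract, this difference is what allows weak NPIs like any under both quantifiers while restricting a minimizer such as sleep a wink to anti-additive licensors.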
Further theoretical developments of NPI licensing shift to the nature of NPIs and their communicative roles in a discourse, unearthing important properties like domain-widening in need of semantic strengthening (with its recent implementation in the alternative-and-exhaustification framework), which advances the understanding of their polarity-sensitive profiles. Chinese NPIs include renhe-phrases (similar to English any), wh-items, and minimizers, all of which are also confined to certain negative semantic contexts and are not acceptable if they occur in simple positive episodic sentences without Chinese dou ‘all’. Descriptively, among canonical affective contexts (those including sentential negation, yes–no/wh questions, intensional verbs, if-clauses, imperatives, modals, adversative emotive predicates, adverb dou ‘all’, and the exclusive particle zhiyou ‘only’), renhe-phrases and wh-items can be licensed by sentential negation, yes–no questions, intensional verbs, if-clauses, imperatives, modals, and the left restrictor of dou ‘all’, whereas minimizers like yi-fen qian ‘one penny’ display a more constrained distribution and can only be licensed by sentential negation, yes–no rhetorical questions, concessive if-clauses, and the left restrictor of dou. There are at least two research questions worth exploring in the future. First, the affective contexts licensing Chinese renhe-phrases, wh-items, and minimizers are not totally the same, with minimizers being more constrained in their distribution. What could explain the unique behavior of Chinese minimizers? Why are these minimizers deviant in modal contexts and in need of likelihood reasoning? Second, the affective contexts licensing Chinese NPIs do not totally overlap with those licensing English any. What could explain the divergent distributions of NPIs cross-linguistically?