Article
The Acquisition of Color Words
Katie Wagner and David Barner
Human experience of color results from a complex interplay of perceptual and linguistic systems. At the lowest level of perception, the human visual system transforms the visible-light portion of the electromagnetic spectrum into a rich, continuous three-dimensional experience of color. Despite our ability to perceptually discriminate millions of different color shades, most languages categorize color into a limited number of discrete categories. While the meanings of color words are constrained by perception, perception does not fully define them. Once color words are acquired, they may in turn influence our memory and processing speed for color, although it is unlikely that language influences the lowest levels of color perception.
One approach to examining the relationship between perception and language in forming our experience of color is to study children as they acquire color language. Children produce color words in speech for many months before acquiring adult meanings for them. Research in this area has focused on whether children’s difficulties stem from (a) an inability to identify color properties as likely candidates for word meanings or (b) the inductive task of learning language-specific color word boundaries. Lending plausibility to the first account, there is evidence that children more readily attend to object traits like shape, rather than color, as likely candidates for word meanings. However, recent evidence shows that children have meanings for some color words before they begin to produce them in speech, indicating that they may in fact successfully identify color as a candidate for word meaning early in the color word learning process. There is also evidence that prelinguistic infants, like adults, perceive color categorically. While these perceptual categories likely constrain the meanings that children consider, they cannot fully define color word meanings because languages vary in both the number and location of color word boundaries. Recent evidence suggests that the delay in color word acquisition stems primarily from an inductive process of refining these boundaries.
Article
Connectionism in Linguistic Theory
Xiaowei Zhao
Connectionism is an important theoretical framework for the study of human cognition and behavior. Also known as Parallel Distributed Processing (PDP) or Artificial Neural Networks (ANN), connectionism holds that the learning, representation, and processing of information in the mind are parallel, distributed, and interactive in nature. It argues that human cognition emerges from large networks of interactive processing units operating simultaneously. Inspired by findings from neuroscience and artificial intelligence, connectionism is a powerful computational tool, and it has had a profound impact on many areas of research, including linguistics. Since the beginning of connectionism, many connectionist models have been developed to account for a wide range of important linguistic phenomena observed in monolingual research, such as speech perception, speech production, semantic representation, and early lexical development in children. Recently, the application of connectionism to bilingual research has also gathered momentum. Connectionist models are often precise in their specification of modeling parameters and flexible in the manipulation of variables relevant to theoretical questions; they can therefore provide significant advantages in testing the mechanisms underlying language processes.
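As a purely illustrative sketch, not a model described in the article, the short Python/NumPy program below shows what ‘parallel, distributed’ processing amounts to in practice: a small network of simple units whose only knowledge is a set of connection weights, gradually adjusted by error-driven learning so that invented word-form patterns come to activate invented meaning patterns. The task, architecture, and all parameters are assumptions chosen for brevity.

import numpy as np

# Purely illustrative toy network (an assumption for exposition, not a model
# from the article): four invented "word form" patterns are mapped to four
# invented "meaning" patterns. All knowledge ends up distributed over the
# connection weights between simple units.
rng = np.random.default_rng(1)

forms = np.array([[1, 1, 0, 0],        # hypothetical distributed form features
                  [0, 1, 1, 0],
                  [0, 0, 1, 1],
                  [1, 0, 1, 1]], dtype=float)
meanings = np.array([[1, 0, 0],        # hypothetical distributed meaning features
                     [0, 1, 0],
                     [0, 0, 1],
                     [1, 1, 0]], dtype=float)

W1 = rng.normal(scale=0.5, size=(4, 8))    # form units  -> hidden units
W2 = rng.normal(scale=0.5, size=(8, 3))    # hidden units -> meaning units

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10000):
    hidden = sigmoid(forms @ W1)           # all hidden units update in parallel
    output = sigmoid(hidden @ W2)          # all output units update in parallel
    d_out = (output - meanings) * output * (1 - output)   # error-driven signal
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ d_out            # gradual adjustment of connection weights
    W1 -= lr * forms.T @ d_hid

# After training, each form pattern (approximately) activates its meaning pattern.
print(np.round(sigmoid(sigmoid(forms @ W1) @ W2), 2))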
Article
Experimental Pragmatics
Florian Schwarz
While both pragmatic theory and experimental investigations of language using psycholinguistic methods have been well-established subfields in the language sciences for a long time, the field of Experimental Pragmatics, in which such methods are applied to pragmatic phenomena, has only fully taken shape since the early 2000s. By now, however, it has become a major and lively area of ongoing research, with dedicated conferences, workshops, and collaborative grant projects bringing together researchers with linguistic, psychological, and computational approaches across disciplines. Its scope includes virtually all meaning-related phenomena in natural language comprehension and production, with a particular focus on the inferences that utterances give rise to beyond what is literally expressed by the linguistic material.
One general area that has been explored in great depth consists of investigations of various ‘ingredients’ of meaning. A major aim has been to develop experimental methodologies that help classify different aspects of meaning, such as implicatures and presuppositions as compared to basic truth-conditional meaning, and to capture their properties more thoroughly using more extensive empirical data. The study of scalar implicatures (e.g., the inference that some but not all students left, based on the sentence Some students left) has served as a catalyst of sorts in this area, and scalar implicatures remain one of the most well-studied phenomena in Experimental Pragmatics to date. Much recent work has expanded the general approach to other aspects of meaning, including presuppositions and conventional implicatures, as well as to nonliteral meaning such as irony, metonymy, and metaphor.
The study of reference constitutes another core area of research in Experimental Pragmatics, and it has a more extensive history of precursors within psycholinguistics proper. Reference resolution, too, commonly requires drawing inferences beyond what is conventionally conveyed by the linguistic material at issue; the key concern is how comprehenders grasp the referential intentions of a speaker based on the referential expressions used in a given context, and how speakers choose appropriate expressions in the first place. Pronouns, demonstratives, and definite descriptions are the main expressions of interest, with special attention to their relation to both intra- and extralinguistic context. Furthermore, one central line of research is concerned with speakers’ and listeners’ capacity to keep track of both their own private perspective and the shared perspective of the interlocutors in actual interaction.
Given the rapid ongoing growth of the field, there are many additional topical areas that cannot all be covered here; the final section of the article briefly notes further current and future directions of research.
Article
Iconicity
Irit Meir and Oksana Tkachman
Iconicity is a relationship of resemblance or similarity between the two aspects of a sign: its form and its meaning. An iconic sign is one whose form resembles its meaning in some way. The opposite of iconicity is arbitrariness. In an arbitrary sign, the association between form and meaning is based solely on convention; there is nothing in the form of the sign that resembles aspects of its meaning. The Hindu-Arabic numerals 1, 2, 3 are arbitrary, because their current form does not correlate with any aspect of their meaning. In contrast, the Roman numerals I, II, III are iconic, because the number of occurrences of the sign I correlates with the quantity that the numerals represent. Because iconicity has to do with the properties of signs in general and not only those of linguistic signs, it plays an important role in the field of semiotics, the study of signs and signaling. However, language is the most pervasive symbolic communicative system used by humans, and the notion of iconicity is important for characterizing the linguistic sign and linguistic systems. Iconicity is also central to the study of literary uses of language, such as prose and poetry.
There are various types of iconicity, since the form of a sign may resemble aspects of its meaning in several ways: it may create a mental image of the concept (imagic iconicity), or its structure and the arrangement of its elements may resemble the structural relationship between components of the concept represented (diagrammatic iconicity). Examples of the first type are the word cuckoo, whose sounds resemble the call of the bird, and a sign such as RABBIT in Israeli Sign Language, whose form (the hands representing the rabbit's long ears) resembles a visual property of that animal. An example of diagrammatic iconicity is vēnī, vīdī, vīcī, where the order of clauses in a discourse is understood as reflecting the sequence of events in the world.
Iconicity is found on all linguistic levels: phonology, morphology, syntax, semantics, and discourse. It is found both in spoken languages and in sign languages. However, sign languages, because of the visual-gestural modality through which they are transmitted, are much richer in iconic devices and therefore offer a wide array of topics and perspectives for investigating iconicity and its interaction with language structure.
Article
Ideophones (Mimetics, Expressives)
Kimi Akita and Mark Dingemanse
Ideophones, also termed mimetics or expressives, are marked words that depict sensory imagery. They are found in many of the world’s languages, and sizable lexical classes of ideophones are particularly well-documented in the languages of Asia, Africa, and the Americas. Ideophones are not limited to onomatopoeia like meow and smack but cover a wide range of sensory domains, such as manner of motion (e.g., plisti plasta ‘splish-splash’ in Basque), texture (e.g., tsaklii ‘rough’ in Ewe), and psychological states (e.g., wakuwaku ‘excited’ in Japanese). Across languages, ideophones stand out as marked words due to special phonotactics, expressive morphology including certain types of reduplication, and relative syntactic independence, in addition to production features like prosodic foregrounding and common co-occurrence with iconic gestures.
Three intertwined issues have been repeatedly debated in the century-long literature on ideophones. (a) Definition: Isolated descriptive traditions and cross-linguistic variation have sometimes obscured a typologically unified view of ideophones, but recent advances show the promise of a prototype definition of ideophones as conventionalized depictions in speech, with room for language-specific nuances. (b) Integration: The variable integration of ideophones across linguistic levels reveals an interaction between expressiveness and grammatical integration, and has important implications for how to conceive of dependencies between linguistic systems. (c) Iconicity: Ideophones form a natural laboratory for the study of iconic form-meaning associations in natural languages, and converging evidence from corpus and experimental studies suggests important developmental, evolutionary, and communicative advantages of ideophones.
Article
Iterative Meaning in Mandarin Chinese: Linguistic Factors and Its Processing
Huei-ling Lai and Yao-Ying Lai
Sentential meaning that emerges compositionally is not always transparent as a one-to-one mapping from syntactic structure to semantic representation; oftentimes the meaning is underspecified (morphosyntactically unsupported), that is, not explicitly conveyed via overt linguistic devices, and the compositional meaning is only arrived at during comprehension. The associated issues are explored by examining linguistic factors that modulate the construal of underspecified iterative meaning in Mandarin Chinese (MC). Here the factors include the lexical aspect of verbs, the interval lengths denoted by post-verbal durative adverbials, and the boundary specificity denoted by preverbal versus post-verbal temporal adverbials. The composition of a punctual verb (e.g., jump, poke) with a durative temporal adverbial, as in Zhangsan tiao-le shi fenzhong (Zhangsan jump-LE ten minute, ‘Zhangsan jumped for ten minutes’), engenders an iterative meaning, which is morphosyntactically absent yet fully understood by comprehenders. By contrast, the counterpart involving a durative verb (e.g., run, swim), as in Zhangsan pao-le shi fenzhong (Zhangsan run-LE ten minute, ‘Zhangsan ran for ten minutes’), engenders a continuous reading despite an identical syntactic structure. Psycholinguistically, processing such underspecified meaning in real time has been shown to require greater effort than processing its transparent counterpart. This phenomenon has been attested cross-linguistically, yet how it is manifested in MC, a tenseless language, remains understudied. In addition, durative temporal adverbials like yizhi/buduandi ‘continuously,’ which appear preverbally in MC, also engender an iterative meaning when composed with a punctual verb, as in Zhangsan yizhi/buduandi tiao (Zhangsan continuously jump, ‘Zhangsan jumped continuously’). Crucially, unlike the post-verbal adverbials, which encode specific boundaries for the denoted intervals, these preverbal adverbials refer to continuous time spans without specific endpoints. This difference in boundary specificity between the two adverbial types, both of which are durative, is hypothesized to modulate the processing profiles of aspectual comprehension.
Results of an online (timed) questionnaire showed (a) an effect of boundary specificity: sentences with post-verbal adverbials that encode [+specific boundary] were rated lower in the naturalness-rating task and induced longer response times (RTs) in iterativity judgements than those with preverbal adverbials that encode [−specific boundary]; and (b) in composition with post-verbal [+specific boundary] adverbials, sentences involving durative verbs elicited lower ratings and longer RTs in iterativity judgements than their counterparts involving punctual verbs. These findings suggest that the comprehension of underspecified iterative meaning is modulated both by cross-linguistically similar parameters and by language-specific systems of temporal reference, with MC exhibiting a typological difference in processing profiles. Overall, the patterns are consistent with the Context-Dependence approach to semantic underspecification: comprehenders compute the ultimate reading (iterative versus continuous) by taking both sentential and extra-sentential information into consideration in a given context.
Article
Lexical Acquisition and the Structure of the Mental Lexicon
Eve V. Clark
The words and word-parts children acquire at different stages offer insights into how the mental lexicon might be organized. Children first identify ‘words,’ recurring sequences of sounds, in the speech stream, attach some meaning to them, and, later, analyze such words further into parts, namely stems and affixes. These are the elements they store in memory in order to recognize them on subsequent occasions. They also serve as target models when children try to produce those words themselves. When they coin words, they make use of bare stems, combine certain stems with each other, and sometimes add affixes as well. The options they choose depend on how much they need to add to coin a new word, which familiar elements they can draw on, and how productive that option is in the language. Children’s uses of stems and affixes in coining new words also reveal that they must be relying on one representation in comprehension and a different representation in production. For comprehension, they need to store information about the acoustic properties of a word, taking into account different occasions, different speakers, and different dialects, not to mention second-language speakers. For production, they need to work out which articulatory plan to follow in order to reproduce the target word. And they take time to get their production of a word aligned with the representation they have stored for comprehension. In fact, there is a general asymmetry here, with comprehension being ahead of production for children, and also being far more extensive than production, for both children and adults. Finally, as children add more words to their repertoires, they organize and reorganize their vocabulary into semantic domains. In doing this, they make use of pragmatic directions from adults that help them link related words through a variety of semantic relations.
Article
Number in Language
Paolo Acquaviva
Number is the category through which languages express information about the individuality, numerosity, and part structure of what we speak about. As a linguistic category it has a morphological, a morphosyntactic, and a semantic dimension, which are variously interrelated across language systems. Number marking can apply to a more or less restricted part of the lexicon of a language, being most likely on personal pronouns and human/animate nouns, and least likely on inanimate nouns. In the core contrast, number allows languages to refer to ‘many’ through the description of ‘one’; the sets referred to consist of tokens of the same type, but also of similar types, or of elements pragmatically associated with one named individual. In other cases, number opposes a reading of ‘one’ to a reading as ‘not one,’ which includes masses; when the ‘one’ reading is morphologically derived from the ‘not one,’ it is called a singulative. It is rare for a language to have no linguistic number at all, since a ‘one–many’ opposition is typically implied at least in pronouns, where the category of person discriminates the speaker as ‘one.’ Beyond pronouns, number is typically a property of nouns and/or determiners, although it can appear on other word classes by agreement. Verbs can also express part-structural properties of events, but this ‘verbal number’ is not isomorphic to nominal number marking. Many languages allow a variable proportion of their nominals to appear in a ‘general’ form, which expresses no number information. The main values of number-marked elements are singular and plural; dual and a much rarer trial also exist. Many languages also distinguish forms interpreted as paucals or as greater plurals, respectively, for small and usually cohesive groups and for generically large ones. A broad range of exponence patterns can express these contrasts, depending on the morphological profile of a language, from word inflections to freestanding or clitic forms; certain choices of classifiers also express readings that can be described as ‘plural,’ at least in certain interpretations. Classifiers can co-occur with other plurality markers, but not when these are obligatory as expressions of an inflectional paradigm, although this is debated, partly because the notion of classifier itself subsumes distinct phenomena. Many languages, especially those with classifiers, encode number not as an inflectional category, but through word-formation operations that express readings associated with plurality, including large size. Current research on number concerns all of its morphological, morphosyntactic, and semantic dimensions, and in particular their interrelations, as part of the study of natural language typology and of the formal analysis of nominal phrases. The grammatical and semantic functions of number and plurality are particularly prominent in formal semantics and in syntactic theory.
Article
Polysemy
Agustín Vicente and Ingrid L. Falkum
Polysemy is the phenomenon whereby a single word form is associated with two or more related senses. It is distinguished from monosemy, where one word form is associated with a single meaning, and from homonymy, where a single word form is associated with two or more unrelated meanings. Although the distinctions between polysemy, monosemy, and homonymy may seem clear at an intuitive level, they have proven difficult to draw in practice.
Polysemy proliferates in natural language: Virtually every word is polysemous to some extent. Still, the phenomenon has been largely ignored in the mainstream linguistics literature and in related disciplines such as philosophy of language. However, polysemy is a topic of relevance to linguistic and philosophical debates regarding lexical meaning representation, compositional semantics, and the semantics–pragmatics divide.
Early accounts treated polysemy in terms of sense enumeration: each sense of a polysemous expression is represented individually in the lexicon, so that polysemy and homonymy are treated on a par. This approach has been strongly criticized on both theoretical and empirical grounds. Since at least the 1990s, most researchers have converged on the hypothesis that the senses of at least many polysemous expressions derive from a single meaning representation, though the status of this representation is a matter of lively debate: Are the lexical representations of polysemous expressions informationally poor and underspecified with respect to their different senses? Or do they have to be informationally rich in order to store and generate all these polysemous senses?
Alternatively, senses might be computed from a literal, primary meaning via semantic or pragmatic mechanisms such as coercion, modulation, or ad hoc concept construction (including metaphorical and metonymic extension). These mechanisms also appear to play a role in explaining how polysemy arises and how it is implicated in lexical semantic change.