1-20 of 51 Results for: Psycholinguistics

Article

Acceptability Judgments  

James Myers

Acceptability judgments are reports of a speaker’s or signer’s subjective sense of the well-formedness, nativeness, or naturalness of (novel) linguistic forms. Their value comes in providing data about the nature of the human capacity to generalize beyond linguistic forms previously encountered in language comprehension. For this reason, acceptability judgments are often also called grammaticality judgments (particularly in syntax), although unlike the theory-dependent notion of grammaticality, acceptability is accessible to consciousness. While acceptability judgments have been used to test grammatical claims since ancient times, they became particularly prominent with the birth of generative syntax. Today they are also widely used in other linguistic schools (e.g., cognitive linguistics) and other linguistic domains (pragmatics, semantics, morphology, and phonology), and have been applied in a typologically diverse range of languages. As psychological responses to linguistic stimuli, acceptability judgments are experimental data. Their value thus depends on the validity of the experimental procedures, which, in their traditional version (where theoreticians elicit judgments from themselves or a few colleagues), have been criticized as overly informal and biased. Traditional responses to such criticisms have been supplemented in recent years by laboratory experiments that use formal psycholinguistic methods to collect and quantify judgments from nonlinguists under controlled conditions. Such formal experiments have played an increasingly influential role in theoretical linguistics, being used to justify subtle judgment claims or new grammatical models that incorporate gradience or lexical influences. They have also been used to probe the cognitive processes giving rise to the sense of acceptability itself, the central finding being that acceptability reflects processing ease. Exploring what this finding means will require not only further empirical work on the acceptability judgment process, but also theoretical work on the nature of grammar.

Article

Acoustic Theories of Speech Perception  

Melissa Redford and Melissa Baese-Berk

Acoustic theories assume that speech perception begins with an acoustic signal transformed by auditory processing. In classical acoustic theory, this assumption entails perceptual primitives that are akin to those identified in the spectral analyses of speech. The research objective is to link these primitives with phonological units of traditional descriptive linguistics via sound categories and then to understand how these units/categories are bound together in time to recognize words. Achieving this objective is challenging because the signal is replete with variation, making the mapping of signal to sound category nontrivial. Research that grapples with the mapping problem has led to many basic findings about speech perception, including the importance of cue redundancy to category identification and of differential cue weighting to category formation. Research that grapples with the related problem of binding categories into words for speech processing motivates current neuropsychological work on speech perception. The central focus on the mapping problem in classical theory has also led to an alternative type of acoustic theory, namely, exemplar-based theory. According to this type of acoustic theory, variability is critical for processing talker-specific information during speech processing. The problems associated with mapping acoustic cues to sound categories are not addressed because exemplar-based theories assume that perceptual traces of whole words are perceptual primitives. Smaller units of speech sound representation, as well as the phonology as a whole, are emergent from the word-based representations. Yet, like classical acoustic theories, exemplar-based theories assume that production is mediated by a phonology that has no inherent motor information. The presumed disconnect between acoustic and motor information during perceptual processing distinguishes acoustic theories as a class from other theories of speech perception.

Article

The Acquisition of Color Words  

Katie Wagner and David Barner

Human experience of color results from a complex interplay of perceptual and linguistic systems. At the lowest level of perception, the human visual system transforms the visible light portion of the electromagnetic spectrum into a rich, continuous three-dimensional experience of color. Despite our ability to perceptually discriminate millions of different color shades, most languages categorize color into a number of discrete color categories. While the meanings of color words are constrained by perception, perception does not fully define them. Once color words are acquired, they may in turn influence our memory and processing speed for color, although it is unlikely that language influences the lowest levels of color perception. One approach to examining the relationship between perception and language in forming our experience of color is to study children as they acquire color language. Children produce color words in speech for many months before acquiring adult meanings for color words. Research in this area has focused on whether children’s difficulties stem from (a) an inability to identify color properties as a likely candidate for word meanings, or alternatively (b) inductive learning of language-specific color word boundaries. Lending plausibility to the first account, there is evidence that children more readily attend to object traits like shape, rather than color, as likely candidates for word meanings. However, recent evidence has found that children have meanings for some color words before they begin to produce them in speech, indicating that in fact, they may be able to successfully identify color as a candidate for word meaning early in the color word learning process. There is also evidence that prelinguistic infants, like adults, perceive color categorically. While these perceptual categories likely constrain the meanings that children consider, they cannot fully define color word meanings because languages vary in both the number and location of color word boundaries. Recent evidence suggests that the delay in color word acquisition primarily stems from an inductive process of refining these boundaries.

Article

Acquisition of Inflection in Romance Languages  

Christophe Parisse

Inflection is present in all Romance languages, even if at times it can be replaced by the use of clitic elements. It is therefore a crucial feature of the language for children to acquire. The acquisition of inflected forms was studied in the nominal, verbal, and adjectival systems because inflection is present from the very first forms produced by children. Data are presented from the literature for six languages: Portuguese, Spanish, Catalan, French, Italian, and Romanian. For all these languages, open-access corpus data are available on the CHILDES website, making first-hand access to actual spoken data possible. Results show that children produce correct forms very early on for the most frequent grammatical elements (by age 2 for most children, but sometimes as early as 18 months). This includes the use of nouns and determiners in both genders, and the use of verbs in the present, perfect, and imperative forms. Verbs are produced first in the third person, followed by the other persons. Nouns and verbs are used in the singular form before being used in the plural form. Other more complex grammatical forms, such as the imperfective past tense or the present conditional, emerge only later, and this is probably related to the semantics of the forms rather than their complexity. In most cases, there is correct agreement between noun and determiner, between verb and personal pronoun, or between noun and verb. Errors are infrequent, and the nature of the errors can be used as a means to study the mechanisms of language acquisition.

Article

Acquisition of Pragmatics  

Myrto Grigoroglou and Anna Papafragou

To become competent communicators, children need to learn that what a speaker means often goes beyond the literal meaning of what the speaker says. The acquisition of pragmatics as a field is the study of how children learn to bridge the gap between the semantic meaning of words and structures and the intended meaning of an utterance. Of interest is whether young children are capable of reasoning about others’ intentions and how this ability develops over time. For a long period, estimates of children’s pragmatic sophistication were mostly pessimistic: early work on a number of phenomena showed that very young communicators were egocentric, oblivious to other interlocutors’ intentions, and overall insensitive to subtle pragmatic aspects of interpretation. Recent years have seen major shifts in the study of children’s pragmatic development. Novel methods and more fine-grained theoretical approaches have led to a reconsideration of older findings on how children acquire pragmatics across a number of phenomena and have produced a wealth of new evidence and theories. Three areas that have generated a considerable body of developmental work on pragmatics include reference (the relation between words or phrases and entities in the world), implicature (a type of inferred meaning that arises when a speaker violates conversational rules), and metaphor (a case of figurative language). Findings from these three domains suggest that children actively use pragmatic reasoning to delimit potential referents for newly encountered words, can take into account the perspective of a communicative partner, and are sensitive to some aspects of implicated and metaphorical meaning. Nevertheless, children’s success with pragmatic communication is fragile and task-dependent.

Article

Audiovisual Speech Perception and the McGurk Effect  

Lawrence D. Rosenblum

Research on visual and audiovisual speech information has profoundly influenced the fields of psycholinguistics, perception psychology, and cognitive neuroscience. Visual speech findings have provided some of the most important human demonstrations of our new conception of the perceptual brain as being supremely multimodal. This “multisensory revolution” has seen a tremendous growth in research on how the senses integrate, cross-facilitate, and share their experience with one another. The ubiquity and apparent automaticity of multisensory speech have led many theorists to propose that the speech brain is agnostic with regard to sense modality: it might not know or care from which modality speech information comes. Instead, the speech function may act to extract supramodal informational patterns that are common in form across energy streams. Alternatively, other theorists have argued that any common information existent across the modalities is minimal and rudimentary, so that multisensory perception largely depends on the observer’s associative experience between the streams. From this perspective, the auditory stream is typically considered primary for the speech brain, with visual speech simply appended to its processing. If the utility of multisensory speech is a consequence of a supramodal informational coherence, then cross-sensory “integration” may be primarily a consequence of the informational input itself. If true, then one would expect to see evidence for integration occurring early in the perceptual process, as well as in a largely complete and automatic/impenetrable manner. Alternatively, if multisensory speech perception is based on associative experience between the modal streams, then no constraints on how completely or automatically the senses integrate are dictated. There is behavioral and neurophysiological research supporting both perspectives. Much of this research is based on testing the well-known McGurk effect, in which audiovisual speech information is thought to integrate to the extent that visual information can affect what listeners report hearing. However, there is now good reason to believe that the McGurk effect is not a valid test of multisensory integration. For example, there are clear cases in which responses indicate that the effect fails, while other measures suggest that integration is actually occurring. Because the McGurk effect has mistakenly been conflated with speech integration itself, interpretations of the completeness and automaticity of multisensory integration may be incorrect. Future research should use more sensitive behavioral and neurophysiological measures of cross-modal influence to examine these issues.

Article

Bilingual Language Processing  

John W. Schwieter

Psycholinguistic approaches to examining bilingualism are relatively recent applications that have emerged in the 20th century. The fact that there are more than 7,000 current languages in the world, with the majority of the population actively using more than one language, offers the opportunity to examine language and cognitive processes in a way that is more reflective of human nature. While it was once believed that exposing infants and children to more than one language could lead to negative consequences for cognition and overall language competence, current evidence shows that this is not the case. Among the many topics studied in psycholinguistics and bilingualism is whether two language systems share an integrated network and overlap in the brain, and how the mind deals with cross-linguistic activation and competition from one language when processing in another. Innovative behavioral, electrophysiological, and neuroscientific methods have significantly elucidated our understanding of these issues. The current state of the psychology and neuroscience of bilingualism finds itself at the crossroads of uncovering a holistic view of how multiple languages are processed and represented in the mind and brain. Current issues, such as exploring the cognitive and neurological consequences of bilingualism, are at the forefront of these discussions.

Article

Blocking  

Franz Rainer

Blocking can be defined as the non-occurrence of some linguistic form, whose existence could be expected on general grounds, due to the existence of a rival form. *Oxes, for example, is blocked by oxen, *stealer by thief. Although blocking is closely associated with morphology, in reality the competing “forms” can be not only morphemes or words but also syntactic units. In German, for example, the compound Rotwein ‘red wine’ blocks the phrasal unit *roter Wein (in the relevant sense), just as the phrasal unit rote Rübe ‘beetroot; lit. red beet’ blocks the compound *Rotrübe. In these examples, one crucial factor determining blocking is synonymy; speakers apparently have a deep-rooted presumption against synonyms. Whether homonymy can also lead to a similar avoidance strategy is still controversial. But even if homonymy blocking exists, it certainly is much less systematic than synonymy blocking. In all the examples mentioned above, it is a word stored in the mental lexicon that blocks a rival formation. However, besides such cases of lexical blocking, one can observe blocking among productive patterns. Dutch has three suffixes for deriving agent nouns from verbal bases, -er, -der, and -aar. Of these three suffixes, the first one is the default choice, while -der and -aar are chosen in very specific phonological environments: as Geert Booij describes in The Morphology of Dutch (2002), “the suffix -aar occurs after stems ending in a coronal sonorant consonant preceded by schwa, and -der occurs after stems ending in /r/” (p. 122). In contrast to lexical blocking, the effect of this kind of pattern blocking does not depend on words stored in the mental lexicon and their token frequency but on abstract features (in the case at hand, phonological features). Blocking was first recognized by the Indian grammarian Pāṇini in the 5th or 4th century BC, when he stated that of two competing rules, the more restricted one had precedence. In the 1960s, this insight was revived by generative grammarians under the name “Elsewhere Principle,” which is still used in several grammatical theories (Distributed Morphology and Paradigm Function Morphology, among others). Alternatively, other theories, which go back to the German linguist Hermann Paul, have tackled the phenomenon on the basis of the mental lexicon. The great advantage of this latter approach is that it can account, in a natural way, for the crucial role played by frequency. Frequency is also crucial in the most promising theory of how blocking can be learned, so-called statistical pre-emption.
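
Booij's rule for Dutch agent nouns, quoted above, is concrete enough to be written out as a selection procedure. The Python sketch below is only an illustration of that rule, using toy transcriptions in which "e" stands in for schwa; the example stems and the simplified phonological checks are assumptions for demonstration, not part of the article.

    # Minimal sketch of default vs. conditioned suffix choice (assumes toy transcriptions; 'e' = schwa)
    CORONAL_SONORANTS = {"l", "n", "r"}

    def agent_suffix(stem: str) -> str:
        # -aar after stems ending in a coronal sonorant preceded by schwa
        if len(stem) >= 2 and stem[-1] in CORONAL_SONORANTS and stem[-2] == "e":
            return stem + "aar"
        # -der after stems ending in /r/
        if stem.endswith("r"):
            return stem + "der"
        # -er is the elsewhere (default) choice
        return stem + "er"

    # Illustrative stems: wandel 'walk' -> wandelaar, huur 'rent' -> huurder, werk 'work' -> werker
    for stem in ["wandel", "huur", "werk"]:
        print(stem, "->", agent_suffix(stem))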

Article

Child Phonology  

Yvan Rose

Child phonology refers to virtually every phonetic and phonological phenomenon observable in the speech productions of children, including babbles. This includes qualitative and quantitative aspects of babbled utterances as well as all behaviors such as the deletion or modification of the sounds and syllables contained in the adult (target) forms that the child is trying to reproduce in his or her spoken utterances. This research is also increasingly concerned with issues in speech perception, a field of investigation that has traditionally followed its own course; it is only recently that the two fields have started to converge. The recent history of research on child phonology, the theoretical approaches and debates surrounding it, as well as the research methods and resources that have been employed to address these issues empirically, parallel the evolution of phonology, phonetics, and psycholinguistics as general fields of investigation. Child phonology contributes important observations, often organized in terms of developmental time periods, which can extend from the child’s earliest babbles to the stage when he or she masters the sounds, sound combinations, and suprasegmental properties of the ambient (target) language. Central debates within the field of child phonology concern the nature and origins of phonological representations as well as the ways in which they are acquired by children. Since the mid-1900s, the most central approaches to these questions have tended to fall on each side of the general divide between generative vs. functionalist (usage-based) approaches to phonology. Traditionally, generative approaches have embraced a universal stance on phonological primitives and their organization within hierarchical phonological representations, assumed to be innately available as part of the human language faculty. In contrast to this, functionalist approaches have utilized flatter (non-hierarchical) representational models and rejected nativist claims about the origin of phonological constructs. Since the beginning of the 1990s, this divide has been blurred significantly, both through the elaboration of constraint-based frameworks that incorporate phonetic evidence, from both speech perception and production, as part of accounts of phonological patterning, and through the formulation of emergentist approaches to phonological representation. Within this context, while controversies remain concerning the nature of phonological representations, debates are fueled by new outlooks on factors that might affect their emergence, including the types of learning mechanisms involved, the nature of the evidence available to the learner (e.g., perceptual, articulatory, and distributional), as well as the extent to which the learner can abstract away from this evidence. In parallel, recent advances in computer-assisted research methods and data availability, especially within the context of the PhonBank project, offer researchers unprecedented support for large-scale investigations of child language corpora. This combination of theoretical and methodological advances provides new and fertile grounds for research on child phonology and related implications for phonological theory.

Article

Children’s Acquisition of Syntactic Knowledge  

Rosalind Thornton

Children’s acquisition of language is an amazing feat. Children master the syntax, the sentence structure of their language, through exposure and interaction with caregivers and others but, notably, with no formal tuition. How children come to be in command of the syntax of their language has been a topic of vigorous debate since Chomsky argued against Skinner’s claim that language is ‘verbal behavior.’ Chomsky argued that knowledge of language cannot be learned through experience alone but is guided by a genetic component. This language component, known as ‘Universal Grammar,’ is composed of abstract linguistic knowledge and a computational system that is special to language. The computational mechanisms of Universal Grammar give even young children the capacity to form hierarchical syntactic representations for the sentences they hear and produce. The abstract knowledge of language guides children’s hypotheses as they interact with the language input in their environment, ensuring they progress toward the adult grammar. An alternative school of thought denies the existence of a dedicated language component, arguing that knowledge of syntax is learned entirely through interactions with speakers of the language. Such ‘usage-based’ linguistic theories assume that language learning employs the same learning mechanisms that are used by other cognitive systems. Usage-based accounts of language development view children’s earliest productions as rote-learned phrases that lack internal structure. Knowledge of linguistic structure emerges gradually and in a piecemeal fashion, with frequency playing a large role in the order of emergence for different syntactic structures.

Article

Chinese Character Processing  

Xufeng Duan and Zhenguang G. Cai

Chinese characters, as the basic units of the Chinese writing system, encapsulate a deep orthography that requires complex cognitive processing during recognition, naming, and handwriting. Recognition of these characters involves decoding both phonological and orthographic elements, where phonological information plays a crucial role early in the process, despite the inconsistency in orthography-to-phonology conversion. Research suggests that both holistic and sublexical processing strategies are employed, with the effectiveness of each strategy varying based on individual differences and the specific characteristics of the character, such as frequency and structure. Naming a Chinese character extends beyond recognition, necessitating the retrieval and articulation of its phonology. This process is influenced by lexical variables like frequency, age of acquisition, and semantic ambiguity, reflecting the intricate relationship between semantic and phonological information in character naming. The complexity of the Chinese orthography, lacking consistent phoneme–grapheme correspondences, necessitates additional cognitive efforts in naming, particularly for characters with ambiguous semantics or inconsistent phonology. Finally, handwriting Chinese characters involves a combination of central and peripheral processes. The central processes focus on accessing the orthographic makeup of a character, including radicals and strokes, based on phonological and/or semantic input. These orthographic components are stored in working memory and retrieved to create motor plans for the actual act of handwriting, which takes place during the peripheral processes. Various lexical and individual factors can influence these processes. In summary, understanding Chinese character processing illuminates the cognitive complexities of reading and writing in logographic systems, underscoring the interplay between phonological, orthographic, and semantic information. Future research is poised to explore the nuanced dynamics of these processes, especially in the context of evolving digital literacy practices.

Article

Computational Phonology  

Jane Chandlee and Jeffrey Heinz

Computational phonology studies the nature of the computations necessary and sufficient for characterizing phonological knowledge. As a field it is informed by the theories of computation and phonology. The computational nature of phonological knowledge is important because at a fundamental level it is about the psychological nature of memory as it pertains to phonological knowledge. Different types of phonological knowledge can be characterized as computational problems, and the solutions to these problems reveal their computational nature. In contrast to syntactic knowledge, there is clear evidence that phonological knowledge is computationally bounded to the so-called regular classes of sets and relations. These classes have multiple mathematical characterizations in terms of logic, automata, and algebra with significant implications for the nature of memory. In fact, there is evidence that phonological knowledge is bounded by particular subregular classes, with more restrictive logical, automata-theoretic, and algebraic characterizations, and thus by weaker models of memory.
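
To make the link between restrictive subregular classes and weak models of memory concrete, the sketch below checks a strictly 2-local phonotactic constraint by scanning a word from left to right while remembering only the previous symbol; the alphabet and the banned bigram are invented for illustration and are not drawn from the article.

    # Minimal sketch: a strictly 2-local constraint needs only one symbol of memory.
    def obeys_sl2(word: str, banned_bigrams: set) -> bool:
        prev = "#"  # word boundary
        for symbol in word + "#":
            if prev + symbol in banned_bigrams:
                return False
            prev = symbol  # memory holds exactly the previous symbol
        return True

    # Hypothetical toy constraint *nb: no 'n' immediately followed by 'b'
    print(obeys_sl2("anpa", {"nb"}))  # True
    print(obeys_sl2("anba", {"nb"}))  # False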

Article

Connectionism in Linguistic Theory  

Xiaowei Zhao

Connectionism is an important theoretical framework for the study of human cognition and behavior. Also known as Parallel Distributed Processing (PDP) or Artificial Neural Networks (ANN), connectionism advocates that learning, representation, and processing of information in the mind are parallel, distributed, and interactive in nature. It argues for the emergence of human cognition as the outcome of large networks of interactive processing units operating simultaneously. Inspired by findings from neural science and artificial intelligence, connectionism is a powerful computational tool, and it has had a profound impact on many areas of research, including linguistics. Since the beginning of connectionism, many connectionist models have been developed to account for a wide range of important linguistic phenomena observed in monolingual research, such as speech perception, speech production, semantic representation, and early lexical development in children. Recently, the application of connectionism to bilingual research has also gathered momentum. Connectionist models are often precise in the specification of modeling parameters and flexible in the manipulation of relevant variables to address theoretical questions; they can therefore provide significant advantages in testing the mechanisms underlying language processes.
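
As a concrete (if drastically simplified) picture of parallel distributed processing, the sketch below runs a single forward pass through a tiny two-layer network in which every unit in a layer updates simultaneously; the layer sizes and random weights are arbitrary placeholders rather than any published connectionist model.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    x = rng.random(6)                    # a distributed input pattern over 6 feature units
    W_hidden = rng.normal(size=(6, 4))   # connection weights to 4 hidden units
    W_output = rng.normal(size=(4, 3))   # connection weights to 3 output units

    hidden = sigmoid(x @ W_hidden)       # all hidden units update in parallel
    output = sigmoid(hidden @ W_output)  # activation spreads on to the output layer
    print(output)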

Article

Defectiveness in Morphology  

Antonio Fábregas

Morphological defectiveness refers to situations where one or more paradigmatic forms of a lexeme are not realized, without plausible syntactic, semantic, or phonological causes. The phenomenon tends to be associated with low-frequency lexemes and loanwords. Typically, defectiveness is gradient, lexeme-specific, and sensitive to the internal structure of paradigms. The existence of defectiveness is a challenge to acquisition models and morphological theories where there are elsewhere operations to materialize items. For this reason, defectiveness has become a rich field of research in recent years, with distinct approaches that view it as an item-specific idiosyncrasy, as an epiphenomenal result of rule competition, or as a normal morphological alternation within a paradigmatic space.

Article

Direct Perception of Speech  

Carol A. Fowler

The theory of speech perception as direct derives from a general direct-realist account of perception. A realist stance on perception is that perceiving enables occupants of an ecological niche to know its component layouts, objects, animals, and events. “Direct” perception means that perceivers are in unmediated contact with their niche (mediated neither by internally generated representations of the environment nor by inferences made on the basis of fragmentary input to the perceptual systems). Direct perception is possible because energy arrays that have been causally structured by niche components and that are available to perceivers specify (i.e., stand in 1:1 relation to) components of the niche. Typically, perception is multi-modal; that is, perception of the environment depends on specifying information present in, or even spanning, multiple energy arrays. Applied to speech perception, the theory begins with the observation that speech perception involves the same perceptual systems that, in a direct-realist theory, enable direct perception of the environment. Most notably, the auditory system supports speech perception, but also the visual system, and sometimes other perceptual systems. Perception of language forms (consonants, vowels, word forms) can be direct if the forms lawfully cause specifying patterning in the energy arrays available to perceivers. In Articulatory Phonology, the primitive language forms (constituting consonants and vowels) are linguistically significant gestures of the vocal tract, which cause patterning in air and on the face. Descriptions are provided of informational patterning in acoustic and other energy arrays. Evidence is next reviewed that speech perceivers make use of acoustic and cross-modal information about the phonetic gestures constituting consonants and vowels to perceive the gestures. Significant problems arise for the viability of a theory of direct perception of speech. One is the “inverse problem,” the difficulty of recovering vocal tract shapes or actions from acoustic input. Two other problems arise because speakers coarticulate when they speak. That is, they temporally overlap production of serially nearby consonants and vowels so that there are no discrete segments in the acoustic signal corresponding to the discrete consonants and vowels that talkers intend to convey (the “segmentation problem”), and there is massive context-sensitivity in acoustic (and optical and other modalities) patterning (the “invariance problem”). The present article suggests solutions to these problems. The article also reviews signatures of a direct mode of speech perception, including that perceivers use cross-modal speech information when it is available and exhibit various indications of perception-production linkages, such as rapid imitation and a disposition to converge in dialect with interlocutors. An underdeveloped domain within the theory concerns the very important role of longer- and shorter-term learning in speech perception. Infants develop language-specific modes of attention to acoustic speech signals (and optical information for speech), and adult listeners attune to novel dialects or foreign accents. Moreover, listeners make use of lexical knowledge and statistical properties of the language in speech perception. Some progress has been made in incorporating infant learning into a theory of direct perception of speech, but much less progress has been made in the other areas.

Article

Discriminative Learning and the Lexicon: NDL and LDL  

Yu-Ying Chuang and R. Harald Baayen

Naive discriminative learning (NDL) and linear discriminative learning (LDL) are simple computational algorithms for lexical learning and lexical processing. Both NDL and LDL assume that learning is discriminative, driven by prediction error, and that it is this error that calibrates the association strength between input and output representations. Both words’ forms and their meanings are represented by numeric vectors, and mappings between forms and meanings are set up. For comprehension, form vectors predict meaning vectors. For production, meaning vectors map onto form vectors. These mappings can be learned incrementally, approximating how children learn the words of their language. Alternatively, optimal mappings representing the end state of learning can be estimated. The NDL and LDL algorithms are incorporated in a computational theory of the mental lexicon, the ‘discriminative lexicon’. The model shows good performance with respect both to production and comprehension accuracy and to predicting aspects of lexical processing, including morphological processing, across a wide range of experiments. Since, mathematically, NDL and LDL implement multivariate multiple regression, the ‘discriminative lexicon’ provides a cognitively motivated statistical modeling approach to lexical processing.
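
Because the mappings are linear, the end-state comprehension mapping can be sketched as a least-squares solution to C F = S, where each row of C is a word's form vector and each row of S its meaning vector; production would run in the opposite direction, from meanings to forms. The toy matrices and dimensions below are placeholders for illustration, not the representations used in the published model.

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy lexicon: 10 words, 8-dimensional form vectors, 5-dimensional meaning vectors
    C = rng.random((10, 8))   # form (cue) matrix, one row per word
    S = rng.random((10, 5))   # semantic matrix, one row per word

    # Optimal comprehension mapping F: the least-squares solution to C F = S
    F, *_ = np.linalg.lstsq(C, S, rcond=None)

    S_hat = C @ F             # predicted meaning vectors from form vectors
    print(np.round(S_hat[0], 2), "vs.", np.round(S[0], 2))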

Article

Experimental Pragmatics  

Florian Schwarz

While both pragmatic theory and experimental investigations of language using psycholinguistic methods have been well-established subfields in the language sciences for a long time, the field of Experimental Pragmatics, where such methods are applied to pragmatic phenomena, has only fully taken shape since the early 2000s. By now, however, it has become a major and lively area of ongoing research, with dedicated conferences, workshops, and collaborative grant projects, bringing together researchers with linguistic, psychological, and computational approaches across disciplines. Its scope includes virtually all meaning-related phenomena in natural language comprehension and production, with a particular focus on what inferences utterances give rise to that go beyond what is literally expressed by the linguistic material. One general area that has been explored in great depth consists of investigations of various ‘ingredients’ of meaning. A major aim has been to develop experimental methodologies to help classify various aspects of meaning, such as implicatures and presuppositions as compared to basic truth-conditional meaning, and to capture their properties more thoroughly using more extensive empirical data. The study of scalar implicatures (e.g., the inference that some but not all students left based on the sentence Some students left) has served as a catalyst of sorts in this area, and they constitute one of the most well-studied phenomena in Experimental Pragmatics to date. But much recent work has expanded the general approach to other aspects of meaning, including presuppositions and conventional implicatures, as well as nonliteral meaning, such as irony, metonymy, and metaphor. The study of reference constitutes another core area of research in Experimental Pragmatics, and has a more extensive history of precursors in psycholinguistics proper. Reference resolution commonly requires drawing inferences beyond what is conventionally conveyed by the linguistic material at issue as well; the key concern is how comprehenders grasp the referential intentions of a speaker based on the referential expressions used in a given context, as well as how the speaker chooses an appropriate expression in the first place. Pronouns, demonstratives, and definite descriptions are crucial expressions of interest, with special attention to their relation to both intra- and extralinguistic context. Furthermore, one key line of research is concerned with speakers’ and listeners’ capacity to keep track of both their own private perspective and the shared perspective of the interlocutors in actual interaction. Given the rapid ongoing growth of the field, there are many additional topical areas that cannot all be covered here, but the final section of the article briefly points to further current and future areas of research.

Article

Functional Word Acquisition in Mandarin Chinese  

Xiaolu Yang

Functional categories carry little or no semantic content by themselves and contribute crucially to sentence structure. In the generative framework, they are assumed to mark and head functional projections in the basic hierarchical structure underlying each phrase or sentence. Given how closely functional categories are intertwined with grammar, child language researchers have long been attracted to their development. To a child, it is important to differentiate functional categories from lexical categories and relate each of them to the hidden hierarchical structure of the phrase or sentence. The learning of a functional category is no easy task and implies the development of different dimensions of linguistic knowledge, including the lexical realization of the functional category in the ambient language, the specific grammatical function it serves, the abstract underlying structure, and the semantic properties of the associated structure. A central issue in the acquisition of functional categories concerns whether children have access to functional categories early in language development. Differing accounts have been proposed. According to the maturational view, functional categories are absent in children’s initial grammar and mature later. In contrast to the maturational view is the continuity view, which assumes children’s continuous access to functional categories throughout language development. Cross-linguistic evidence from production and experimental studies has accumulated in support of the continuity hypothesis. Mandarin Chinese has a rich inventory of function words, though it lacks overt inflectional markers. De, aspect markers, ba, and sentence final particles are among the most commonly used function words and play a fundamental role in sentence structure in Mandarin Chinese in that they are functional categories that head various functional projections in the hierarchical structure. Acquisition studies show that these function words emerge early in development and children’s use of these function words is mostly target-like, offering evidence for the continuity view of functional categories as well as insights into child grammar in Mandarin Chinese.

Article

Hmong-Mien Languages  

David R. Mortensen

Hmong-Mien (also known as Miao-Yao) is a bipartite family of minority languages spoken primarily in China and mainland Southeast Asia. The two branches, called Hmongic and Mienic by most Western linguists and Miao and Yao by Chinese linguists, are both compact groups (phylogenetically if not geographically). Although they are uncontroversially distinct from one another, they bear a strong mutual affinity. But while their internal relationships are reasonably well established, there is no unanimity regarding their wider genetic affiliations, with many Chinese scholars insisting on Hmong-Mien membership in the Sino-Tibetan superfamily, some Western scholars suggesting a relationship to Austronesian and/or Tai-Kradai, and still others suggesting a relationship to Mon-Khmer. A plurality view appears to be that Hmong-Mien bears no special relationship to any surviving language family. Hmong-Mien languages are typical—in many respects—of the non-Sino-Tibetan languages of Southern China and mainland Southeast Asia. However, they possess a number of properties that make them stand out. Many neighboring languages are tonal, but Hmong-Mien languages are, on average, more so (in terms of the number of tones). While some other languages in the area have small-to-medium consonant inventories, Hmong-Mien languages (and especially Hmongic languages) often have very large consonant inventories with rare classes of sounds like uvulars and voiceless sonorants. Furthermore, while many of their neighbors are morphologically isolating, few language groups display as little affixation as Hmong-Mien languages. They are largely head-initial, but they deviate from this generalization in their genitive-noun constructions and their relative clauses (which vary in position and structure, sometimes even within the same language).

Article

Iconicity  

Irit Meir and Oksana Tkachman

Iconicity is a relationship of resemblance or similarity between the two aspects of a sign: its form and its meaning. An iconic sign is one whose form resembles its meaning in some way. The opposite of iconicity is arbitrariness. In an arbitrary sign, the association between form and meaning is based solely on convention; there is nothing in the form of the sign that resembles aspects of its meaning. The Hindu-Arabic numerals 1, 2, 3 are arbitrary, because their current form does not correlate with any aspect of their meaning. In contrast, the Roman numerals I, II, III are iconic, because the number of occurrences of the sign I correlates with the quantity that the numerals represent. Because iconicity has to do with the properties of signs in general and not only those of linguistic signs, it plays an important role in the field of semiotics—the study of signs and signaling. However, language is the most pervasive symbolic communicative system used by humans, and the notion of iconicity plays an important role in characterizing the linguistic sign and linguistic systems. Iconicity is also central to the study of literary uses of language, such as prose and poetry. There are various types of iconicity, since the form of a sign may resemble aspects of its meaning in several ways: it may create a mental image of the concept (imagic iconicity), or its structure and the arrangement of its elements may resemble the structural relationship between components of the concept represented (diagrammatic iconicity). An example of the first type is the word cuckoo, whose sounds resemble the call of the bird, or a sign such as RABBIT in Israeli Sign Language, whose form—the hands representing the rabbit's long ears—resembles a visual property of that animal. An example of diagrammatic iconicity is vēnī, vīdī, vīcī, where the order of clauses in a discourse is understood as reflecting the sequence of events in the world. Iconicity is found on all linguistic levels: phonology, morphology, syntax, semantics, and discourse. It is found both in spoken languages and in sign languages. However, sign languages, because of the visual-gestural modality through which they are transmitted, are much richer in iconic devices, and therefore offer a rich array of topics and perspectives for investigating iconicity, and the interaction between iconicity and language structure.