This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Linguistics. Please check back later for the full article.
The coexistence over centuries of Romance-speaking and Semitic peoples in the Mediterranean area has led to reciprocal linguistic influence and contact phenomena. All Romance languages have been involved in this process, although some of them, such as Romanian, only superficially and indirectly. As for the Semitic counterparts, the major role was played by Arabic in the Middle Ages, not only in Moorish Spain (711–1492) and in the Emirate of Sicily (831–1072), where interlinguistic contact was daily and intense, but also in Provence and in the main ports of continental Italy (Pisa, Genoa, Venice), thanks to their commercial relations with the Eastern Mediterranean and North Africa. In addition, a considerable amount of Arabic intellectual lexicon entered the Romance languages through the translations of scientific treatises, generally through the mediation of Latin. Hebrew has also made a significant contribution, both via the translations of the Bible and, from the Late Middle Ages on, through the oral interaction of the local Jewish communities with non-Jews. In both cases contact has been indirect, since biblical loanwords were mediated first by Greek and later by Latin, whereas the rest of the borrowings were transmitted by the Judeo-Romance languages. The other Semitic languages have had no influence on Romance, except for a very limited number of Amharic loanwords found in Italian as a consequence of Italian colonialism in East Africa (1882–1936).
Although Medieval Spanish and Sicilian display traces of Arabic interference at all levels, including phonology, morphology, and syntax, in most Romance languages the effects of contact with Semitic are limited to the lexicon. These comprise both direct borrowings and structural calques, as far as Arabic and—to a lesser extent—Hebrew are concerned, and pertain to several semantic fields, such as trade, anatomy, astronomy, and botany (Arabic); religious rituals and practices (Hebrew); and, more generally, daily life, especially in particular sociolects and slangs. A case apart is Maltese, a Western Arabic dialect deeply influenced by Italo-Romance (notably Sicilian) from the Middle Ages until the first half of the 20th century, which is a unique example of Romance–Semitic mixing not only at the lexical level but also at the phonological and morphosyntactic levels.
Veneeta Dayal and Deepak Alok
Natural language allows questioning into embedded clauses. One strategy for doing so involves structures like the following: [CP-1 whi [TP DP V [CP-2 … ti …]]], where a wh-phrase that thematically belongs to the embedded clause appears in the matrix scope position. A possible answer to such a question must specify values for the fronted wh-phrase. This is the extraction strategy seen in languages like English. An alternative strategy involves a structure in which there is a distinct wh-phrase in the matrix clause. It is manifested in two types of structures. One is a close analog of extraction, but for the extra wh-phrase: [CP-1 whi [TP DP V [CP-2 whj [TP…tj…]]]]. The other simply juxtaposes two questions, rather than syntactically subordinating the second one: [CP-3 [CP-1 whi [TP…]] [CP-2 whj [TP…]]]. In both versions of the second strategy, the wh-phrase in CP-1 is invariant, typically corresponding to the wh-phrase used to question propositional arguments. There is no restriction on the type or number of wh-phrases in CP-2. Possible answers must specify values for all the wh-phrases in CP-2. This strategy is variously known as scope marking, partial wh-movement, or expletive wh-questions. Both strategies can occur in the same language. German, for example, instantiates all three possibilities: extraction as well as subordinated and sequential scope marking. The scope marking strategy is also manifested in in-situ languages. Scope marking has been the subject of 30 years of research, and much is now known about its syntactic and semantic properties. Its pragmatic properties, however, are relatively understudied. The acquisition of scope marking, in relation to extraction, is another area of ongoing research. One of the reasons why scope marking has intrigued linguists is that it seems to defy central tenets about the nature of wh scope taking.
For example, it presents an apparent mismatch between the number of wh expressions in the question and the number of expressions whose values are specified in the answer. It poses a challenge for our understanding of how syntactic structure feeds semantic interpretation and how alternative strategies with similar functions relate to each other.
Scrambling is one of the most widely discussed and prominent factors affecting word order variation in Korean. Scrambling in Korean exhibits various syntactic and semantic properties that cannot be subsumed under standard A/A'-movement. Clause-external as well as clause-internal scrambling in Korean shows mixed A/A'-effects in a range of tests such as anaphor binding, weak crossover, Condition C, negative polarity item licensing, wh-licensing, and scopal interpretation. VP-internal scrambling, by contrast, is known to lack reconstruction effects, conforming to the claim that short scrambling is A-movement. Clausal scrambling, on the other hand, shows total reconstruction effects, unlike phrasal scrambling. The diverse properties of Korean scrambling have received extensive attention in the literature. Some studies argue that scrambling is a type of feature-driven A-movement with special reconstruction effects. Others argue that scrambling can be A-movement or A'-movement depending on the landing site. Yet others claim that scrambling is not standard A/A'-movement, but must be treated as cost-free movement with optional reconstruction effects. Each approach, however, faces non-trivial empirical and theoretical challenges, and further study is needed to understand the complex nature of scrambling. As the theory has developed within the Minimalist Program, a variety of proposals have also been advanced to capture the properties of scrambling without resorting to the A/A'-distinction.
Scrambling in Korean applies optionally but not randomly. It may be blocked by various factors in syntax and its interfaces in the grammar. Within the syntax proper, scrambling obeys general constraints on movement (e.g., island conditions, the left branch condition, the coordinate structure condition, the proper binding condition, and the ban on string-vacuous movement). Various semantic and pragmatic factors (e.g., specificity, presuppositionality, topic, focus) also play a crucial role in the acceptability of sentences with scrambling. Moreover, current studies show that certain instances of scrambling are filtered out at the interface due to cyclic Spell-out and linearization, which strengthens the claim that scrambling is not a free option. Data from Korean pose important challenges to base-generation approaches to scrambling, and lend further credence to the view that scrambling is an instance of movement. The exact nature of scrambling in Korean—whether it is cost-free or feature-driven—must be further investigated in future research, however. Research on Korean scrambling points toward a general theory that covers obligatory A/A'-movement as well as optional displacement with mixed semantic effects in languages with free word order.
Empirical and theoretical research on language has recently experienced a period of extensive growth. Unfortunately, in the case of the Japanese language, far fewer studies—particularly studies written in English—have addressed adult second language (L2) learners and bilingual children. As the field develops, it is increasingly important to integrate theoretical concepts and empirical research findings in the second language acquisition (SLA) of Japanese, so that the concepts and research can eventually be applied to educational practice. This article attempts to: (a) address at least some of the gaps currently existing in the literature, (b) deal with important topics to the extent possible, and (c) discuss various problems with regard to adult learners of Japanese as an L2 and English–Japanese bilingual children. Specifically, the article first examines the characteristics of the Japanese language. Tracing the history of SLA studies, it then touches on a wide spectrum of domains of linguistic knowledge (e.g., phonology and phonetics, morphology, lexicon, semantics, syntax, discourse), contexts of language use (e.g., interactive conversation, narrative), research orientations (e.g., formal linguistics, psycholinguistics, social psychology, sociolinguistics), and age groups (e.g., children, adults). Finally, by connecting past SLA research findings in English with recent and current concerns in the SLA of Japanese, with a focus on the past 10 years including corpus linguistics, this article provides the reader with an overview of the field of Japanese linguistics and its critical issues.
The study of second language phonetics is concerned with three broad and overlapping research areas: the characteristics of second language speech production and perception, the consequences of perceiving and producing nonnative speech sounds with a foreign accent, and the causes and factors that shape second language phonetics. Second language learners and bilinguals typically produce and perceive the sounds of a nonnative language in ways that are different from native speakers. These deviations from native norms can be attributed largely, but not exclusively, to the phonetic system of the native language. Non-nativelike speech perception and production may have both social consequences (e.g., stereotyping) and linguistic–communicative consequences (e.g., reduced intelligibility). Research on second language phonetics over the past ca. 30 years has resulted in a fairly good understanding of causes of nonnative speech production and perception, and these insights have to a large extent been driven by tests of the predictions of models of second language speech learning and of cross-language speech perception. It is generally accepted that the characteristics of second language speech are predominantly due to how second language learners map the sounds of the nonnative to the native language. This mapping cannot be entirely predicted from theoretical or acoustic comparisons of the sound systems of the languages involved, but has to be determined empirically through tests of perceptual assimilation. The most influential learner factors which shape how a second language is perceived and produced are the age of learning and the amount and quality of exposure to the second language. 
A very important and far-reaching finding from research on second language phonetics is that age effects are not due to neurological maturation, which would result in the attrition of phonetic learning ability, but rather to the way phonetic categories develop as a function of experience with the surrounding sound systems.
The distinction between representations and processes is central to most models of the cognitive science of language. Linguistic theory informs the types of representations assumed, and these representations are what are taken to be the targets of second language acquisition. Epistemologically, this is often taken to be knowledge, or knowledge-that. Techniques such as Grammaticality Judgment tasks are paradigmatic as we seek to gain insight into what a learner’s grammar looks like. Learners behave as if certain phonological, morphological, or syntactic strings (which may or may not be target-like) were well-formed. It is the task of the researcher to understand the nature of the knowledge that governs those well-formedness beliefs.
Traditional accounts of processing, on the other hand, look to the real-time use of language, either in production or perception, and invoke discussions of skill or knowledge-how. A range of experimental psycholinguistic techniques have been used to assess these skills: self-paced reading, eye-tracking, ERPs, priming, lexical decision, AXB discrimination, and the like. Such online measures can show us how we “do” language when it comes to activities such as production or comprehension.
There has long been a connection between linguistic theory and theories of processing, as evidenced by the work of Berwick and Weinberg (The Grammatical Basis of Linguistic Performance). The task of the parser is to assign abstract structure to a phonological, morphological, or syntactic string; structure that does not come directly labeled in the acoustic input. Processing studies of phenomena such as the garden path effect have revealed that grammaticality and processability are distinct constructs.
In some models, however, the distinction between grammar and processing is less distinct. Phillips says that “parsing is grammar,” while O’Grady builds an emergentist theory with no grammar, only processing. Bayesian models of acquisition, and indeed of knowledge, assume that the grammars we set up are governed by a principle of entropy, which governs other aspects of human behavior; knowledge and skill are combined. Exemplar models view the processing of the input as a storing of all phonetic detail that is in the environment, not storing abstract categories; the categories emerge via a process of comparing exemplars.
Linguistic theory helps us to understand the processing of input to acquire new L2 representations, and the access of those representations in real time.
From a typological perspective, the phoneme inventories of Romance languages are of medium size: For instance, most consonant systems contain between 20 and 23 phonemes. An innovation with respect to Latin is the appearance of palatal and palato-alveolar consonants such as /ɲ ʎ/ (Italian, Spanish, Portuguese), /ʃ ʒ/ (French, Portuguese), and /tʃ dʒ/ (Italian, Romanian); a few varieties (e.g., Romansh and a number of Italian dialects) also show the palatal stops /c ɟ/. Besides palatalization, a number of lenition processes (both sonorization and spirantization) have characterized the diachronic development of plosives in Western Romance languages (cf. the French word chèvre “goat” < lat. CĀPRA(M)). Diachronically, both sonorization and spirantization occurred in postvocalic position, where the latter can still be observed as an allophonic rule in present-day Spanish and Sardinian. Sonorization, on the other hand, occurs synchronically after nasals in many southern Italian dialects.
The most fundamental change in the diachrony of the Romance vowel systems derives from the demise of contrastive Latin vowel quantity. However, some Raeto-Romance and northern Italo-Romance varieties have developed new quantity contrasts. Moreover, standard Italian displays allophonic vowel lengthening in open stressed syllables (e.g., /ˈka.ne/ “dog” → [ˈkaːne]). The stressed vowel systems of most Romance varieties contain either five phonemes (Spanish, Sardinian, Sicilian) or seven phonemes (Portuguese, Catalan, Italian, Romanian). Larger vowel inventories are typical of “northern Romance” and appear in dialects of Northern Italy as well as in Raeto- and Gallo-Romance languages. The most complex vowel system is found in standard French with its 16 vowel qualities, comprising the 3 rounded front vowels /y ø œ/ and the 4 nasal vowel phonemes /ɑ̃ ɔ̃ ɛ̃ œ̃/.
Romance languages differ in their treatment of unstressed vowels. Whereas Spanish displays the same five vowels /i e a o u/ in both stressed and unstressed syllables (except for unstressed /u/ in word-final position), many southern Italian dialects have a considerably smaller inventory of unstressed vowels as opposed to their stressed vowels.
The phonotactics of most Romance languages is strongly determined by their typological character as “syllable languages.” Indeed, the phonological word only plays a minor role as very few phonological rules or phonotactic constraints refer, for example, to the word-initial position (such as Italian consonant doubling or the distribution of rhotics in Ibero-Romance), or to the word-final position (such as obstruent devoicing in Raeto-Romance). Instead, a wide range of assimilation and lenition processes apply across word boundaries in French, Italian, and Spanish.
In line with their fundamental typological nature, Romance languages tend to allow syllable structures of only moderate complexity. Inventories of syllable types are smaller than, for example, those of Germanic languages, and the segmental makeup of syllable constituents mostly follows universal preferences of sonority sequencing. Moreover, many Romance languages display a strong preference for open syllables as reflected in the token frequency of syllable types. Nevertheless, antagonistic forces aiming at profiling the prominence of stressed syllables are visible in several Romance languages as well. Within the Ibero-Romance domain, more complex syllable structures and vowel reduction processes are found in the periphery, that is, in Catalan and Portuguese. Similarly, northern Italian and Raeto-Romance dialects have experienced apocope and/or syncope of unstressed vowels, yielding marked syllable structures in terms of both constituent complexity and sonority sequencing.
Elizabeth Closs Traugott
Traditional approaches to semantic change typically focus on outcomes of meaning change and list types of change such as metaphoric and metonymic extension, broadening and narrowing, and the development of positive and negative meanings. Examples are usually considered out of context, and are lexical members of nominal and adjectival word classes.
However, language is a communicative activity that is highly dependent on context, whether that of the ongoing discourse or of social and ideological changes. Much recent work on semantic change has focused, not on results of change, but on pragmatic enabling factors for change in the flow of speech. Attention has been paid to the contributions of cognitive processes, such as analogical thinking, production of cues as to how a message is to be interpreted, and perception or interpretation of meaning, especially in grammaticalization. Mechanisms of change such as metaphorization, metonymization, and subjectification have been among topics of special interest and debate. The work has been enabled by the fine-grained approach to contextual data that electronic corpora allow.
Francis Jeffry Pelletier
Most linguists have heard of semantic compositionality. Some will have heard that it is the fundamental truth of semantics. Others will have been told that it is so thoroughly and completely wrong that it is astonishing that it is still being taught. The present article attempts to explain all this. Much of the discussion of semantic compositionality takes place in three arenas that are rather insulated from one another: (a) philosophy of mind and language, (b) formal semantics, and (c) cognitive linguistics and cognitive psychology. A truly comprehensive overview of the writings in all these areas is not possible here. However, this article does discuss some of the work that occurs in each of these areas. A bibliography of general works, and some Internet resources, will help guide the reader to some further, undiscussed works (including further material in all three categories).
Philippe Schlenker, Emmanuel Chemla, and Klaus Zuberbühler
Rich data gathered in experimental primatology in the last 40 years are beginning to benefit from analytical methods used in contemporary linguistics, especially in the area of semantics and pragmatics. These methods have started to clarify five questions: (i) What morphology and syntax, if any, do monkey calls have? (ii) What is the ‘lexical meaning’ of individual calls? (iii) How are the meanings of individual calls combined? (iv) How do calls or call sequences compete with each other when several are appropriate in a given situation? (v) How did the form and meaning of calls evolve? Four case studies from this emerging field of ‘primate linguistics’ provide initial answers, pertaining to Old World monkeys (putty-nosed monkeys, Campbell’s monkeys, and colobus monkeys) and New World monkeys (black-fronted Titi monkeys). The morphology mostly involves simple calls, but in at least one case (Campbell’s -oo) one finds a root–suffix structure, possibly with a compositional semantics. The syntax is in all clear cases simple and finite-state. With respect to meaning, nearly all cases of call concatenation can be analyzed as being semantically conjunctive. But a key question concerns the division of labor between semantics, pragmatics, and the environmental context (‘world’ knowledge and context change). An apparent case of dialectal variation in the semantics (Campbell’s krak) can arguably be analyzed away if one posits sufficiently powerful mechanisms of competition among calls, akin to scalar implicatures. An apparent case of noncompositionality (putty-nosed pyow–hack sequences) can be analyzed away if one further posits a pragmatic principle of ‘urgency’. Finally, rich Titi sequences in which two calls are re-arranged in complex ways so as to reflect information about both predator identity and location are argued not to involve a complex syntax/semantics interface, but rather a fine-grained interaction between simple call meanings and the environmental context. 
With respect to call evolution, the remarkable preservation of call form and function over millions of years should make it possible to lay the groundwork for an evolutionary monkey linguistics, illustrated with cercopithecine booms.
Diane Brentari, Jordan Fenlon, and Kearsy Cormier
Sign language phonology is the abstract grammatical component where primitive structural units are combined to create an infinite number of meaningful utterances. Although the notion of phonology is traditionally based on sound systems, phonology also includes the equivalent component of the grammar in sign languages, because it is tied to the grammatical organization, and not to particular content. This definition of phonology helps us see that the term covers all phenomena organized by constituents such as the syllable, the phonological word, and the higher-level prosodic units, as well as the structural primitives such as features, timing units, and autosegmental tiers, and it does not matter if the content is vocal or manual. Therefore, the units of sign language phonology and their phonotactics provide opportunities to observe the interaction between phonology and other components of the grammar in a different communication channel, or modality. This comparison allows us to better understand how the modality of a language influences its phonological system.
Klaus Beyer and Henning Schreiber
The Social Network Analysis approach (SNA), also known as sociometrics or actor-network analysis, investigates social structure on the basis of empirically recorded social ties between actors. It thereby aims to explain, for example, the flow of information, the spread of innovations, or even of pathogens through the network, in terms of actor roles and their relative positions in the network, based on quantitative and qualitative analyses. While the approach has a strong mathematical and statistical component, the identification of pertinent social ties also requires a strong ethnographic background. With regard to social categorization, SNA is well suited as a bootstrapping technique for highly dynamic communities and under-documented contexts. Currently, SNA is widely applied in various academic fields. For sociolinguists, it offers a framework for explaining the patterning of linguistic variation and the mechanisms of language change in a given speech community.
The social tie perspective developed around 1940 in the fields of sociology and social anthropology, based on the ideas of Simmel, and was later applied in fields such as innovation theory. In sociolinguistics, it is strongly connected to the seminal work of Lesley and James Milroy and their Belfast studies (1978, 1985). These authors demonstrate that synchronic speaker variation is not only governed by broad societal categories but is also a function of communicative interaction between speakers. They argue that the high level of resistance to linguistic change in the studied community is a result of strong and multiplex ties between the actors. Their approach has been followed by various authors, including Gal, Lippi-Green, and Labov, and discussed for a variety of settings; most of these settings, however, are located in the Western world.
The methodological advantages could make SNA the preferred framework for variation studies in Africa, given the prevailing dynamic multilingual conditions, often against the backdrop of less standardized languages. However, rather few studies using SNA as a framework have been conducted so far. This is possibly due to the quite demanding methodological requirements, the overall effort, and the often highly complex linguistic backgrounds. A further potential obstacle is the pace of theoretical development in SNA. Since its introduction to sociolinguistics, various new measures and statistical techniques have been developed by the fast-growing SNA community. Absorbing this vast amount of recent literature and testing new concepts is likewise a challenge for the application of SNA in sociolinguistics.
Nevertheless, the overall methodological effort of SNA has been much reduced by advancements in recording technology and data processing and by the introduction of SNA software (UCINET) and packages for network statistics in R (‘sna’). In the field of African sociolinguistics, a more recent version of SNA has been implemented in a study on contact-induced variation and change in Pana and Samo, two speech communities in the northwest of Burkina Faso. Moreover, further enhanced applications are under way for Senegal and Cameroon, and even more applications in the field of African languages are to be expected.
The study of sociolinguistics constitutes a vast and complex topic that has yielded an extensive and multifaceted body of scholarship. Language is fundamentally at work in how we operate as individuals, as members of various communities, and within cultures and societies. As speakers, we learn not only the structure of a given language; we also learn cultural and social norms about how to use language and what content to communicate. We use language to navigate expectations, to engage in interpersonal interaction, and to go along with or to speak out against social structures and systems.
Sociolinguistics aims to study the effects of language use within and upon societies and the reciprocal effects of social organization and social contexts on language use. In contemporary theoretical perspectives, sociolinguists view language and society as being mutually constitutive: each influences the other in ways that are inseparable and complex. Language is imbued with and carries social, cultural, and personal meaning. Through the use of linguistic markers, speakers symbolically define self and society. Simply put, language is not merely content; rather, it is something that we do, and it affects how we act and interact as social beings in the world.
Language is a social product with rich variation along individual, community, cultural, and societal lines. For this reason, context matters in sociolinguistic research. Social categories such as gender, race/ethnicity, social class, nationality, etc., are socially constructed, with considerable variation within and among categories. Attributes such as “female” or “upper class” do not have universal effects on linguistic behavior, and sociolinguists cannot assume that the most interesting linguistic differences will be between groups of speakers in any simple, binary fashion. Sociolinguistic research thus aims to explore social and linguistic diversity in order to better understand how we, as speakers, use language to inhabit and negotiate our many personal, cultural, and social identities and roles.
Spanish in Contact with South-American Languages, with Special Emphasis on Andean and Paraguayan Spanish
The effect of indigenous languages of South America on Spanish is strongest in the lexicon (especially with toponyms, zoonyms, and phytonyms) and identifiable, but much more modest, in phonetics/phonology (e.g., vowel variability, reduction, and nasalization) and morphosyntax (e.g., the different use of selected verb forms and constituent order). The phenomena called Media Lengua and Yopará differ from this picture in that the former roughly consists of a Spanish lexicon combined with Quechua grammar, while the latter is a fluid Guaraní-based system with numerous borrowings from Spanish. The effects of contact are socially and areally variable, with low-prestige, typically rural, varieties of South American Spanish showing the most significant systemic impact, while high-prestige, typically urban, varieties (including the national standards) show little more than lexical borrowings in the semantic fields mentioned. This result is hardly surprising, due to historical/sociolinguistic factors (which often led to situations of dominance and language shift) and to the typological dissimilarities between Spanish and the indigenous languages (which typically hinder borrowing, especially of morphological elements).
Speech acts are acts that can, but need not, be carried out by saying and meaning that one is doing so. Many view speech acts as the central units of communication, with phonological, morphological, syntactic, and semantic properties of an utterance serving as ways of identifying whether the speaker is making a promise, a prediction, a statement, or a threat. Some speech acts are momentous, since an appropriate authority can, for instance, declare war or sentence a defendant to prison, by saying that he or she is doing so. Speech acts are typically analyzed into two distinct components: a content dimension (corresponding to what is being said), and a force dimension (corresponding to how what is being said is being expressed). The grammatical mood of the sentence used in a speech act signals, but does not uniquely determine, the force of the speech act being performed. A special type of speech act is the performative, which makes explicit the force of the utterance. Although it has been famously claimed that performatives such as “I promise to be there on time” are neither true nor false, current scholarly consensus rejects this view. The study of so-called infelicities concerns the ways in which speech acts might either be defective (say by being insincere) or fail completely.
Recent theorizing about speech acts tends to fall either into conventionalist or intentionalist traditions: the former sees speech acts as analogous to moves in a game, with such acts being governed by rules of the form “doing A counts as doing B”; the latter eschews game-like rules and instead sees speech acts as governed by communicative intentions only. Debate also arises over the extent to which speakers can perform one speech act indirectly by performing another. Skeptics about the frequency of such events contend that many alleged indirect speech acts should be seen instead as expressions of attitudes. New developments in speech act theory also situate them in larger conversational frameworks, such as inquiries, debates, or deliberations made in the course of planning. In addition, recent scholarship has identified a type of oppression against under-represented groups as occurring through “silencing”: a speaker attempts to use a speech act to protect her autonomy, but the putative act fails due to her unjust milieu.
Kodi Weatherholtz and T. Florian Jaeger
The seeming ease with which we usually understand each other belies the complexity of the processes that underlie speech perception. One of the biggest computational challenges is that different talkers realize the same speech categories (e.g., /p/) in physically different ways. We review the mixture of processes that enable robust speech understanding across talkers despite this lack of invariance. These processes range from automatic pre-speech adjustments of the distribution of energy over acoustic frequencies (normalization) to implicit statistical learning of talker-specific properties (adaptation, perceptual recalibration) to the generalization of these patterns across groups of talkers (e.g., gender differences).
Patrice Speeter Beddor
In their conversational interactions with speakers, listeners aim to understand what a speaker is saying, that is, they aim to arrive at the linguistic message being conveyed by the input speech signal, a message that is interwoven with social and other information. Across the more than 60 years of speech perception research, a foundational issue has been to account for listeners’ ability to achieve stable linguistic percepts corresponding to the speaker’s intended message despite highly variable acoustic signals. Research has especially focused on acoustic variants attributable to the phonetic context in which a given phonological form occurs and on variants attributable to the particular speaker who produced the signal. These context- and speaker-dependent variants reveal the complex—albeit informationally rich—patterns that bombard listeners in their everyday interactions.
How do listeners deal with these variable acoustic patterns? Empirical studies that address this question provide clear evidence that perception is a malleable, dynamic, and active process. Findings show that listeners perceptually factor out, or compensate for, the variation due to context yet also use that same variation in deciding what a speaker has said. Similarly, listeners adjust, or normalize, for the variation introduced by speakers who differ in their anatomical and socio-indexical characteristics, yet listeners also use that socially structured variation to facilitate their linguistic judgments. Investigations of the time course of perception show that these perceptual accommodations occur rapidly, as the acoustic signal unfolds in real time. Thus, listeners closely attend to the phonetic details made available by different contexts and different speakers. The structured, lawful nature of this variation informs perception.
Speech perception changes over time not only in listeners’ moment-by-moment processing, but also across the life span of individuals as they acquire their native language(s), non-native languages, and new dialects and as they encounter other novel speech experiences. These listener-specific experiences contribute to individual differences in perceptual processing. However, even listeners from linguistically homogenous backgrounds differ in their attention to the various acoustic properties that simultaneously convey linguistically and socially meaningful information. The nature and source of listener-specific perceptual strategies serve as an important window on perceptual processing and on how that processing might contribute to sound change.
Theories of speech perception aim to explain how listeners interpret the input acoustic signal as linguistic forms. A theoretical account should specify the principles that underlie accurate, stable, flexible, and dynamic perception as achieved by different listeners in different contexts. Current theories differ in their conception of the nature of the information that listeners recover from the acoustic signal, with one fundamental distinction being whether the recovered information is gestural or auditory. Current approaches also differ in their conception of the nature of phonological representations in relation to speech perception, although there is increasing consensus that these representations are more detailed than the abstract, invariant representations of traditional formal phonology. Ongoing work in this area investigates how both abstract information and detailed acoustic information are stored and retrieved, and how best to integrate these types of information in a single theoretical model.
Beata Moskal and Peter W. Smith
Headedness is a pervasive phenomenon throughout different components of the grammar, which fundamentally encodes an asymmetry between two or more items, such that one is in some sense more important than the other(s). In phonology, for instance, the nucleus is the head of the syllable, not the onset or the coda, whereas in syntax, the verb is the head of a verb phrase, rather than any complements or specifiers that it combines with. It makes sense, then, to ask whether the notion of headedness applies to morphology as well; specifically, do words—complex or simplex—have heads that determine the properties of the word as a whole? Intuitively it makes sense that words have heads: a noun derived from an adjective, such as redness, can function only as a noun, and the presence of red in the structure does not confer on the whole form the ability to function as an adjective as well.
However, this question is a complex one for a variety of reasons. While it seems clear for some phenomena, such as category determination, that words have heads, there is a lot of evidence to suggest that the properties of complex words are not all derived from one morpheme, but rather that the features are gathered from potentially numerous morphemes within the same word. Furthermore, the properties that characterize heads as opposed to dependents, particularly those based on syntactic behavior, do not unambiguously pick out a single element: the tests applied to morphology at times pick out affixes, and at times pick out bases, as the head of the whole word.
Ljuba N. Veselinova
The term suppletion is used to indicate the unpredictable encoding of otherwise regular semantic or grammatical relations. Standard examples in English include the present and past tense of the verb go, cf. go vs. went, or the comparative and superlative forms of adjectives such as good or bad, cf. good vs. better vs. best, or bad vs. worse vs. worst.
The complementary distribution of different forms to express a paradigmatic contrast was already noticed in early grammatical traditions. However, the idea that a special form would supply missing forms in a paradigm was first introduced by the neogrammarian Hermann Osthoff, in his work of 1899. The concept of suppletion was consolidated in modern linguistics by Leonard Bloomfield, in 1926. Since then, the notion has been applied to both affixes and stems. In addition to the application of the concept to linguistic units of varying morpho-syntactic status, such as affixes, or to stems of different lexical classes such as, for instance, verbs, adjectives, or nouns, the student should also be prepared to encounter frequent discrepancies between uses of the concept in the theoretical literature and its application in more descriptively oriented work. There are models in which the term suppletion is restricted to exceptions to inflectional patterns only; consequently, exceptions to derivational patterns are not accepted as instantiations of the phenomenon. On such a view, the comparative and superlative degrees of adjectives will be, at best, less prototypical examples of suppletion.
Treatments of the phenomenon vary widely, to the point of being complete opposites. A strong tendency exists to regard suppletion as an anomaly, a historical artifact, and generally of little theoretical interest. A countertendency is to view the phenomenon as challenging, but nonetheless very important for adequate theory formation. Finally, there are scholars who view suppletion as a functionally motivated result of language change.
For a long time, the database on suppletion, as for many other phenomena, was restricted to Indo-European languages. With the consolidation of wider cross-linguistic research and linguistic typology since the 1990s, the database on suppletion has been substantially extended. Large-scale cross-linguistic studies have shown that the phenomenon is observed in many different languages around the globe. In addition, it appears to be a systematic cross-linguistic phenomenon in that it can be correlated with well-defined language areas, language families, specific lexemic groups, and specific slots in paradigms. The latter can be shown to follow general markedness universals. Finally, the lexemes that show suppletion tend to have special functions in both lexicon and grammar.
Ur Shlonsky and Giuliano Bocci
Syntactic cartography emerged in the 1990s as a result of the growing consensus in the field about the central role played by functional elements and by morphosyntactic features in syntax. The declared aim of this research direction is to draw maps of the structures of syntactic constituents, characterize their functional structure, and study the array and hierarchy of syntactically relevant features. Syntactic cartography has made significant empirical discoveries, and its methodology has been very influential in research in comparative syntax and morphosyntax. A central theme in current cartographic research concerns the source of the emerging featural/structural hierarchies. The idea that the functional hierarchy is not a primitive of Universal Grammar but derives from other principles does not undermine the scientific relevance of the study of cartographic structures. On the contrary, cartographic research aims to provide empirical evidence that may help answer these questions about the source of the hierarchy and shed light on how the computational principles and the requirements of the interfaces with sound and meaning interact.