Article
Number in Language
Paolo Acquaviva
Number is the category through which languages express information about the individuality, numerosity, and part structure of what we speak about. As a linguistic category it has a morphological, a morphosyntactic, and a semantic dimension, which are variously interrelated across language systems. Number marking can apply to a more or less restricted part of the lexicon of a language, being most likely on personal pronouns and human/animate nouns and least likely on inanimate nouns. In the core contrast, number allows languages to refer to ‘many’ through the description of ‘one’; the sets referred to consist of tokens of the same type, but also of similar types, or of elements pragmatically associated with one named individual. In other cases, number opposes a reading of ‘one’ to a reading of ‘not one,’ which includes masses; when the ‘one’ reading is morphologically derived from the ‘not one,’ it is called a singulative. It is rare for a language to have no linguistic number at all, since a ‘one–many’ opposition is typically implied at least in pronouns, where the category of person discriminates the speaker as ‘one.’ Beyond pronouns, number is typically a property of nouns and/or determiners, although it can appear on other word classes by agreement. Verbs can also express part-structural properties of events, but this ‘verbal number’ is not isomorphic to nominal number marking. Many languages allow a variable proportion of their nominals to appear in a ‘general’ form, which expresses no number information. The main values of number-marked elements are singular and plural; dual and a much rarer trial also exist. Many languages also distinguish forms interpreted as paucals or as greater plurals, respectively, for small and usually cohesive groups and for generically large ones. A broad range of exponence patterns can express these contrasts, depending on the morphological profile of a language, from word inflections to freestanding or clitic forms; certain choices of classifiers also express readings that can be described as ‘plural,’ at least in certain interpretations. Classifiers can co-occur with other plurality markers, but not when these are obligatory as expressions of an inflectional paradigm, although this is debated, partly because the notion of classifier itself subsumes distinct phenomena. Many languages, especially those with classifiers, encode number not as an inflectional category, but through word-formation operations that express readings associated with plurality, including large size. Current research on number concerns all its morphological, morphosyntactic, and semantic dimensions, and in particular their interrelations, as part of the study of natural language typology and of the formal analysis of nominal phrases. The grammatical and semantic functions of number and plurality are particularly prominent in formal semantics and in syntactic theory.
Article
Phonological and Morphological Aspects of Reduplication
Suzanne Urbanczyk
Reduplication is a word-formation process in which all or part of a word is repeated to convey some form of meaning. A wide range of patterns are found in terms of both the form and the meaning expressed by reduplication, making it one of the most studied phenomena in phonology and morphology. Because the form of the reduplicant always varies with the base to which it is attached, reduplication raises many issues, such as the nature of the repetition mechanism, how to represent reduplicative morphemes, and whether a unified approach can account for the full range of patterns.
Article
Psycholinguistic Approaches to Morphology: Theoretical Issues
Christina L. Gagné
Psycholinguistics is the study of how language is acquired, represented, and used by the human mind; it draws on knowledge about both language and cognitive processes. A central topic of debate in psycholinguistics concerns the balance between storage and processing. This debate is especially evident in research concerning morphology, which is the study of word structure, and several theoretical issues have arisen concerning the question of how (or whether) morphology is represented and what function morphology serves in the processing of complex words. Five theoretical approaches have emerged that differ substantially in the emphasis placed on the role of morphemic representations during the processing of morphologically complex words. The first approach minimizes processing by positing that all words, even morphologically complex ones, are stored and recognized as whole units, without the use of morphemic representations. The second approach posits that words are represented and processed in terms of morphemic units. The third approach is a mixture of the first two approaches and posits that a whole-access route and decomposition route operate in parallel. A fourth approach posits that both whole word representations and morphemic representations are used, and that these two types of information interact. A fifth approach proposes that morphology is not explicitly represented, but rather, emerges from the co-activation of orthographic/phonological representations and semantic representations. These competing approaches have been evaluated using a wide variety of empirical methods examining, for example, morphological priming, the role of constituent and word frequency, and the role of morphemic position. For the most part, the evidence points to the involvement of morphological representations during the processing of complex words. However, the specific way in which these representations are used is not yet fully known.
Article
Suppletion
Ljuba N. Veselinova
The term suppletion is used to indicate the unpredictable encoding of otherwise regular semantic or grammatical relations. Standard examples in English include the present and past tense of the verb go, cf. go vs. went, or the comparative and superlative forms of adjectives such as good or bad, cf. good vs. better vs. best, or bad vs. worse vs. worst.
The complementary distribution of different forms to express a paradigmatic contrast was noticed already in early grammatical traditions. However, the idea that a special form would supply missing forms in a paradigm was first introduced by the neogrammarian Hermann Osthoff, in his work of 1899. The concept of suppletion was consolidated in modern linguistics by Leonard Bloomfield, in 1926. Since then, the notion has been applied to both affixes and stems. Beyond its application to linguistic units of varying morphosyntactic status, such as affixes, or to stems of different lexical classes, such as verbs, adjectives, or nouns, the student of suppletion should also be prepared to encounter frequent discrepancies between uses of the concept in the theoretical literature and its application in more descriptively oriented work. There are models in which the term suppletion is restricted to exceptions to inflectional patterns only; consequently, exceptions to derivational patterns are not accepted as instantiations of the phenomenon. On that view, the comparative degrees of adjectives are, at best, less prototypical examples of suppletion.
Treatments of the phenomenon vary widely, to the point of being complete opposites. A strong tendency exists to regard suppletion as an anomaly, a historical artifact, and generally of little theoretical interest. A countertendency is to view the phenomenon as challenging, but nonetheless very important for adequate theory formation. Finally, there are scholars who view suppletion as a functionally motivated result of language change.
For a long time, the database on suppletion, as with many other phenomena, was restricted to Indo-European languages. With the consolidation of broader cross-linguistic research and linguistic typology since the 1990s, the database on suppletion has been substantially extended. Large-scale cross-linguistic studies have shown that the phenomenon is observed in many different languages around the globe. In addition, it appears as a systematic cross-linguistic phenomenon in that it can be correlated with well-defined language areas, language families, specific lexemic groups, and specific slots in paradigms. The latter can be shown to follow general markedness universals. Finally, the lexemes that show suppletion tend to have special functions in both lexicon and grammar.
Article
The Tangkic Languages of Australia: Phonology and Morphosyntax of Lardil, Kayardild, and Yukulta
Erich R. Round
The non–Pama-Nyungan Tangkic languages were spoken until recently in the southern Gulf of Carpentaria, Australia. The most extensively documented are Lardil, Kayardild, and Yukulta. Their phonology is notable for its opaque word-final deletion rules and extensive word-internal sandhi processes. The morphology contains complex relationships between sets of forms and sets of functions, due in part to major historical refunctionalizations, which have converted case markers into markers of tense and complementization and verbal suffixes into case markers. Syntactic constituency is often marked by inflectional concord, frequently resulting in affix stacking. Yukulta in particular possesses a rich set of inflection-marking possibilities for core arguments, including detransitivized configurations and an inverse system. These relate in interesting ways historically to argument marking in Lardil and Kayardild. Subordinate clauses are marked for tense across most constituents other than the subject, and such tense marking is also found in main clauses in Lardil and Kayardild, which have lost the agreement- and tense-marking second-position clitic of Yukulta. Under specific conditions of co-reference between matrix and subordinate arguments, and under certain discourse conditions, clauses may be marked, on all or almost all words, by complementization markers, in addition to inflection for case and tense.
Article
Korean Phonetics and Phonology
Young-mee Yu Cho
Due to a number of unusual and interesting properties, Korean phonetics and phonology have generated productive discussion within modern linguistic theories, starting from structuralism, moving to classical generative grammar, and more recently to post-generative frameworks such as Autosegmental Theory, Government Phonology, Optimality Theory, and others. In addition, it has become clear that important issues in phonology cannot be properly described without reference to the interface between phonetics and phonology on the one hand, and between phonology and morphosyntax on the other. Some phonological issues in Standard Korean are still under debate and will likely be of value in helping to elucidate universal phonological properties with regard to phonation contrast, vowel and consonant inventories, consonantal markedness, and the motivation for prosodic organization in the lexicon.
Article
Polysynthesis: A Diachronic and Typological Perspective
Michael Fortescue
Polysynthesis is informally understood as the packing of a large number of morphemes into single words, as in (1) from Bininj Gun-wok (Evans, in press):

(1) a-ban-yawoyʔ-wargaʔ-maɳe-gaɲ-giɲe-ŋ
    1SGSUBJ-3PLOBJ-again-wrong-BEN-meat-cook-PSTPF
    'I cooked the wrong meat for them again.'
Its status as a distinct typological category into which some of the world’s languages fall, on a par with isolating, agglutinating, or fusional languages, has been controversial from the start. Nevertheless, researchers working with these languages are seldom in doubt as to their status as distinct from these other morphological types. This has been complicated by the fact that the speakers of such languages are largely limited to hunter-gatherers—or were so in the not too distant past—so the temptation is to link the phenomenon directly to way of life. This proves to be oversimplified, although it is certainly true that languages qualifying as polysynthetic are almost everywhere spoken in peripheral regions and are on the decline in the modern world—few children are learning them today.
Perhaps the most pervasive of the traits that give these languages the impression of a “special” status is that of holophrasis, which can be defined as the (possible) expression of what in less synthetic languages would be whole sentences in single complex (usually verbal) words. It turns out, however, that there is much greater variety among polysynthetic languages than is generally thought: there are few other traits that they all share, although distinct subtypes can in fact be distinguished, notably the affixing as opposed to the incorporating type.
These languages have considerable importance for the investigation of the diachronic complexification of languages in general and of language acquisition by children, as well as for theories of language universals. The sociolinguistic factors behind their development have only recently begun to be studied in depth. All polysynthetic languages today are to some degree endangered (they are dying off at an alarming rate), and many have been poorly studied, if at all, which makes their investigation before it is too late a prime goal for linguistics.
Article
Blocking
Franz Rainer
Blocking can be defined as the non-occurrence of some linguistic form, whose existence could be expected on general grounds, due to the existence of a rival form. *Oxes, for example, is blocked by oxen, *stealer by thief. Although blocking is closely associated with morphology, in reality the competing “forms” can be not only morphemes or words but also syntactic units. In German, for example, the compound Rotwein ‘red wine’ blocks the phrasal unit *roter Wein (in the relevant sense), just as the phrasal unit rote Rübe ‘beetroot; lit. red beet’ blocks the compound *Rotrübe. In these examples, one crucial factor determining blocking is synonymy; speakers apparently have a deep-rooted presumption against synonyms. Whether homonymy can also lead to a similar avoidance strategy is still controversial. But even if homonymy blocking exists, it is certainly much less systematic than synonymy blocking.
In all the examples mentioned above, it is a word stored in the mental lexicon that blocks a rival formation. However, besides such cases of lexical blocking, one can observe blocking among productive patterns. Dutch has three suffixes for deriving agent nouns from verbal bases, -er, -der, and -aar. Of these three suffixes, the first one is the default choice, while -der and -aar are chosen in very specific phonological environments: as Geert Booij describes in The Morphology of Dutch (2002), “the suffix -aar occurs after stems ending in a coronal sonorant consonant preceded by schwa, and -der occurs after stems ending in /r/” (p. 122). In contrast to lexical blocking, the effect of this kind of pattern blocking does not depend on words stored in the mental lexicon and their token frequency, but on abstract features (in the case at hand, phonological features).
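To make this division of labor concrete, the toy Python sketch below illustrates the logic of pattern blocking under heavily simplifying assumptions: the function name, the example stems, and the use of orthographic <e> plus a final coronal sonorant as a stand-in for Booij’s phonological conditions are hypothetical conveniences for illustration, not a claim about Dutch grammar.

```python
# Toy sketch of Dutch agent-noun suffix selection (pattern blocking).
# Assumptions: stems are given orthographically, and orthographic <e> before a
# final coronal sonorant is a crude proxy for "schwa + coronal sonorant".

CORONAL_SONORANTS = {"n", "l", "r"}  # simplified set for this illustration

def agent_suffix(stem: str) -> str:
    """Attach an agent-noun suffix to a verbal stem.

    The restricted patterns (-der, -aar) are tried first; the default -er
    applies only 'elsewhere', i.e., it is blocked wherever a more specific
    pattern matches.
    """
    if stem.endswith("r"):
        return stem + "der"   # e.g., huur- 'rent' -> huurder
    if len(stem) >= 2 and stem[-1] in CORONAL_SONORANTS and stem[-2] == "e":
        return stem + "aar"   # e.g., wandel- 'walk' -> wandelaar
    return stem + "er"        # default, e.g., werk- 'work' -> werker

if __name__ == "__main__":
    for stem in ("werk", "huur", "wandel"):
        print(stem, "->", agent_suffix(stem))
```

The more restricted conditions are checked first, and the default applies only where none of them holds, which is the same scheduling idea that the Elsewhere Principle discussed next formalizes.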
Blocking was first recognized by the Indian grammarian Pāṇini in the 5th or 4th century BC, when he stated that of two competing rules, the more restricted one had precedence. In the 1960s, this insight was revived by generative grammarians under the name “Elsewhere Principle,” which is still used in several grammatical theories (Distributed Morphology and Paradigm Function Morphology, among others). Alternatively, other theories, which go back to the German linguist Hermann Paul, have tackled the phenomenon on the basis of the mental lexicon. The great advantage of this latter approach is that it can account, in a natural way, for the crucial role played by frequency. Frequency is also crucial in the most promising theory of how blocking can be learned, so-called statistical pre-emption.
Article
Language Contact in the Sahara
Lameen Souag
As might be expected from the difficulty of traversing it, the Sahara Desert has been a fairly effective barrier to direct contact between its two edges; trans-Saharan language contact is limited to the borrowing of non-core vocabulary, minimal from south to north and mostly mediated by education from north to south. Its own inhabitants, however, are necessarily accustomed to travelling desert spaces, and contact between languages within the Sahara has often accordingly had a much greater impact. Several peripheral Arabic varieties of the Sahara retain morphology as well as vocabulary from the languages spoken by their speakers’ ancestors, in particular Berber in the southwest and Beja in the southeast; the same is true of at least one Saharan Hausa variety. The Berber languages of the northern Sahara have in turn been deeply affected by centuries of bilingualism in Arabic, borrowing core vocabulary and some aspects of morphology and syntax. The Northern Songhay languages of the central Sahara have been even more profoundly affected by a history of multilingualism and language shift involving Tuareg, Songhay, Arabic, and other Berber languages, much of which remains to be unraveled. These languages have borrowed so extensively that they retain barely a few hundred core words of Songhay vocabulary; those loans have not only introduced new morphology but in some cases replaced old morphology entirely. In the southeast, the spread of Arabic westward from the Nile Valley has created a spectrum of varieties with varying degrees of local influence; the Saharan ones remain almost entirely undescribed. Much work remains to be done throughout the region, not only on identifying and analyzing contact effects but even simply on describing the languages its inhabitants speak.
Article
Frequency Effects in Grammar
Holger Diessel and Martin Hilpert
Until recently, theoretical linguists paid little attention to the frequency of linguistic elements in grammar and grammatical development. It is a standard assumption of (most) grammatical theories that the study of grammar (or competence) must be separated from the study of language use (or performance). However, this view of language has been called into question by various strands of research that have emphasized the importance of frequency for the analysis of linguistic structure. In this research, linguistic structure is often characterized as an emergent phenomenon shaped by general cognitive processes such as analogy, categorization, and automatization, which are crucially influenced by frequency of occurrence.
There are many different ways in which frequency affects the processing and development of linguistic structure. Historical linguists have shown that frequent strings of linguistic elements are prone to undergo phonetic reduction and coalescence, and that frequent expressions and constructions are more resistant to structure mapping and analogical leveling than infrequent ones. Cognitive linguists have argued that the organization of constituent structure and embedding is based on the language users’ experience with linguistic sequences, and that the productivity of grammatical schemas or rules is determined by the combined effect of frequency and similarity. Child language researchers have demonstrated that frequency of occurrence plays an important role in the segmentation of the speech stream and the acquisition of syntactic categories, and that the statistical properties of the ambient language are much more regular than commonly assumed. And finally, psycholinguists have shown that structural ambiguities in sentence processing can often be resolved by lexical and structural frequencies, and that speakers’ choices between alternative constructions in language production are related to their experience with particular linguistic forms and meanings. Taken together, this research suggests that our knowledge of grammar is grounded in experience.