Generics are sentences such as Birds fly, which express generalizations. They are prevalent in speech, and as far as is known, no human language lacks generics. Yet, it is very far from clear what they mean. After all, not all birds fly—penguins don’t! There are two general views about the meaning of generics in the literature, and each view encompasses many specific theories. According to the inductivist view, a generic states that a sufficient number of individuals satisfy a certain property—in the example above, it says that sufficiently many birds fly. This view faces the complicated problem of spelling out exactly how many is “sufficiently many” in a way that correctly captures the intuitive truth conditions of generics. An alternative, the rules and regulations view, despairs of this project and proposes instead that generics directly express rules in the world. Rules are taken to be abstract objects, which are not related to the properties of specific individuals. This view faces the difficult problem of explaining how people come to know of such rules when judging the truth or falsity of generics, and of accounting for the strong intuition that a sentence such as Birds fly talks about birds, not abstract objects. What seems to be beyond dispute is that generics, even if they do not express rules, are lawlike: they state non-accidental generalizations. Many scholars have taken this fact to indicate that generics are parametric on possible worlds: they refer to worlds other than the actual world. This, again, raises the problem of how people come to know about what happens in these other worlds. However, a rigorous application of standard tests for intensionality shows that generics are not, in fact, parametric on possible worlds, but only on time. This unusual property may explain much of the mystery surrounding generics. Another mysterious property of generics is that although there is no language without them, there is no linguistic construction that is devoted to the expression of genericity. Rather, generics can be expressed in a variety of ways, each of which can also express nongenerics. Yet, each manifestation of generics differs subtly (or sometimes not so subtly) in its meaning from the others. Even when these and other puzzles of genericity are solved, one mystery will remain: Why are generics, which are so easy to produce and understand in conversation, so difficult to analyze?
Holger Diessel and Martin Hilpert
Until recently, theoretical linguists paid little attention to the frequency of linguistic elements in grammar and grammatical development. It is a standard assumption of (most) grammatical theories that the study of grammar (or competence) must be separated from the study of language use (or performance). However, this view of language has been called into question by various strands of research that have emphasized the importance of frequency for the analysis of linguistic structure. In this research, linguistic structure is often characterized as an emergent phenomenon shaped by general cognitive processes such as analogy, categorization, and automatization, which are crucially influenced by frequency of occurrence. There are many different ways in which frequency affects the processing and development of linguistic structure. Historical linguists have shown that frequent strings of linguistic elements are prone to undergo phonetic reduction and coalescence, and that frequent expressions and constructions are more resistant to structure mapping and analogical leveling than infrequent ones. Cognitive linguists have argued that the organization of constituent structure and embedding is based on the language users’ experience with linguistic sequences, and that the productivity of grammatical schemas or rules is determined by the combined effect of frequency and similarity. Child language researchers have demonstrated that frequency of occurrence plays an important role in the segmentation of the speech stream and the acquisition of syntactic categories, and that the statistical properties of the ambient language are much more regular than commonly assumed. And finally, psycholinguists have shown that structural ambiguities in sentence processing can often be resolved by lexical and structural frequencies, and that speakers’ choices between alternative constructions in language production are related to their experience with particular linguistic forms and meanings. Taken together, this research suggests that our knowledge of grammar is grounded in experience.
Chiyuki Ito and Michael J. Kenstowicz
Typologically, pitch-accent languages stand between stress languages like Spanish and tone languages like Shona, and share properties of both. In a stress language, typically just one syllable per word is accented and bears the major stress (cf. Spanish sábana ‘sheet,’ sabána ‘plain,’ panamá ‘Panama’). In a tone language, the number of distinctions grows geometrically with the size of the word. So in Shona, which contrasts high versus low tone, trisyllabic words have eight (2³) possible pitch patterns. In a canonical pitch-accent language such as Japanese, just one syllable (or mora) per word is singled out as distinctive, as in Spanish. Each syllable in the word is assigned a high or low tone (as in Shona); however, this assignment is predictable based on the location of the accented syllable. The Korean dialects spoken in the southeast Kyengsang and northeast Hamkyeng regions retain the pitch-accent distinctions that developed by the period of Middle Korean (15th–16th centuries). For example, in Hamkyeng a three-syllable word can have one of four possible pitch patterns, which are assigned by rules that refer to the accented syllable. The accented syllable receives a high tone, and the syllables that follow it receive low tones. The high tone of the accented syllable then spreads leftward up to, but not including, the initial syllable, which remains low (unless it is itself the accented syllable). Thus, /MUcike/ ‘rainbow’ is realized as high-low-low, /aCImi/ ‘aunt’ is realized as low-high-low, and /menaRI/ ‘parsley’ is realized as low-high-high. An atonic word such as /cintallɛ/ ‘azalea’ has the same low-high-high pitch pattern as ‘parsley’ when realized alone. But the two types are distinguished when combined with a particle such as /MAN/ ‘only’ that bears an underlying accent: /menaRI+MAN/ ‘only parsley’ is realized as low-high-high-low, while /cintallɛ+MAN/ ‘only azalea’ is realized as low-high-high-high. This difference can be explained by saying that the underlying accent on the particle is deleted if the stem bears an accent. The result is that only one syllable per word may bear an accent (similar to Spanish). On the other hand, since the accent is realized with pitch distinctions, tonal assimilation rules are prevalent in pitch-accent languages. This article begins with a description of the Middle Korean pitch-accent system and its evolution into the modern dialects, with a focus on Kyengsang. Alternative synchronic analyses of the accentual alternations that arise when a stem is combined with inflectional particles are then considered. The discussion proceeds to the phonetic realization of the contrasting accents, their realizations in compounds and phrases, and the adaptation of loanwords. The final sections treat the lexical restructuring and variable distribution of the pitch accents and their emergence from predictable word-final accent in an earlier stage of Proto-Korean.
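The Hamkyeng tone-assignment rules just described are simple enough to state procedurally. The following Python sketch, whose function and variable names are illustrative rather than taken from the article, derives the surface patterns cited above from the position of the accented syllable; it treats an atonic word in isolation as if it were finally accented, which is one way to model the /cintallɛ/ example and should be read as an assumption of the sketch, not as the article's analysis.

# Minimal sketch of the Hamkyeng tone-assignment rules described above.
# Names are illustrative; the atonic-word treatment is an assumption of this sketch.

def assign_tones(num_syllables, accent=None):
    """Return a list of 'H'/'L' tones for a word of num_syllables syllables.

    accent is the 1-based index of the accented syllable, or None for an
    atonic word, which here surfaces like a finally accented word in isolation.
    """
    if accent is None:
        accent = num_syllables  # atonic word in isolation patterns like final accent

    # Step 1: the accented syllable is H; the syllables that follow it are L.
    tones = ['L'] * num_syllables
    tones[accent - 1] = 'H'

    # Step 2: the H spreads leftward up to, but not including,
    # the initial syllable, which stays L (unless it is itself accented).
    for i in range(1, accent - 1):
        tones[i] = 'H'

    return tones

# Worked examples from the text:
print(assign_tones(3, accent=1))  # /MUcike/ 'rainbow'  -> ['H', 'L', 'L']
print(assign_tones(3, accent=2))  # /aCImi/  'aunt'     -> ['L', 'H', 'L']
print(assign_tones(3, accent=3))  # /menaRI/ 'parsley'  -> ['L', 'H', 'H']
print(assign_tones(3))            # /cintallɛ/ 'azalea' -> ['L', 'H', 'H']

# With the accented particle /MAN/ 'only': the particle's accent is deleted
# after an accented stem, so only one accent survives per word.
print(assign_tones(4, accent=3))  # /menaRI+MAN/   -> ['L', 'H', 'H', 'L']
print(assign_tones(4, accent=4))  # /cintallɛ+MAN/ -> ['L', 'H', 'H', 'H']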
Acceptability judgments are reports of a speaker’s or signer’s subjective sense of the well-formedness, nativeness, or naturalness of (novel) linguistic forms. Their value comes in providing data about the nature of the human capacity to generalize beyond linguistic forms previously encountered in language comprehension. For this reason, acceptability judgments are often also called grammaticality judgments (particularly in syntax), although unlike the theory-dependent notion of grammaticality, acceptability is accessible to consciousness. While acceptability judgments have been used to test grammatical claims since ancient times, they became particularly prominent with the birth of generative syntax. Today they are also widely used in other linguistic schools (e.g., cognitive linguistics) and other linguistic domains (pragmatics, semantics, morphology, and phonology), and have been applied in a typologically diverse range of languages. As psychological responses to linguistic stimuli, acceptability judgments are experimental data. Their value thus depends on the validity of the experimental procedures, which, in their traditional version (where theoreticians elicit judgments from themselves or a few colleagues), have been criticized as overly informal and biased. Traditional responses to such criticisms have been supplemented in recent years by laboratory experiments that use formal psycholinguistic methods to collect and quantify judgments from nonlinguists under controlled conditions. Such formal experiments have played an increasingly influential role in theoretical linguistics, being used to justify subtle judgment claims or new grammatical models that incorporate gradience or lexical influences. They have also been used to probe the cognitive processes giving rise to the sense of acceptability itself, the central finding being that acceptability reflects processing ease. Exploring what this finding means will require not only further empirical work on the acceptability judgment process, but also theoretical work on the nature of grammar.
Speakers of most languages comprehend and produce a very large number of morphologically complex words. But how? There is a tension between two facts. First, speakers can comprehend and produce novel words, which they have never experienced and therefore could not have stored in memory. For example, English speakers readily generate the plural form of wug. These novel words often look like they are composed of recognizable parts, such as the plural marker -s. Second, speakers also comprehend and produce many words that cannot be straightforwardly decomposed into parts, such as bought or brunch. Morphology is the paradigm example of a quasi-regular domain, full of only partially productive, exception-ridden patterns, many of which nonetheless appear to be learned and used by speakers and listeners. Quasi-regularity has made morphology a fruitful testing ground for alternative views of how the mind works. Every major approach to the nature of the mind has attempted to tackle morphological processing. These approaches range from symbolic rule-based approaches to connectionist networks of simple neuron-like processing units to clouds of richly specified holistic exemplars. They vary in their assumptions about the nature of mental representations, particularly those that make up the long-term memory of language. They also vary in the computations that the mind is thought to perform, including the computations performed by a speaker attempting to produce or comprehend a word. In challenging all major approaches to cognition with its intricate patterns, morphology continues to provide a valuable window onto the nature of the mind.
Throughout the 20th century, structuralist and generative linguists argued that the study of the language system (langue, competence) must be separated from the study of language use (parole, performance), but this view of language has been called into question by usage-based linguists, who argue that the structure and organization of a speaker’s linguistic knowledge are the product of language use or performance. On this account, language is seen as a dynamic system of fluid categories and flexible constraints that are constantly restructured and reorganized under the pressure of domain-general cognitive processes that are involved not only in the use of language but also in other cognitive phenomena such as vision and (joint) attention. The general goal of usage-based linguistics is to develop a framework for the analysis of the emergence of linguistic structure and meaning. In order to understand the dynamics of the language system, usage-based linguists study how languages evolve, both in history and in language acquisition. One aspect that plays an important role in this approach is frequency of occurrence. As frequency strengthens the representation of linguistic elements in memory, it facilitates the activation and processing of words, categories, and constructions, which in turn can have long-lasting effects on the development and organization of the linguistic system. A second aspect that has been very prominent in the usage-based study of grammar concerns the relationship between lexical and structural knowledge. Since abstract representations of linguistic structure are derived from language users’ experience with concrete linguistic tokens, grammatical patterns are generally associated with particular lexical expressions.
When the phonological form of a morpheme—a unit of meaning that cannot be decomposed further into smaller units of meaning—involves a particular melodic pattern as part of its sound shape, this morpheme is specified for tone. In view of this definition, phrase- and utterance-level melodies—also known as intonation—are not to be interpreted as instances of tone. That is, whereas the question “Tomorrow?” may be uttered with a rising melody, this melody is not tone, because it is not a part of the lexical specification of the morpheme tomorrow. A language in which some morphemes are specified for particular melodies is called a tone language. This does not mean that in a tone language every morpheme, content word, or syllable is specified for tone. Tonal specification can be highly restricted within the lexicon. Examples of such sparsely specified tone languages include Swedish, Japanese, and Ekagi (a language spoken in the Indonesian part of New Guinea); in these languages, only some syllables in some words are specified for tone. There are also tone languages where each and every syllable of each and every word has a tonal specification. Vietnamese and Shilluk (a language spoken in South Sudan) illustrate this configuration. Tone languages also vary greatly in terms of the inventory of phonological tone forms. The smallest possible inventory contrasts one specification with the absence of specification. But there are also tone languages with eight or more distinctive tone categories. The physical (acoustic) realization of the tone categories is primarily fundamental frequency (F0), which is perceived as pitch. However, other phonetic correlates are often also involved, in particular voice quality. Tone plays a prominent role in the study of phonology because of its structural complexity. That is, in many languages, the way a tone surfaces is conditioned by factors such as the segmental composition of the morpheme, the tonal specifications of surrounding constituents, morphosyntax, and intonation. On top of this, tone is diachronically unstable. This means that, when a language has tone, we can expect to find considerable variation between dialects, more so than in other parts of the sound system.
Gerrit Jan Dimmendaal
Nilo-Saharan, a phylum spread mainly across an area south of the Afro-Asiatic and north of the Niger-Congo phylum, was established as a genetic grouping by Greenberg. In his earlier, continent-wide classification of African languages in articles published between 1949 and 1954, Greenberg had proposed a Macro-Sudanic family (renamed Chari-Nile in subsequent studies), consisting of a Central Sudanic and an Eastern Sudanic branch plus two isolated members, Berta and Kunama. This family formed the core of the Nilo-Saharan phylum as postulated by Greenberg in his The Languages of Africa, where a number of groups were added which had been treated as isolated units in his earlier classificatory work: Songhay, Eastern Saharan (now called Saharan), Maban and Mimi, Nyangian (now called Kuliak or Rub), Temainian (Temeinian), Coman (Koman), and Gumuz. Presenting an “encyclopaedic survey” of morphological structures for the more than 140 languages belonging to this phylum is impossible in such a brief study, especially given the tremendous genetic distance between some of the major subgroups. Instead, typological variation in the morphological structure of these genetically related languages will be central. In concrete terms, this involves synchronic and diachronic observations on their formal properties (section 2), followed by an introduction to the nature of derivation, inflection, and compounding in Nilo-Saharan (section 3). This traditional compartmentalization has its limits, because it misses the interaction with lexical structures and morphosyntactic properties in the phylum’s extant members, as argued in section 4. As pointed out in section 5, language contact must also have played an important role in the geographical spread of several of these typological properties.
Claudia Marzi and Vito Pirrelli
Over the past decades, psycholinguistic aspects of word processing have made a considerable impact on views of language theory and language architecture. In the quest for the principles governing the ways human speakers perceive, store, access, and produce words, inflection has provided a challenging realm of scientific inquiry, and a battlefield for radically opposing views. It is somewhat ironic that some of the most influential cognitive models of inflection have long been based on evidence from an inflectionally impoverished language like English, where the notions of inflectional regularity, (de)composability, predictability, phonological complexity, and default productivity appear to be mutually implied. An analysis of more “complex” inflection systems, such as those of the Romance languages, shows that this mutual implication is not a universal property of inflection but a contingency of poorly contrastive, nearly isolating inflection systems. Far from presenting minor faults in a solid theoretical edifice, the Romance evidence appears to call into question the division of labor between rules and exceptions, the dichotomy between on-line processing and long-term memory, and the distinction between morphological processes and lexical representations. This evidence is more compatible with a dynamic, learning-based view of inflection, whereby morphological structure is an emergent property of the ways inflected forms are processed and stored, grounded in universal principles of lexical self-organization and their neuro-functional correlates.