Article
Blocking
Franz Rainer
Blocking can be defined as the non-occurrence of some linguistic form, whose existence could be expected on general grounds, due to the existence of a rival form. *Oxes, for example, is blocked by oxen, *stealer by thief. Although blocking is closely associated with morphology, the competing “forms” need not be morphemes or words; they can also be syntactic units. In German, for example, the compound Rotwein ‘red wine’ blocks the phrasal unit *roter Wein (in the relevant sense), just as the phrasal unit rote Rübe ‘beetroot; lit. red beet’ blocks the compound *Rotrübe. In these examples, one crucial factor determining blocking is synonymy; speakers apparently have a deep-rooted presumption against synonyms. Whether homonymy can also lead to a similar avoidance strategy is still controversial. But even if homonymy blocking exists, it is certainly much less systematic than synonymy blocking.
In all the examples mentioned above, it is a word stored in the mental lexicon that blocks a rival formation. Besides such cases of lexical blocking, however, one can also observe blocking among productive patterns. Dutch has three suffixes for deriving agent nouns from verbal bases: -er, -der, and -aar. Of these, -er is the default choice, while -der and -aar are chosen in very specific phonological environments: as Geert Booij describes in The Morphology of Dutch (2002), “the suffix -aar occurs after stems ending in a coronal sonorant consonant preceded by schwa, and -der occurs after stems ending in /r/” (p. 122). Unlike lexical blocking, the effect of this kind of pattern blocking does not depend on words stored in the mental lexicon and their token frequency but on abstract features (in the case at hand, phonological features).
Blocking was first recognized by the Indian grammarian Pāṇini in the 5th or 4th century BC, when he stated that of two competing rules, the more restricted one had precedence. In the 1960s, this insight was revived by generative grammarians under the name “Elsewhere Principle,” which is still used in several grammatical theories (Distributed Morphology and Paradigm Function Morphology, among others). Other theories, which go back to the German linguist Hermann Paul, have instead tackled the phenomenon on the basis of the mental lexicon. The great advantage of this latter approach is that it can account in a natural way for the crucial role played by frequency. Frequency is also crucial in the most promising theory of how blocking can be learned, so-called statistical pre-emption.
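As an illustration of how such precedence of the more restricted pattern over the default can be stated, the following sketch (constructed for this overview, not taken from the article) encodes the Dutch agent-noun facts quoted from Booij as an Elsewhere-style choice among -aar, -der, and default -er. The segment class, the ASCII stand-in for schwa, and the example stems are simplifying assumptions, and orthographic details such as consonant doubling and vowel spelling are ignored.

# Elsewhere-style suffix selection for Dutch agent nouns (illustrative sketch).
# The more restricted conditions are checked first; the default -er applies
# 'elsewhere', mirroring the precedence of the more restricted rule.

CORONAL_SONORANTS = {"n", "l", "r"}  # simplified class assumed for this sketch
SCHWA = "@"                          # ASCII stand-in for /ə/

def agent_suffix(stem: str) -> str:
    """Pick the agentive suffix for a (broadly) phonemic stem string."""
    if len(stem) >= 2 and stem[-2] == SCHWA and stem[-1] in CORONAL_SONORANTS:
        return stem + "aar"          # -aar after schwa + coronal sonorant
    if stem.endswith("r"):
        return stem + "der"          # -der after other stems ending in /r/
    return stem + "er"               # default ('elsewhere') case: -er

# Hypothetical phonemic stems, used only to exercise the three cases:
print(agent_suffix("wand@l"))  # schwa + coronal sonorant -> 'wand@laar'
print(agent_suffix("hyr"))     # stem ending in /r/       -> 'hyrder'
print(agent_suffix("bak"))     # elsewhere                -> 'baker'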
Article
Computational Phonology
Jane Chandlee and Jeffrey Heinz
Computational phonology studies the nature of the computations necessary and sufficient for characterizing phonological knowledge. As a field it is informed by the theories of computation and phonology.
The computational nature of phonological knowledge is important because at a fundamental level it is about the psychological nature of memory as it pertains to phonological knowledge. Different types of phonological knowledge can be characterized as computational problems, and the solutions to these problems reveal their computational nature. In contrast to syntactic knowledge, there is clear evidence that phonological knowledge is computationally bounded to the so-called regular classes of sets and relations. These classes have multiple mathematical characterizations in terms of logic, automata, and algebra with significant implications for the nature of memory. In fact, there is evidence that phonological knowledge is bounded by particular subregular classes, with more restrictive logical, automata-theoretic, and algebraic characterizations, and thus by weaker models of memory.
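As a rough illustration of the kind of restricted computation at issue, the following sketch (constructed for this overview, not part of the article) implements a strictly 2-local phonotactic grammar, a simple instance of the subregular classes referred to above; the banned sequences are invented for the example.

# A strictly 2-local grammar: well-formedness is decided by scanning adjacent
# pairs of symbols, so the scanner only ever remembers the previous symbol,
# a very weak model of memory.

BOUNDARY = "#"  # word-boundary symbol

def strictly_2_local(banned_bigrams):
    """Return an acceptor for the strictly 2-local grammar given by banned bigrams."""
    def accepts(word):
        padded = BOUNDARY + word + BOUNDARY
        return all(padded[i:i + 2] not in banned_bigrams
                   for i in range(len(padded) - 1))
    return accepts

# Invented constraint for illustration: ban word-final 'b' and the cluster 'nb'.
checker = strictly_2_local({"b" + BOUNDARY, "nb"})
print(checker("taba"))   # True  -- no banned adjacent pair
print(checker("tab"))    # False -- ends in the banned pair 'b#'
print(checker("tanba"))  # False -- contains the banned pair 'nb'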
Article
Connectionism in Linguistic Theory
Xiaowei Zhao
Connectionism is an important theoretical framework for the study of human cognition and behavior. Also known as Parallel Distributed Processing (PDP) or Artificial Neural Networks (ANN), connectionism advocates that learning, representation, and processing of information in the mind are parallel, distributed, and interactive in nature. It argues for the emergence of human cognition as the outcome of large networks of interactive processing units operating simultaneously. Inspired by findings from neuroscience and artificial intelligence, connectionism is a powerful computational tool, and it has had a profound impact on many areas of research, including linguistics. Since the beginning of connectionism, many connectionist models have been developed to account for a wide range of important linguistic phenomena observed in monolingual research, such as speech perception, speech production, semantic representation, and early lexical development in children. Recently, the application of connectionism to bilingual research has also gathered momentum. Connectionist models are often precise in the specification of modeling parameters and flexible in the manipulation of relevant variables, so they offer significant advantages for testing the mechanisms underlying language processes.
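The basic architectural idea can be pictured with a minimal sketch (a generic illustration, not a model from the connectionist literature): a distributed input pattern activates every unit of the next layer in parallel through weighted connections. The layer sizes and random weights are arbitrary, and the interactive and recurrent connections found in many PDP models are omitted.

# Minimal feedforward pass: each unit computes its activation from all units
# in the previous layer at once -- the 'parallel distributed' part of PDP.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Arbitrary sizes chosen for illustration: 6 input, 4 hidden, 3 output units.
W_hidden = rng.normal(scale=0.5, size=(4, 6))   # input -> hidden weights
W_output = rng.normal(scale=0.5, size=(3, 4))   # hidden -> output weights

def forward(input_pattern):
    """Propagate a distributed input pattern through the two weight layers."""
    hidden = sigmoid(W_hidden @ input_pattern)
    output = sigmoid(W_output @ hidden)
    return output

print(forward(np.array([1.0, 0.0, 1.0, 0.0, 0.0, 1.0])))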
Article
Hmong-Mien Languages
David R. Mortensen
Hmong-Mien (also known as Miao-Yao) is a bipartite family of minority languages spoken primarily in China and mainland Southeast Asia. The two branches, called Hmongic and Mienic by most Western linguists and Miao and Yao by Chinese linguists, are both compact groups (phylogenetically if not geographically). Although they are uncontroversially distinct from one another, they bear a strong mutual affinity. But while their internal relationships are reasonably well established, there is no unanimity regarding their wider genetic affiliations, with many Chinese scholars insisting on Hmong-Mien membership in the Sino-Tibetan superfamily, some Western scholars suggesting a relationship to Austronesian and/or Tai-Kradai, and still others suggesting a relationship to Mon-Khmer. A plurality view appears to be that Hmong-Mien bears no special relationship to any surviving language family.
Hmong-Mien languages are typical—in many respects—of the non-Sino-Tibetan languages of Southern China and mainland Southeast Asia. However, they possess a number of properties that make them stand out. Many neighboring languages are tonal, but Hmong-Mien languages are, on average, more so (in terms of the number of tones). While some other languages in the area have small-to-medium consonant inventories, Hmong-Mien languages (and especially Hmongic languages) often have very large consonant inventories with rare classes of sounds like uvulars and voiceless sonorants. Furthermore, while many of their neighbors are morphologically isolating, few language groups display as little affixation as Hmong-Mien languages. They are largely head-initial, but they deviate from this generalization in their genitive-noun constructions and their relative clauses (which vary in position and structure, sometimes even within the same language).
Article
Innateness of Language
Yarden Kedar
A fundamental question in epistemological philosophy is whether reason may be based on a priori knowledge—that is, knowledge that precedes and is independent of experience. In modern science, the concept of innateness has been associated with particular behaviors and types of knowledge that have supposedly been present in the organism since birth (in fact, since fertilization)—prior to any sensory experience with the environment.
This line of investigation has traditionally been linked to two general types of qualities: the first consists of instinctive and inflexible reflexes, traits, and behaviors, which are apparent in survival, mating, and rearing activities. The other relates to language and cognition, with certain concepts, ideas, propositions, and particular ways of mental computation suggested to be part of one’s biological make-up. While both these types of innatism have a long history (e.g., debated by Plato and Descartes), some bias appears to exist in favor of claims for inherent behavioral traits, which are typically accepted when satisfactory empirical evidence is provided. One famous example is Lorenz’s demonstration of imprinting, a natural phenomenon that obeys a predetermined mechanism and schedule (incubator-hatched goslings imprinted on Lorenz’s boots, the first moving object they encountered). Likewise, there seems to be little controversy in regard to predetermined ways of organizing sensory information, as is the case with the detection and classification of shapes and colors by the mind.
In contrast, the idea that certain types of abstract knowledge may be part of an organism’s biological endowment (i.e., not learned) is typically met with a greater sense of skepticism. The most influential and controversial claim for such innate knowledge in modern science is Chomsky’s nativist theory of Universal Grammar in language, which aims to define the extent to which human languages can vary, together with his famous Argument from the Poverty of the Stimulus. The main Chomskyan hypothesis is that all human beings share a preprogrammed linguistic infrastructure consisting of a finite set of general principles, which can generate (through combination or transformation) an infinite number of (only) grammatical sentences. Thus, the innate grammatical system constrains and structures the acquisition and use of all natural languages.
Article
Japanese Psycholinguistics
Mineharu Nakayama
The Japanese psycholinguistics research field is moving rapidly in many different directions, as it encompasses various subfields of linguistics (e.g., phonetics/phonology, syntax, semantics, pragmatics, discourse studies). Naturally, diverse studies have reported intriguing findings that shed light on the human language mechanism. This article presents a brief overview of some of the notable early 21st-century studies, mainly from the language acquisition and processing perspectives. The topics are divided into sections on the sound system, the script forms, reading and writing, morpho-syntactic studies, word and sentential meanings, and pragmatics and discourse studies. Studies on special populations are also mentioned.
Studies on the Japanese sound system have advanced our understanding of L1 and L2 (first and second language) acquisition and processing. For instance, more evidence has been provided that infants form adult-like phonological grammar by 14 months in L1, and a dissociation of prosody from comprehension has been reported in L2. Various cognitive factors, as well as the L1, influence the L2 acquisition process. Because Japanese language users employ three script forms (hiragana, katakana, and kanji) in a single sentence, orthographic processing research reveals multiple pathways for processing information as well as the influence of memory. Adult script decoding and lexical processing have been well studied, and research data from special populations further help us understand the vision-to-language mapping mechanism. Morpho-syntactic and semantic studies include a long debate between the nativist (generative) and statistical learning approaches to L1 acquisition. In particular, studies of inflectional morphology and of quantificational scope interaction in L1 acquisition bring out the pros and cons of each approach taken on its own. Investigating processing mechanisms means studying cognitive/perceptual devices. Relative clause processing has been much discussed in Japanese because Japanese has a different word order (SOV) from English (SVO), allows unpronounced pronouns and pre-verbal word permutations, and has no relative clause marking at the verbal ending (i.e., it is morphologically identical to the matrix ending). Behavioral and neurolinguistic data increasingly support incremental processing, as in SVO languages, and an expectancy-driven processor in the L1 brain. L2 processing, however, requires more study to uncover its mechanism, as the literature is scarce on both L2 English by Japanese speakers and L2 Japanese by non-Japanese speakers. Pragmatic and discourse processing is also an area that needs to be explored further. Despite the typological differences between English and Japanese, the studies cited here indicate that our acquisition and processing devices seem to adjust locally while maintaining the universal mechanism.
Article
Locality in Syntax
Adriana Belletti
Phenomena involving the displacement of syntactic units are widespread in human languages. The term displacement refers here to a dependency relation whereby a given syntactic constituent is interpreted simultaneously in two different positions. Only one position is pronounced, in general the hierarchically higher one in the syntactic structure. Consider a wh-question like (1) in English:
(1) Whom did you give the book to <whom>
The phrase containing the interrogative wh-word is located at the beginning of the clause, and this guarantees that the clause is interpreted as a question about this phrase; at the same time, whom is interpreted as part of the argument structure of the verb give (the copy, in <> brackets). In current terms, inspired by minimalist developments in generative syntax, the phrase whom is first merged as (one of) the complement(s) of give (External Merge) and then re-merged (Internal Merge, i.e., movement) in the appropriate position in the left periphery of the clause. This peripheral area of the clause hosts operator-type constituents, among them interrogative ones (yielding the relevant interpretation for sentence 1: for which x, you gave the book to x). Scope-discourse phenomena—such as the raising of a question as in (1), or the focalization of one constituent as in TO JOHN I gave the book (not to Mary)—have the effect that an argument of the verb is fronted in the left periphery of the clause rather than filling its clause-internal complement position, whence the term displacement. Displacement can be to a position relatively close to that of first merge (the copy), or else to a position farther away. In the latter case, the relevant dependency becomes more long-distance than in (1), as in (2a) and even more so in (2b):
(2)
a. Whom did Mary expect [that you would give the book to <whom>]
b. Whom do you think [that Mary expected [that you would give the book to <whom>]]
Fifty years or so of investigation of locality in formal generative syntax have shown that, despite its potentially very distant realization, syntactic displacement is in fact a local process. The audible position in which a moved constituent is pronounced and the position of its copy inside the clause can be far from each other. However, the long-distance dependency is split into steps through iterated applications of short movements, so that any dependency holding between two occurrences of the same constituent is in fact very local. Furthermore, there are syntactic domains that resist movement out of them, traditionally referred to as islands. Locality is a core concept of syntactic computations. Syntactic locality requires that syntactic computations apply within small domains (cyclic domains), possibly in the mentioned iterated way (successive cyclicity), currently rethought in terms of Phase theory. Furthermore, in the Relativized Minimality tradition, syntactic locality requires that, given X . . . Z . . . Y, the dependency between the relevant constituent in its target position X and its first merge position Y should not be interrupted by any constituent Z that is similar to X in relevant formal features and thus intervenes, blocking the relation between X and Y. Intervention locality has also been shown to allow for an explicit characterization of aspects of children’s linguistic development in their capacity to compute complex object dependencies (also relevant in different impaired populations).
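The intervention configuration can be made concrete with a small sketch (constructed for this overview, with invented feature labels and data structures): a dependency between X and Y is treated as blocked whenever some intervening Z matches X in the relevant formal features.

# Relativized-Minimality-style intervention check, schematically: given
# X ... Z ... Y, the X-Y dependency is blocked if an intervener shares X's
# relevant formal features.

def blocked(x_features, interveners, relevant=frozenset({"wh"})):
    """Return True if any intervener matches X in the relevant features."""
    x_relevant = set(x_features) & set(relevant)
    return any(x_relevant and x_relevant <= set(z) for z in interveners)

# In (2b), the intervening complementizer 'that' bears no wh-feature,
# so the wh-dependency is not interrupted:
print(blocked({"wh", "animate"}, [{"declarative"}]))  # False

# A wh-island-style configuration: an intervening wh-element matches the
# moved wh-phrase and blocks the relation between X and Y:
print(blocked({"wh", "animate"}, [{"wh"}]))           # True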
Article
Paradigms in Morphology
Petar Milin and James P. Blevins
Studies of the structure and function of paradigms are as old as the Western grammatical tradition. The central role accorded to paradigms in traditional approaches largely reflects the fact that paradigms exhibit systematic patterns of interdependence that facilitate processes of analogical generalization. The recent resurgence of interest in word-based models of morphological processing and morphological structure more generally has provoked a renewed interest in paradigmatic dimensions of linguistic structure. Current methods for operationalizing paradigmatic relations and determining the behavioral correlates of these relations extend paradigmatic models beyond their traditional boundaries. The integrated perspective that emerges from this work is one in which variation at the level of individual words is not meaningful in isolation, but rather guides the association of words to paradigmatic contexts that play a role in their interpretation.
Article
Penutian Languages
Anthony P. Grant
The Penutian language family, Penutian phylum, or, better still, Penutian hypothesis is one of the largest genealogical linguistic groupings to have been proposed for western North America. It involves 16 families or isolates. Only a few of these families are demonstrably relatable to one another according to current knowledge and diachronic techniques. Observers sometimes split Penutian into groups of languages assumed to be interrelated, without assuming that these groups are themselves related to one another.
This article focuses on the Canadian and US languages in “Sapir’s Penutian,” the most commonly accepted version; the most southerly family within Penutian is thus held to be Yokutsan of California’s Sierra Nevada. It discusses the subclassification of the so-called Penutian languages into families and smaller units; aspects of their phonology, morphosyntax, and contact histories; and issues in their revitalization and the potential reconstruction of Proto-Penutian.
Article
Polysemy
Agustín Vicente and Ingrid L. Falkum
Polysemy is characterized as the phenomenon whereby a single word form is associated with two or more related senses. It is distinguished from monosemy, where one word form is associated with a single meaning, and homonymy, where a single word form is associated with two or more unrelated meanings. Although the distinctions between polysemy, monosemy, and homonymy may seem clear at an intuitive level, they have proven difficult to draw in practice.
Polysemy proliferates in natural language: Virtually every word is polysemous to some extent. Still, the phenomenon has been largely ignored in the mainstream linguistics literature and in related disciplines such as philosophy of language. However, polysemy is a topic of relevance to linguistic and philosophical debates regarding lexical meaning representation, compositional semantics, and the semantics–pragmatics divide.
Early accounts treated polysemy in terms of sense enumeration: each sense of a polysemous expression was represented individually in the lexicon, such that polysemy and homonymy were treated on a par. This approach has been strongly criticized on both theoretical and empirical grounds. Since at least the 1990s, most researchers have converged on the hypothesis that the senses of at least many polysemous expressions derive from a single meaning representation, though the status of this representation is a matter of lively debate: Are the lexical representations of polysemous expressions informationally poor and underspecified with respect to their different senses? Or do they have to be informationally rich in order to store and be able to generate all these polysemous senses?
Alternatively, senses might be computed from a literal, primary meaning via semantic or pragmatic mechanisms such as coercion, modulation, or ad hoc concept construction (including metaphorical and metonymic extension), mechanisms that apparently also play a role in explaining how polysemy arises and how it is implicated in lexical semantic change.
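The contrast between sense enumeration and a single generative representation can be pictured schematically as follows (a toy sketch constructed for this overview; the words, sense labels, and derivation rule are invented and stand in for far richer proposals).

# (a) Sense enumeration: every sense is listed separately, so the polysemous
#     'newspaper' is represented just like the homonymous 'bank'.
ENUMERATED_LEXICON = {
    "bank": ["financial institution", "riverside"],            # homonymy
    "newspaper": ["institution", "printed object", "content"], # polysemy
}

# (b) A single core entry plus a generative mechanism: senses are computed
#     from one meaning by a rule that stands in for coercion/modulation.
CORE_LEXICON = {"newspaper": "newspaper"}

def derive_senses(core_meaning):
    """Toy stand-in for semantic/pragmatic sense derivation."""
    shifts = ["as-institution", "as-physical-object", "as-information-content"]
    return [f"{core_meaning}-{shift}" for shift in shifts]

print(ENUMERATED_LEXICON["newspaper"])
print(derive_senses(CORE_LEXICON["newspaper"]))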
Article
Psycholinguistic Methods and Tasks in Morphology
Daniel Schmidtke and Victor Kuperman
Lexical representations in an individual mind are not given to direct scrutiny. Thus, in their theorizing of mental representations, researchers must rely on observable and measurable outcomes of language processing, that is, perception, production, storage, access, and retrieval of lexical information. Morphological research pursues these questions utilizing the full arsenal of analytical tools and experimental techniques that are at the disposal of psycholinguistics. This article outlines the most popular approaches, and aims to provide, for each technique, a brief overview of its procedure in experimental practice. Additionally, the article describes the link between the processing effect(s) that the tool can elicit and the representational phenomena that it may shed light on. The article discusses methods of morphological research in the two major human linguistic faculties—production and comprehension—and provides a separate treatment of spoken, written, and sign language.
Article
Usage-Based Linguistics
Holger Diessel
Throughout the 20th century, structuralist and generative linguists argued that the study of the language system (langue, competence) must be separated from the study of language use (parole, performance). This view of language has been called into question by usage-based linguists, who argue that the structure and organization of a speaker’s linguistic knowledge are the product of language use or performance. On this account, language is seen as a dynamic system of fluid categories and flexible constraints that are constantly restructured and reorganized under the pressure of domain-general cognitive processes that are involved not only in the use of language but also in other cognitive phenomena such as vision and (joint) attention. The general goal of usage-based linguistics is to develop a framework for the analysis of the emergence of linguistic structure and meaning.
In order to understand the dynamics of the language system, usage-based linguists study how languages evolve, both in history and in language acquisition. One aspect that plays an important role in this approach is frequency of occurrence. As frequency strengthens the representation of linguistic elements in memory, it facilitates the activation and processing of words, categories, and constructions, which in turn can have long-lasting effects on the development and organization of the linguistic system. A second aspect that has been very prominent in the usage-based study of grammar concerns the relationship between lexical and structural knowledge. Since abstract representations of linguistic structure are derived from language users’ experience with concrete linguistic tokens, grammatical patterns are generally associated with particular lexical expressions.