Yury Lander and Johanna Nichols
Head/dependent marking (or locus of marking) is a typological parameter based on whether syntactic relations, or dependencies, are marked on the head of the relation, on the non-head, on both, on neither, or elsewhere in the constituent. The approach has figured in descriptive and theoretical work of various kinds for some 30 years and has proven quite useful as far as it goes, but in that time advances in the analysis of phrase structure and descriptions of previously unnoticed patterns have revealed gaps and inconsistencies in the original formulation. These can be removed by allowing markers to be assigned not to words but to entire phrases, a move that also allows detached and neutral marking to be more comfortably accommodated in locus theory.
In the Principles and Parameters framework of Generative Grammar, the various positions occupied by the verb have been identified as functional heads hosting inflectional material (affixes or features), which may or may not attract the verb. This gave rise to a hypothesis, the Rich Agreement Hypothesis (RAH), according to which the verb has to move to the relevant functional head when the corresponding inflectional paradigm counts as “rich.”
The RAH is motivated by synchronic and diachronic variation among closely related languages (mostly of the Germanic family) suggesting a correspondence between verb movement and rich agreement. Research into this correspondence was initially marred by the absence of a fundamental definition of “richness” and by the observation of counterexamples, both synchronically (dialects not conforming to the pattern) and diachronically (a significant time gap between the erosion of verbal inflection and the disappearance of verb movement). Also, the research was based on a limited group of related languages and dialects. This led to the conclusion that there was at best a weak correlation between verb movement and richness of morphology.
Recently, the RAH has been revived in its strong form, proposing a fundamental definition of richness and testing the RAH against a typologically more diverse sample of the languages of the world. While this represents significant progress, several problems remain, with certain (current and past) varieties of North Germanic not conforming to the expected pattern, and the typological survey yielding mixed or unclear results. A further problem is that other Germanic languages (Dutch, German, Frisian) vary as to the richness of their morphology, but show identical verb placement patterns.
This state of affairs, especially in light of recent minimalist proposals relocating both inflectional morphology and verb movement outside syntax proper (to a component in the model of grammar interfacing between narrow syntax and phonetic realization), suggests that we need a more fundamental understanding of the relation between morphology and syntax before any relation between head movement and morphological strength can be reliably ascertained.
While in phonology Middle Indo-Aryan (MIA) dialects preserved the phonological system of Old Indo-Aryan (OIA) virtually intact, their morphosyntax underwent far-reaching changes, which fundamentally altered the synthetic morphology of the earlier Prākrits in the direction of the analytic typology of New Indo-Aryan (NIA). Speaking holistically, the “accusative alignment” of OIA (Vedic Sanskrit) was restructured as an “ergative alignment” in Western IA languages, and it is precisely during the Late MIA period (ca. 5th–12th centuries CE) that this realignment took place.
(a) We shall start with the restructuring of the nominal case system in terms of the reduction of the number of cases from seven to four. This phonologically motivated process resulted ultimately in the rise of the binary distinction of the “absolutive” versus “oblique” case at the end of the MIA period. (b) The crucial role of animacy in the restructuring of the pronominal system and the rise of the “double-oblique” system in Ardha-Māgadhī and Western Apabhramśa will be explicated. (c) In the verbal system we witness complete remodeling of the aspectual system as a consequence of the loss of earlier synthetic forms expressing the perfective (Aorist) and “retrospective” (Perfect) aspect. Early Prākrits (Pāli) preserved their sigmatic Aorists (and the sigmatic Future) until late MIA centuries, while on the Iranian side the loss of the “sigmatic” aorist was accelerated in Middle Persian by the “weakening” of s > h > Ø. (d) The development and the establishment of “ergative alignment” at the end of the MIA period will be presented as a consequence of the above typological changes: the rise of the “absolutive” vs. “oblique” case system; the loss of the finite morphology of the perfective and retrospective aspect; and the recreation of the aspectual contrast of perfectivity by means of quasinominal (participial) forms. (e) Concurrently with the development toward analyticity in grammatical aspect, we witness the evolution of lexical aspect (Aktionsart) ushering in the florescence of “serial” verbs in New Indo-Aryan.
On the whole, a contingency view of alignment considers the increase in ergativity as a by-product of the restoration of the OIA aspectual triad: Imperfective–Perfective–Perfect (in morphological terms Present–Aorist–Perfect). The NIA Perfective and Perfect are aligned ergatively, while their finite OIA ancestors (Aorist and Perfect) were aligned accusatively. Detailed linguistic analysis of Middle Indo-Aryan texts offers us a unique opportunity for a deeper comprehension of the formative period of the NIA state of affairs.
The term “philosophy of language” is intrinsically paradoxical: it designates the main philosophical current of the 20th century, yet it lacks any univocal definition. While the emergence of this current was based on the idea that philosophical questions were only language problems that could be elucidated through logico-linguistic analysis, interest in this approach gave rise to philosophical theories that, although some of them share points of convergence, developed very different philosophical conceptions. The only constant across these theories is the recognition that this current of thought originated in the work of Gottlob Frege (b. 1848–d. 1925), thus marking what was to be called “the linguistic turn.” Despite the theoretical diversity within the philosophy of language, the history of this current can nevertheless be traced in four stages:
The first one began in 1892 with Frege’s paper “Über Sinn und Bedeutung” and aimed to clarify language by using the rules of logic. The Fregean principle underpinning this program was that we must banish psychological considerations from linguistic analysis in order to avoid associating the meaning of words with mental pictures or states. The work of Frege, Bertrand Russell (1872–1970), G. E. Moore (1873–1958), the Ludwig Wittgenstein of the Tractatus (1921), Rudolf Carnap (1891–1970), and Willard Van Orman Quine (1908–2000) is representative of this period. From this logicist point of view, the questions raised mainly concerned syntax and semantics, since the goal was to define a formalism able to represent the structure of propositions and to explain how language can describe the world by mirroring it. The problem specific to this period was therefore the function of representing the world by language, thus placing at the heart of the philosophical debate the notions of reference, meaning, and truth.
The second phase of the philosophy of language was adumbrated in the 1930s with the courses given by Wittgenstein (1889–1951) in Cambridge (The Blue and Brown Books), but it did not really take off until 1950–1960 with the work of Peter Strawson (1919–2006), the later Wittgenstein (Philosophical Investigations, 1953), John Austin (1911–1960), and John Searle (1932–). In spite of the very different approaches developed by these theorists, the two main ideas that characterized this period were: one, that only the examination of natural (also called “ordinary”) language can give access to an understanding of how language functions, and two, that the specificity of this language resides in its ability to perform actions. It was therefore no longer a question of analyzing language in logical terms, but rather of considering it in itself, by examining the meaning of statements as they are used in given contexts. In this perspective, the pivotal concepts explored by philosophers became those of (situated) meaning, felicity conditions, use, and context.
The beginning of the 1970s initiated the third phase of this movement by orienting research toward two quite distinct directions. The first, resulting from the work on proper names, natural-kind words, and indexicals undertaken by the logician-philosophers Saul Kripke (1940–), David Lewis (1941–2001), Hilary Putnam (1926–2016), and David Kaplan (1933–), brought credibility to the semantics of possible worlds. The second, conducted by Paul Grice (1913–1988) on human communicational rationality, harked back to the psychologism dismissed by Frege and conceived of the functioning of language as highly dependent on a theory of mind. The focus was then put on the inferences that the different protagonists in a linguistic exchange construct from the recognition of hidden intentions in the discourse of others. In this perspective, the concepts of implicitness, relevance, and cognitive efficiency became central and required a greater number of contextual parameters to account for them. In the wake of this research, many theorists turned to the philosophy of mind, as evidenced in the late 1980s by the work on relevance by Dan Sperber (1942–) and Deirdre Wilson (1941–).
The contemporary period, marked by the thinking of Robert Brandom (1950–) and Charles Travis (1943–), is characterized by its orientation toward a radical contextualism and a return to an inferentialism that draws strongly on Frege. Within these theoretical frameworks, the notions of truth and reference no longer fall within the field of semantics but rather of pragmatics. The emphasis is placed on the commitment that speakers make when they speak, as well as on their responsibility with respect to their utterances.
Silvio Moreira de Sousa, Johannes Mücke, and Philipp Krämer
As an institutionalized subfield of academic research, Creole studies (or Creolistics) emerged in the second half of the 20th century on the basis of pioneering works in the last decades of the 19th century and first half of the 20th century. Yet its research traditions—just like the Creole languages themselves—are much older and are deeply intertwined with the history of European colonialism, slavery, and Christian missionary activities all around the globe. Throughout the history of research, creolists focused on the emergence of Creole languages and their grammatical structures—often in comparison to European colonial languages. In connection with the observations in grammar and history, creolists discussed theoretical matters such as the role of language acquisition in creolization, the status of Creoles among the other languages in the world, and the social conditions in which they are or were spoken. These discussions molded the way in which the acquired knowledge was transmitted to the following generations of creolists.
The grammatization of European vernacular languages began in the Late Middle Ages and Renaissance and continued up until the end of the 18th century. Through this process, grammars were written for the vernaculars and, as a result, the vernaculars were able to establish themselves in important areas of communication. Vernacular grammars largely followed the example of those written for Latin, using Latin descriptive categories without fully adapting them to the vernaculars. In accord with the Greco-Latin tradition, the grammars typically contain sections on orthography, prosody, morphology, and syntax, with the most space devoted to the treatment of word classes in the section on “etymology.” The earliest grammars of vernaculars had two main goals: on the one hand, making the languages described accessible to non-native speakers, and on the other, supporting the learning of Latin grammar by teaching the grammar of speakers’ native languages. Initially, it was considered unnecessary to engage with the grammar of native languages for their own sake, since they were thought to be acquired spontaneously. Only gradually did a need develop for normative grammars that sought to codify languages. This development relied on an awareness of the value of vernaculars that attributed a certain degree of perfection to them. Grammars of indigenous languages in colonized areas were based on those of European languages and today offer information about the early state of those languages; indeed, they are sometimes the only sources for now extinct languages. Grammars of vernaculars came into being in the contrasting contexts of general grammar and the grammars of individual languages, between grammar as science and as art, and between description and standardization.
In the standardization of languages, the guiding principle could either be that of anomaly, which took a particular variety of a language as the basis of the description, or that of analogy, which permitted interventions into a language aimed at making it more uniform.
Ans van Kemenade
The status of English in the early 21st century makes it hard to imagine that the language started out as an assortment of North Sea Germanic dialects spoken in parts of England only by immigrants from the continent. Itself soon under threat, first from the language(s) spoken by Viking invaders, then from French as spoken by the Norman conquerors, English continued to thrive as an essentially West-Germanic language that did, however, undergo some profound changes resulting from contact with Scandinavian and French. A further decisive period of change is the late Middle Ages, which saw a tremendous societal scale-up that triggered pervasive multilingualism. These repeated layers of contact between different populations, first locally, then nationally, followed by standardization and 18th-century codification, metamorphosed English into a language closely related to, yet quite distinct from, its closest relatives Dutch and German in nearly all language domains, not least in word order, grammar, and pronunciation.
The basic vocabulary of Portuguese—the second largest Romance language in terms of speakers (about 210 million as of 2017)—comes from (vulgar) Latin, which itself incorporated a certain amount of so-called substratum and superstratum words. Whereas the former were adopted in a situation of language contact between Latin and the languages of the conquered peoples inhabiting the Iberian Peninsula, the latter are Germanic loans brought mainly by the Visigoths. From 711 onward, until the end of the Middle Ages, Arabic played a major role in the Peninsula, contributing about 1,000 words that are common in Modern Portuguese. (Classical) Latin and Greek were other sources for lexical enrichment, especially in the 15th and 16th centuries as well as in the 18th and 19th centuries. Contact with other European languages—Romance and Germanic (especially English, and to a lesser extent German)—led to borrowings in several thematic fields reflecting the economic, cultural, and scientific influence that emanated from the respective language communities. In the course of colonial expansion, Portuguese came into contact with several African, Asian, and Amerindian languages from which it borrowed words for concepts and realia unknown to the Western world.
Ever since the fundamental studies carried out by the great German Romanist Max Leopold Wagner (b. 1880–d. 1962), the acknowledged founder of scientific research on Sardinian, the lexicon has been, and still is, one of the most investigated and best-known areas of the Sardinian language.
Several substrate components stand out in the Sardinian lexicon against a fundamental layer of clearly Latin origin. The so-called Paleo-Sardinian layer is particularly intriguing. This is a conventional label for the linguistic varieties spoken in Sardinia in the prehistoric and protohistoric ages. Indeed, the relatively large number of words (toponyms in particular) which can be traced back to this substrate clearly distinguishes the Sardinian lexicon within the panorama of the Romance languages. As for the other Pre-Latin substrata, the Phoenician-Punic presence mainly (although not exclusively) affected southern and western Sardinia, where we find the highest concentration of Phoenician-Punic loanwords.
On the other hand, recent studies have shown that the Latinization of Sardinia was more complex than once thought. In particular, the alleged archaic nature of some features of Sardinian has been questioned.
Moreover, research carried out in recent decades has underlined the importance of the Greek Byzantine superstrate, which has actually left far more evident lexical traces than previously thought. Finally, from the late Middle Ages onward, the contributions from the early Italian, Catalan, and Spanish superstrates, as well as from modern and contemporary Italian, have substantially reshaped the modern-day profile of the Sardinian lexicon. In these cases too, more recent research has shown a deeper impact of these components on the Sardinian lexicon, especially as regards the influence of Italian.
David R. Mortensen
Hmong-Mien (also known as Miao-Yao) is a bipartite family of minority languages spoken primarily in China and mainland Southeast Asia. The two branches, called Hmongic and Mienic by most Western linguists and Miao and Yao by Chinese linguists, are both compact groups (phylogenetically if not geographically). Although they are uncontroversially distinct from one another, they bear a strong mutual affinity. But while their internal relationships are reasonably well established, there is no unanimity regarding their wider genetic affiliations, with many Chinese scholars insisting on Hmong-Mien membership in the Sino-Tibetan superfamily, some Western scholars suggesting a relationship to Austronesian and/or Tai-Kradai, and still others suggesting a relationship to Mon-Khmer. A plurality view appears to be that Hmong-Mien bears no special relationship to any surviving language family.
Hmong-Mien languages are typical—in many respects—of the non-Sino-Tibetan languages of Southern China and mainland Southeast Asia. However, they possess a number of properties that make them stand out. Many neighboring languages are tonal, but Hmong-Mien languages are, on average, more so (in terms of the number of tones). While some other languages in the area have small-to-medium consonant inventories, Hmong-Mien languages (and especially Hmongic languages) often have very large consonant inventories with rare classes of sounds like uvulars and voiceless sonorants. Furthermore, while many of their neighbors are morphologically isolating, few language groups display as little affixation as Hmong-Mien languages. They are largely head-initial, but they deviate from this generalization in their genitive-noun constructions and their relative clauses (which vary in position and structure, sometimes even within the same language).
Hokan is a linguistic stock or phylum based on a series of hypotheses about deeper genetic relationships among languages that extend geographically from Northern California to Nicaragua. Following the general effort to link the vast number of Native American languages genetically and to reduce them to a few superstocks, Dixon and Kroeber first proposed the Hokan stock in 1913, to include several California indigenous languages: Karuk, Chimariko, Shastan, Palaihnihan (Atsugewi and Achumawi), Pomoan, Yana, and later Esselen and Yuman. The name Hokan stems from the Atsugewi word for “two”: hoqi. While the first proposals by Dixon and Kroeber rested on very limited cognate sets comprising only five words, later assessments by Sapir included hundreds of putative cognate sets and analyses of Hokan morphosyntax. By 1925, Sapir had further included Washo, Salinan, Seri, Chumashan, Tequistlatecan, and Subtiaba-Tlapanec in the stock, grouping them as the Southern Hokan branch.
Throughout the 20th century, scholars sought additional evidence for the stock as more, and more refined, data on the languages became available. A number of languages were added, and earlier proposals were abandoned. A new surge in work on individual California indigenous languages in the 1950s and 1960s prompted a string of studies conducting binary comparisons. This renewed interest inspired a series of Hokan conferences held until the 1990s. A more recent comprehensive assessment of the entire stock was undertaken by Kaufman in 1988. Applying rigorous analysis and including only those languages for which he encountered substantial evidence, Kaufman proposed sixteen geographically clustered classificatory units for Hokan. Kaufman’s Hokan stock also includes Coahuilteco and Comecrudan in Mexico and Jicaque in Honduras.
Although Hokan was widely studied in the 20th century, and many scholars presented what they thought to be supporting evidence, it is far from being an established genetic unit. In fact, many scholars today treat it with considerable skepticism. One major challenge, as with any phylum-level affiliation, is its time depth: Proto-Hokan is thought to be at least as ancient as Proto-Indo-European. Moreover, many of the languages were spoken in geographically contiguous areas, with speakers being multilingual and in close contact for an extended period of time, as is the case in Northern California. This suggests considerable language contact effects and complicates the distinction between true cognates and ancient borrowings. Many of the languages involved further show similarities in grammatical structure as a result of language contact.
Hokan languages stretch across California, Nevada, South Texas, various parts of Mexico, Honduras, and Nicaragua and display notable structural differences. Phonologically, the languages show great variation including small and large phoneme inventories and different phonological processes. Typologically, they are equally diverse, but many are considered polysynthetic to varying degrees. Morphosyntactic and grammatical similarities are evident especially among languages spoken in Northern California. These resemblances include sets of lexical affixes with similar meanings and affinities in core argument patterns.
Interest in the linguistics of humor is widespread and dates back to classical times. Several theoretical models have been proposed to describe and explain the function of humor in language. The most widely adopted one, the semantic-script theory of humor, was presented by Victor Raskin in 1985. Its expansion, to incorporate a broader gamut of information, is known as the General Theory of Verbal Humor. Other approaches are emerging, especially in cognitive and corpus linguistics. Within applied linguistics, the predominant approach is analysis of conversation and discourse, with a focus on the disparate functions of humor in conversation. Speakers may use humor pro-socially, to build in-group solidarity, or anti-socially, to exclude and denigrate the targets of the humor. Most of the research has focused on how humor is co-constructed and used among friends, and how speakers support it. Increasingly, corpus-supported research is beginning to reshape the field, introducing quantitative concerns, as well as multimodal data and analyses. Overall, the linguistics of humor is a dynamic and rapidly changing field.
Irit Meir and Oksana Tkachman
Iconicity is a relationship of resemblance or similarity between the two aspects of a sign: its form and its meaning. An iconic sign is one whose form resembles its meaning in some way. The opposite of iconicity is arbitrariness. In an arbitrary sign, the association between form and meaning is based solely on convention; there is nothing in the form of the sign that resembles aspects of its meaning. The Hindu-Arabic numerals 1, 2, 3 are arbitrary, because their current form does not correspond to any aspect of their meaning. In contrast, the Roman numerals I, II, III are iconic, because the number of occurrences of the sign I correlates with the quantity that the numerals represent. Because iconicity has to do with the properties of signs in general and not only those of linguistic signs, it plays an important role in the field of semiotics—the study of signs and signaling. However, language is the most pervasive symbolic communicative system used by humans, and the notion of iconicity plays an important role in characterizing the linguistic sign and linguistic systems. Iconicity is also central to the study of literary uses of language, such as prose and poetry.
There are various types of iconicity, since the form of a sign may resemble aspects of its meaning in several ways: it may create a mental image of the concept (imagic iconicity), or its structure and the arrangement of its elements may resemble the structural relationship between components of the concept represented (diagrammatic iconicity). An example of the first type is the word cuckoo, whose sounds resemble the call of the bird, or a sign such as RABBIT in Israeli Sign Language, whose form—the hands representing the rabbit's long ears—resembles a visual property of that animal. An example of diagrammatic iconicity is vēnī, vīdī, vīcī, where the order of clauses in a discourse is understood as reflecting the sequence of events in the world.
Iconicity is found on all linguistic levels: phonology, morphology, syntax, semantics, and discourse. It is found both in spoken languages and in sign languages. However, sign languages, because of the visual-gestural modality through which they are transmitted, are much richer in iconic devices, and therefore offer a rich array of topics and perspectives for investigating iconicity, and the interaction between iconicity and language structure.
Kimi Akita and Mark Dingemanse
Ideophones, also termed mimetics or expressives, are marked words that depict sensory imagery. They are found in many of the world’s languages, and sizable lexical classes of ideophones are particularly well-documented in the languages of Asia, Africa, and the Americas. Ideophones are not limited to onomatopoeia like meow and smack but cover a wide range of sensory domains, such as manner of motion (e.g., plisti plasta ‘splish-splash’ in Basque), texture (e.g., tsaklii ‘rough’ in Ewe), and psychological states (e.g., wakuwaku ‘excited’ in Japanese). Across languages, ideophones stand out as marked words due to special phonotactics, expressive morphology including certain types of reduplication, and relative syntactic independence, in addition to production features like prosodic foregrounding and common co-occurrence with iconic gestures.
Three intertwined issues have been repeatedly debated in the century-long literature on ideophones. (a) Definition: Isolated descriptive traditions and cross-linguistic variation have sometimes obscured a typologically unified view of ideophones, but recent advances show the promise of a prototype definition of ideophones as conventionalized depictions in speech, with room for language-specific nuances. (b) Integration: The variable integration of ideophones across linguistic levels reveals an interaction between expressiveness and grammatical integration, and has important implications for how to conceive of dependencies between linguistic systems. (c) Iconicity: Ideophones form a natural laboratory for the study of iconic form-meaning associations in natural languages, and converging evidence from corpus and experimental studies suggests important developmental, evolutionary, and communicative advantages of ideophones.
M. Teresa Espinal and Jaume Mateu
Idioms, conceived as fixed multi-word expressions that conceptually encode non-compositional meaning, are linguistic units that raise a number of questions relevant in the study of language and mind (e.g., whether they are stored in the lexicon or in memory, whether they have internal or external syntax similar to other expressions of the language, whether their conventional use is parallel to their non-compositional meaning, whether they are processed in similar ways to regular compositional expressions of the language, etc.). Idioms show similarities to and differences from other sorts of formulaic expressions; the linguistic literature has characterized the main types of idioms and the dimensions along which idiomaticity lies. Syntactically, idioms manifest a set of syntactic properties, as well as a number of constraints that account for their internal and external structure. Semantically, idioms present an interesting behavior with respect to a set of semantic properties that account for their meaning (i.e., conventionality, compositionality, and transparency, as well as aspectuality, referentiality, thematic roles, etc.). The study of idioms has been approached from lexicographic and computational, as well as from psycholinguistic and neurolinguistic perspectives.
Noun incorporation (NI) is a grammatical construction where a nominal, usually bearing the semantic role of an object, has been incorporated into a verb to form a complex verb or predicate. Traditionally, incorporation was considered to be a word formation process, similar to compounding or cliticization. The fact that a syntactic entity (object) was entering into the lexical process of word formation was theoretically problematic, leading to many debates about the true nature of NI as a lexical or syntactic process. The analytic complexity of NI is compounded by the clear connections between NI and other processes such as possessor raising, applicatives, and classification systems, and by its relation with case, agreement, and transitivity. In some cases, it was noted that no morpho-phonological incorporation is discernible beyond perhaps adjacency and a reduced left periphery for the noun. Such cases were termed pseudo noun incorporation, as they exhibit many properties of NI, minus any actual morpho-phonological incorporation. On the semantic side, it was noted that NI often correlates with a particular interpretation in which the noun is less referential and the predicate is more general. This led semanticists to group together all phenomena with similar semantics, whether or not they involve morpho-phonological incorporation. The status of cases of morpho-phonological NI that do not exhibit this characteristic semantics, i.e., where the incorporated nominal can be referential and the action is not general, remains a matter of debate. The interplay of phonology, morphology, syntax, and semantics that is found in NI, as well as its lexical overtones, has resulted in a wide range of analyses at all levels of the grammar. What all NI constructions share is that, according to various diagnostics, a thematic element, usually correlating with an internal argument, functions to a lesser extent as an independent argument and instead acts as part of a predicate.
In addition to cases of incorporation between verbs and internal arguments, there are also some cases of incorporation of subjects and adverbs, which remain less well understood.
Inflection is the systematic relation between words’ morphosyntactic content and their morphological form; as such, the phenomenon of inflection raises fundamental questions about the nature of morphology itself and about its interfaces. Within the domain of morphology proper, it is essential to establish how (or whether) inflection differs from other kinds of morphology and to identify the ways in which morphosyntactic content can be encoded morphologically. A number of different approaches to modeling inflectional morphology have been proposed; these tend to cluster into two main groups, those that are morpheme-based and those that are lexeme-based. Morpheme-based theories tend to treat inflectional morphology as fundamentally concatenative; they tend to represent an inflected word’s morphosyntactic content as a compositional summing of its morphemes’ content; they tend to attribute an inflected word’s internal structure to syntactic principles; and they tend to minimize the theoretical significance of inflectional paradigms. Lexeme-based theories, by contrast, tend to accord concatenative and nonconcatenative morphology essentially equal status as marks of inflection; they tend to represent an inflected word’s morphosyntactic content as a property set intrinsically associated with that word’s paradigm cell; they tend to assume that an inflected word’s internal morphology is neither accessible to nor defined by syntactic principles; and they tend to treat inflection as the morphological realization of a paradigm’s cells. Four important issues for approaches of either sort are the nature of nonconcatenative morphology, the incidence of extended exponence, the underdetermination of a word’s morphosyntactic content by its inflectional form, and the nature of word forms’ internal structure. The structure of a word’s inventory of inflected forms—its paradigm—is the locus of considerable cross-linguistic variation. 
In particular, the canonical relation of content to form in an inflectional paradigm is subject to a wide array of deviations, including inflection-class distinctions, morphomic properties, defectiveness, deponency, metaconjugation, and syncretism; these deviations pose important challenges for understanding the interfaces of inflectional morphology, and a theory’s resolution of these challenges depends squarely on whether that theory is morpheme-based or lexeme-based.
A fundamental question in epistemology is whether reasoning may be based on a priori knowledge, that is, knowledge that precedes and is independent of experience. In modern science, the concept of innateness has been associated with particular behaviors and types of knowledge that are supposedly present in the organism from birth (in fact, from fertilization), prior to any sensory experience with the environment.
This line of investigation has traditionally been linked to two general types of qualities: the first consists of instinctive and inflexible reflexes, traits, and behaviors, which are apparent in survival, mating, and rearing activities. The second relates to language and cognition, with certain concepts, ideas, propositions, and particular ways of mental computation suggested to be part of one’s biological makeup. While both types of innatism have a long history (debated, e.g., by Plato and Descartes), some bias appears to exist in favor of claims for inherent behavioral traits, which are typically accepted when satisfactory empirical evidence is provided. One famous example is Lorenz’s demonstration of imprinting, a natural phenomenon that obeys a predetermined mechanism and schedule (incubator-hatched goslings imprinted on Lorenz’s boots, the first moving object they encountered). Likewise, there seems to be little controversy regarding predetermined ways of organizing sensory information, as in the mind’s detection and classification of shapes and colors.
In contrast, the idea that certain types of abstract knowledge may be part of an organism’s biological endowment (i.e., not learned) is typically met with greater skepticism. The most influential and controversial claim for such innate knowledge in modern science is Chomsky’s nativist theory of Universal Grammar in language, which aims to define the extent to which human languages can vary, together with the famous Argument from the Poverty of the Stimulus. The main Chomskyan hypothesis is that all human beings share a preprogrammed linguistic infrastructure consisting of a finite set of general principles, which can generate (through combination or transformation) an infinite number of (only) grammatical sentences. Thus, the innate grammatical system constrains and structures the acquisition and use of all natural languages.
The Iroquoian languages are spoken today in New York State, Ontario, Quebec, Wisconsin, North Carolina, and Oklahoma. The languages share a relatively small segment inventory, a challenging accentual system, polysynthetic morphology, a complex system of pronominal affixes, an unusual kinship terminology, and a syntax that functions almost exclusively to combine the meaning of two expressions. Some of the languages have been documented since contact with Europeans in the 16th century. There exists substantial scholarly linguistic work on most of the languages, and solid teaching materials continue to be developed.
Research in neurolinguistics examines how language is organized and processed in the human brain. The findings from neurolinguistic studies on language can inform our understanding of the basic ingredients of language and the operations they undergo. In the domain of the lexicon, a major debate concerns whether and to what extent the morpheme serves as a basic unit of linguistic representation, and in turn whether and under what circumstances the processing of morphologically complex words involves operations that identify, activate, and combine morpheme-level representations during lexical processing. Models that posit some role for morphemes argue that complex words are processed via morphological decomposition and composition either in the general case (full-decomposition models) or only under certain circumstances (dual-route models); models that posit no role for morphemes (non-morphological models) argue instead that complex words are related to their constituents not via morphological identity, but either via associations among whole-word representations or via similarity in formal and/or semantic features. Two main approaches to investigating the role of morphemes from a neurolinguistic perspective are neuropsychology, in which complex word processing is typically investigated in cases of brain insult or neurodegenerative disease, and brain imaging, which makes it possible to examine the temporal dynamics and neuroanatomy of complex word processing as it occurs in the brain. Neurolinguistic studies on morphology have examined whether the processing of complex words involves brain mechanisms that rapidly segment the input into potential morpheme constituents, how and under what circumstances morpheme representations are accessed from the lexicon, and how morphemes are combined to form complex morphosyntactic and morpho-semantic representations.
Findings from this literature broadly converge in suggesting a role for morphemes in complex word processing, although questions remain regarding the precise time course by which morphemes are activated, the extent to which morpheme access is constrained by semantic or form properties, and the brain mechanisms by which morphemes are ultimately combined into complex representations.