1–20 of 37 Results

Keywords: lexicalism

Article

Psycholinguistics is the study of how language is acquired, represented, and used by the human mind; it draws on knowledge about both language and cognitive processes. A central topic of debate in psycholinguistics concerns the balance between storage and processing. This debate is especially evident in research on morphology, the study of word structure, and several theoretical issues have arisen concerning how (or whether) morphology is represented and what function it serves in the processing of complex words. Five theoretical approaches have emerged that differ substantially in the emphasis placed on the role of morphemic representations during the processing of morphologically complex words. The first approach minimizes processing by positing that all words, even morphologically complex ones, are stored and recognized as whole units, without the use of morphemic representations. The second approach posits that words are represented and processed in terms of morphemic units. The third approach is a mixture of the first two and posits that a whole-word access route and a decomposition route operate in parallel. A fourth approach posits that both whole-word representations and morphemic representations are used, and that these two types of information interact. A fifth approach proposes that morphology is not explicitly represented, but rather emerges from the co-activation of orthographic/phonological representations and semantic representations. These competing approaches have been evaluated using a wide variety of empirical methods examining, for example, morphological priming, the role of constituent and word frequency, and the role of morphemic position. For the most part, the evidence points to the involvement of morphological representations during the processing of complex words. However, the specific way in which these representations are used is not yet fully known.
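
To make the contrast concrete, the sketch below caricatures the third, dual-route approach as a race between two lookup procedures. It is a minimal toy, not any published model: the items, frequencies, and cost constants are invented, and real models state processing times very differently.

```python
# Toy "race" model of a dual-route architecture: a whole-word route and a
# decomposition route run in parallel, and the faster route drives
# recognition. All items, frequencies, and cost constants are invented.

WHOLE_WORD_FREQ = {"walked": 90, "walk": 500, "government": 300, "govern": 40}
SUFFIXES = ("ed", "ment", "ing", "s")

def whole_word_time(word):
    """Whole-word route: higher word frequency means faster access."""
    freq = WHOLE_WORD_FREQ.get(word)
    return None if freq is None else 1000 / freq

def decomposition_time(word):
    """Decomposition route: strip a suffix, then access the stem."""
    for suffix in SUFFIXES:
        if word.endswith(suffix):
            stem = word[: -len(suffix)]
            if stem in WHOLE_WORD_FREQ:
                return 5 + 1000 / WHOLE_WORD_FREQ[stem]  # 5 = fixed parsing cost
    return None

def recognize(word):
    """Return the winning route and its (arbitrary-unit) finishing time.
    Assumes at least one route succeeds for the given word."""
    candidates = {"whole-word": whole_word_time(word),
                  "decomposition": decomposition_time(word)}
    route = min((r for r in candidates if candidates[r] is not None),
                key=lambda r: candidates[r])
    return route, round(candidates[route], 1)

print(recognize("walked"))      # decomposition wins: high-frequency stem "walk"
print(recognize("government"))  # whole-word wins: high whole-word frequency
```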

Article

Agustín Vicente and Ingrid L. Falkum

Polysemy is the phenomenon whereby a single word form is associated with two or more related senses. It is distinguished from monosemy, where one word form is associated with a single meaning, and from homonymy, where a single word form is associated with two or more unrelated meanings. Although the distinctions between polysemy, monosemy, and homonymy may seem clear at an intuitive level, they have proven difficult to draw in practice. Polysemy proliferates in natural language: virtually every word is polysemous to some extent. Still, the phenomenon has been largely ignored in the mainstream linguistics literature and in related disciplines such as philosophy of language. However, polysemy is of relevance to linguistic and philosophical debates regarding lexical meaning representation, compositional semantics, and the semantics–pragmatics divide. Early accounts treated polysemy in terms of sense enumeration: each sense of a polysemous expression was represented individually in the lexicon, so that polysemy and homonymy were treated on a par. This approach has been strongly criticized on both theoretical and empirical grounds. Since at least the 1990s, most researchers have converged on the hypothesis that the senses of at least many polysemous expressions derive from a single meaning representation, though the status of this representation is a matter of lively debate: Are the lexical representations of polysemous expressions informationally poor and underspecified with respect to their different senses? Or do they have to be informationally rich in order to store and generate all these polysemous senses? Alternatively, senses might be computed from a literal, primary meaning via semantic or pragmatic mechanisms such as coercion, modulation, or ad hoc concept construction (including metaphorical and metonymic extension), mechanisms that also appear to play a role in explaining how polysemy arises and how it is implicated in lexical semantic change.

Article

The category of Personal/Participant/Inhabitant derived nouns comprises a conglomeration of derived nouns that denote, among others, agents, instruments, patients/themes, inhabitants, and followers of a person. Based on the thematic relations between the derived noun and its base lexeme, Personal/Participant/Inhabitant nouns can be classified into two subclasses. The first subclass comprises deverbal nouns that carry thematic readings (e.g., driver). The second subclass consists of derived nouns with athematic readings (e.g., Marxist). The examination of the category of Personal/Participant/Inhabitant nouns allows one to delve deeply into the study of multiplicity of meaning in word formation and the factors that bear on the readings of derived words. These factors range from the historical mechanisms that lead to multiplicity of meaning and the lexical-semantic properties of the bases on which derived nouns are built, to the syntactic context in which derived nouns occur and the pragmatic-encyclopedic facets of both the base and the derived lexeme.

Article

Maria Gouskova

Phonotactics is the study of restrictions on possible sound sequences in a language. In any language, some phonotactic constraints can be stated without reference to morphology, but many of the more nuanced phonotactic generalizations do make use of morphosyntactic and lexical information. At the most basic level, many languages mark the edges of words in some phonological way. Different phonotactic constraints hold of sounds that belong to the same morpheme as opposed to sounds that are separated by a morpheme boundary. Different phonotactic constraints may apply to morphemes of different types (such as roots versus affixes). Phonotactic shape may also correlate with a morpheme's morphosyntactic and phonological behavior, such as its syntactic category, declension class, or etymological origin. Approaches to the interaction between phonotactics and morphology address two questions: (1) how to account for rules that are sensitive to morpheme boundaries and structure, and (2) how to determine the status of phonotactic constraints associated with only some morphemes. Theories differ as to how much morphological information phonology is allowed to access. In some theories of phonology, any reference to the specific identities or subclasses of morphemes would exclude a rule from the domain of phonology proper. Such rules are either part of the morphology or are not given the status of a rule at all. Other theories allow the phonological grammar to refer to detailed morphological and lexical information. Depending on the theory, phonotactic differences between morphemes may receive direct explanations or be seen as the residue of historical change rather than as grammatical knowledge in the speaker's mind.
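
As a concrete illustration of morphology-sensitive phonotactics, the sketch below encodes one toy, English-like constraint: geminate (doubled) consonants are banned inside a single morpheme but tolerated across a morpheme boundary (compare un+natural). Orthography stands in for phonological transcription here, and the rule is deliberately simplified.

```python
# A minimal morphology-sensitive phonotactic check. Toy rule: identical
# adjacent consonants are ill-formed inside one morpheme but fine when
# they straddle a morpheme boundary. Letters stand in for phonemes.

VOWELS = set("aeiou")

def geminate_inside_morpheme(morphemes):
    """Return True if any single morpheme contains a doubled consonant."""
    for m in morphemes:
        for a, b in zip(m, m[1:]):
            if a == b and a not in VOWELS:
                return True
    return False

print(geminate_inside_morpheme(["un", "natural"]))  # False: n+n straddles a boundary
print(geminate_inside_morpheme(["narral"]))         # True: geminate inside one morpheme
```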

Article

Compound and complex predicates—predicates that consist of two or more lexical items and function as the predicate of a single sentence—present an important class of linguistic objects that pertain to an enormously wide range of issues in the interactions of morphology, phonology, syntax, and semantics. Japanese makes extensive use of compounding to expand a single verb into a complex one. These compounding processes range over multiple modules of the grammatical system, thus straddling the borders between morphology, syntax, phonology, and semantics. In terms of degree of phonological integration, two types of compound predicates can be distinguished. In the first type, called tight compound predicates, two elements from the native lexical stratum are tightly fused and inflect as a whole for tense. In this group, Verb-Verb compound verbs such as arai-nagasu [wash-let.flow] ‘to wash away’ and hare-agaru [sky.be.clear-go.up] ‘for the sky to clear up entirely’ are preponderant in number and productivity over Noun-Verb compound verbs such as tema-doru [time-take] ‘to take a lot of time (to finish).’ The second type, called loose compound predicates, takes the form of “Noun + Predicate (Verbal Noun [VN] or Adjectival Noun [AN]),” as in post-syntactic compounds like [sinsya : koonyuu] no okyakusama ([new.car : purchase] GEN customers) ‘customer(s) who purchase(d) a new car,’ where the symbol “:” stands for a short phonological break. Remarkably, loose compounding allows combinations of a transitive VN with its agent subject (external argument), as in [Supirubaagu : seisaku] no eiga ([Spielberg : produce] GEN film) ‘a film/films that Spielberg produces/produced’—a pattern that is illegitimate in tight compounds and has in fact been considered universally impossible in verbal compounding and noun incorporation in the world’s languages. In addition to a huge variety of tight and loose compound predicates, Japanese has a further class of syntactic constructions that function as complex predicates as a whole. Typical examples are the light verb construction, where a clause headed by a VN is followed by the light verb suru ‘do,’ as in Tomodati wa sinsya o koonyuu (sae) sita [friend TOP new.car ACC purchase (even) did] ‘My friend (even) bought a new car,’ and the human physical attribute construction, as in Sensei wa aoi me o site-iru [teacher TOP blue eye ACC do-ing] ‘My teacher has blue eyes.’ In these constructions, the nominal phrases immediately preceding the verb suru are semantically characterized as indefinite and non-referential and reject syntactic operations such as movement and deletion. The semantic indefiniteness and syntactic immobility of the NPs involved are also observed with a construction composed of a human subject and the verb aru ‘be,’ as in Gakkai ni wa oozei no sankasya ga atta ‘There was a large number of participants at the conference.’ The constellation of such “word-like” properties shared by these compound and complex predicates poses challenging problems for current theories of morphology-syntax-semantics interactions with regard to such topics as lexical integrity, morphological compounding, syntactic incorporation, semantic incorporation, pseudo-incorporation, and indefinite/non-referential NPs.

Article

Multi-word expressions are linguistic objects formed by two or more words that behave like a ‘unit’ by displaying formal and/or functional idiosyncratic properties with respect to free word combinations. They include an extremely varied set of items (from idioms to collocations, from formulae to sayings) which have been the privileged subject matter of fields such as phraseology, lexicology, lexicography, and computational linguistics. Far from being a marginal phenomenon, multi-word expressions are ubiquitous and pervasive: some estimate that they are as numerous as words in some languages, which makes them as central an issue as words for the understanding of human language. However, their relation to words, and to morphology, is far less explored, not to say neglected, especially in terms of demarcation, competition, and cross-linguistic variation.

Article

The central goal of the Lexical Semantic Framework (LSF) is to characterize the meaning of simple lexemes and affixes and to show how these meanings can be integrated in the creation of complex words. LSF offers a systematic treatment of issues that figure prominently in the study of word formation, such as the polysemy question, the multiple-affix question, the zero-derivation question, and the question of form–meaning mismatches. LSF has its source in a confluence of research approaches that follow a decompositional approach to meaning and thus defines simple lexemes and affixes by way of a systematic representation, achieved via a constrained formal language that enforces consistency of annotation. Lexical-semantic representations in LSF consist of two parts: the Semantic/Grammatical Skeleton and the Semantic/Pragmatic Body (henceforth ‘skeleton’ and ‘body,’ respectively). The skeleton comprises features that are of relevance to the syntax. These features act as functions and may take arguments. Functions and arguments of a skeleton are hierarchically arranged. The body encodes all those aspects of meaning that are perceptual, cultural, and encyclopedic. Features in LSF are used in (a) a cross-categorial, (b) an equipollent, and (c) a privative way. This means that they are used to account for the distinction between the major ontological categories, may have a binary (i.e., positive or negative) value, and may or may not form part of the skeleton of a given lexeme. In order to account for the fact that several distinct parts integrate into a single referential unit that projects its arguments to the syntax, LSF makes use of the Principle of Co-indexation. Co-indexation ties together the arguments that come with the different parts of a complex word so as to yield only those arguments that are syntactically active. LSF has an important impact on the study of the morphology–lexical semantics interface and provides a unitary theory of meaning in word formation.
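
The sketch below is a schematic, unofficial rendering of how co-indexation might be mechanized, loosely inspired by familiar analyses of agent nouns like driver: the affix contributes a referential argument that is co-indexed with the base verb's highest argument, leaving only the remaining argument syntactically active. All class names, feature labels, and argument labels are illustrative stand-ins, not the framework's notation.

```python
# Schematic stand-in for an LSF-style skeleton and co-indexation.
# Feature strings and argument labels are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Skeleton:
    """A bare-bones skeleton: a feature list plus ordered argument slots."""
    features: list
    args: list = field(default_factory=list)

# Base verb "drive": a dynamic situation with two open arguments
# (x = the highest argument, y = the thing driven).
drive = Skeleton(features=["+dynamic"], args=["x", "y"])

# Affix "-er": a material, dynamic entity contributing one referential
# argument "R" that awaits co-indexation.
er = Skeleton(features=["+material", "dynamic"], args=["R"])

def coindex(affix, base):
    """Tie the affix's argument to the base's highest argument. Co-indexed
    arguments count once, so only the base's remaining arguments stay
    syntactically active."""
    indexing = {affix.args[0]: base.args[0]}
    active = base.args[1:]
    return indexing, active

indexing, active = coindex(er, drive)
print(indexing)  # {'R': 'x'}: "driver" names the highest argument of driving
print(active)    # ['y']: the driven entity remains projectable to syntax
```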

Article

Research in neurolinguistics examines how language is organized and processed in the human brain. The findings from neurolinguistic studies can inform our understanding of the basic ingredients of language and the operations they undergo. In the domain of the lexicon, a major debate concerns whether and to what extent the morpheme serves as a basic unit of linguistic representation, and in turn whether and under what circumstances the processing of morphologically complex words involves operations that identify, activate, and combine morpheme-level representations during lexical processing. Models positing some role for morphemes argue that complex words are processed via morphological decomposition and composition either in the general case (full-decomposition models) or only under certain circumstances (dual-route models), while other models posit no role for morphemes (non-morphological models), arguing instead that complex words are related to their constituents not via morphological identity, but either via associations among whole-word representations or via similarity in formal and/or semantic features. Two main approaches to investigating the role of morphemes from a neurolinguistic perspective are neuropsychology, in which complex word processing is typically investigated in cases of brain insult or neurodegenerative disease, and brain imaging, which makes it possible to examine the temporal dynamics and neuroanatomy of complex word processing as it occurs in the brain. Neurolinguistic studies on morphology have examined whether the processing of complex words involves brain mechanisms that rapidly segment the input into potential morpheme constituents, how and under what circumstances morpheme representations are accessed from the lexicon, and how morphemes are combined to form complex morphosyntactic and morpho-semantic representations. Findings from this literature broadly converge in suggesting a role for morphemes in complex word processing, although questions remain regarding the precise time course by which morphemes are activated, the extent to which morpheme access is constrained by semantic or form properties, and the brain mechanisms by which morphemes are ultimately combined into complex representations.

Article

Erica H. Wojcik, Irene de la Cruz-Pavía, and Janet F. Werker

Language is a structured form of communication that is unique to humans. Within the first few years of life, typically developing children can understand and produce full sentences in their native language or languages. For centuries, philosophers, psychologists, and linguists have debated how we acquire language with such ease and speed. Central to this debate has been whether the learning process is driven by innate capacities or information in the environment. In the field of psychology, researchers have moved beyond this dichotomy to examine how perceptual and cognitive biases may guide input-driven learning and how these biases may change with experience. There is evidence that this integration permeates the learning and development of all aspects of language—from sounds (phonology), to the meanings of words (lexical-semantics), to the forms of words and the structure of sentences (morphosyntax). For example, in the area of phonology, newborns’ bias to attend to speech over other signals facilitates early learning of the prosodic and phonemic properties of their native language(s). In the area of lexical-semantics, infants’ bias to attend to novelty aids in mapping new words to their referents. In morphosyntax, infants’ sensitivity to vowels, repetition, and phrase edges guides statistical learning. In each of these areas, too, new biases come into play throughout development, as infants gain more knowledge about their native language(s).

Article

A root is a fundamental minimal unit in words. Some languages do not allow their roots to appear on their own, as in the Semitic languages, where roots consist of sequences of consonants that become stems or words by virtue of vowel insertion. Other languages appear to allow roots to surface without any additional morphology, as in English car. Roots are typically distinguished from affixes in that affixes need a host, although this varies across theories. Traditionally, roots have belonged to the domain of morphology. More recently, though, new theories have emerged according to which words are decomposed and subject to the same principles as sentences. This makes roots, rather than words, a fundamental building block of sentences. Contemporary syntactic theories of roots hold that they have little if any grammatical information, which raises the question of how they acquire their seemingly grammatical properties. A central issue has been whether roots have a lexical category inherently or whether they are given a lexical category in some other way. Two main theories are distributed morphology and the exoskeletal approach to grammar. The former holds that roots merge with categorizers in the grammar: a root combined with a nominal categorizer becomes a noun, and a root combined with a verbal categorizer becomes a verb. On the latter approach, roots are inserted into syntactic structures that carry the relevant category, meaning that the syntactic environment is created before roots are inserted into the structure. The two views make different predictions and differ in particular in their view of the status of empty categorizers.

Article

Salvador Valera

Polysemy and homonymy are traditionally described in the context of paradigmatic lexical relations. Unlike monosemy, in which one meaning is associated with one form, and unlike synonymy, in which one meaning is associated with several forms, in polysemy and homonymy several meanings are associated with one form. The classical view treats polysemy and homonymy as a binary opposition whereby the various meanings of one form are described either as belonging to one word (polysemy) or to as many words as there are meanings (homonymy). In this approach, the decision is made according to whether the meanings can be related to one source or to two different sources. This classical view no longer prevails in the literature as it did in the past. The most extreme revisions have questioned the descriptive synchronic difference between polysemy and homonymy, or have subsumed the distinction under a general use of one term (homophony) and then established distinctions within it, according to meaning and distribution. A more widespread reinterpretation of the classical opposition is in terms of a gradient along which polysemy and homonymy arrange themselves. Such a gradient arranges formally identical units at different points according to their degree of semantic proximity and degree of entrenchment (the latter understood as the degree to which a form recalls a semantic content and is activated in a speaker’s mind). The granularity of this type of gradient varies across specific proposals but, in essence, the representation ranges from the greatest and clearest proximity and the highest degree of entrenchment (polysemy) to the least and most obscure proximity and the lowest degree of entrenchment (homonymy).

Article

André Thibault and Nicholas LoVecchio

The Romance languages have been involved in many situations of language contact. While language contact is evident at all levels, its most visible effects on the system of the recipient language concern the lexicon. The relationship between language contact and the lexicon raises theoretical issues that are not always adequately addressed, including in etymological lexicography. First is the very notion of what constitutes “language contact.” Contrary to a somewhat dated view, language contact does not necessarily imply physical presence, contemporaneity, and orality: as far as the lexicon is concerned, contact can happen over time and space, particularly through written media. Depending on the extralinguistic circumstances at stake, language contact can be induced by diverse factors, leading to different forms of borrowing. The misleading terms borrowings or loans mask the reality that these are actually adapted imitations—whether formal, semantic, or both—of a foreign model. Likewise, the common Latin or Greek origins of a huge proportion of the Romance lexicon often obscure the real history of words. As these classical languages have contributed numerous technical and scientific terms, as well as a series of “roots,” words coined in one Romance language can easily be reproduced in any other. However, simply reducing a word’s etymology to the origin of its components (classical or otherwise), ignoring intermediate stages and possibly intermediating languages in the borrowing process, is a distortion of word history. To the extent that it is useful to refer to “internationalisms,” related words in different Romance languages merit careful, often arduous research in the process of identifying the actual origin of a given coinage. From a methodological point of view, it is crucial to distinguish between the immediate lending language and the oldest stage that can be identified, with the former being more relevant in a rigorous approach to comparative historical lexicology. Concrete examples from Ibero-Romania, Gallo-Romania, Italo-Romania, and Balkan-Romania highlight the variety of Romance loans and reflect the diverse historical factors particular to each linguistic community in which borrowing occurred.

Article

Andrew Hippisley

The morphological machinery of a language is at the service of syntax, but the service can be poor. A request may result in the wrong item (deponency), or in an item the syntax already has (syncretism), or in an abundance of choices (inflectional classes or morphological allomorphy). Network Morphology regulates the service by recreating the morphosyntactic space as a network of information-sharing nodes, where sharing is through inheritance, and inheritance can be overridden to allow for the regular, the irregular, and, crucially, the semiregular. The network expresses the system; the way the network can be accessed expresses possible deviations from the systematic. And so Network Morphology captures the semi-systematic nature of morphology. The key data used to illustrate Network Morphology are noun inflections in the West Slavonic language Lower Sorbian, which has three genders, a rich case system, and three numbers. These data allow us to observe how Network Morphology handles inflectional allomorphy, syncretism, feature neutralization, and irregularity. Latin deponent verbs are used to illustrate a Network Morphology account of morphological mismatch, where morphosyntactic features used in the syntax are expressed by morphology regularly used for different features. The analysis points to a separation of syntax and morphology in the architecture of the grammar. An account is given of Russian nominal derivation that assumes such a separation and is based on viewing derivational morphology as lexical relatedness. Areas of the framework receiving special focus include default inheritance, global and local inheritance, default inference, and orthogonal multiple inheritance. The various accounts presented are expressed in DATR, the lexical knowledge representation language due to Roger Evans and Gerald Gazdar.
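
Network Morphology analyses are written in DATR; the sketch below is not DATR but a minimal Python analogue of its central mechanism, default inheritance with overrides: a node supplies a value itself or defers to its parent, so generalizations live high in the network and exceptions override them locally. The node names and facts are invented for illustration.

```python
# Minimal analogue of default inheritance in an inheritance network.
# Node names, paths, and facts are invented; real analyses use DATR.

class Node:
    def __init__(self, facts, parent=None):
        self.facts = facts      # facts stated locally at this node
        self.parent = parent    # node inherited from by default

    def lookup(self, path):
        """Return the most specific value for `path`, walking up the network."""
        node = self
        while node is not None:
            if path in node.facts:
                return node.facts[path]
            node = node.parent
        raise KeyError(path)

# A tiny network: a general NOUN node, a declension class that adds a fact,
# and a lexeme that overrides one inherited default (an irregular plural).
NOUN   = Node({"syn cat": "noun", "mor plural": "stem + -s"})
CLASS1 = Node({"mor genitive": "stem + -a"}, parent=NOUN)
lex    = Node({"stem": "mysh", "mor plural": "suppletive"}, parent=CLASS1)

print(lex.lookup("syn cat"))       # "noun"       (inherited from NOUN)
print(lex.lookup("mor genitive"))  # "stem + -a"  (inherited from CLASS1)
print(lex.lookup("mor plural"))    # "suppletive" (local override wins)
```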

Article

The term “part of speech” is a traditional one that has been in use since grammars of Classical Greek (e.g., Dionysius Thrax) and Latin were compiled; for all practical purposes, it is synonymous with the term “word class.” The term refers to a system of word classes, whereby class membership depends on similar syntactic distribution and morphological similarity (as well as, in a limited fashion, on similarity in meaning—a point to which we shall return). By “morphological similarity,” reference is made to functional morphemes that are part of words belonging to the same word class. Some examples for both criteria follow: The fact that in English, nouns can be preceded by a determiner such as an article (e.g., a book, the apple) illustrates syntactic distribution. Morphological similarity among members of a given word class can be illustrated by the many adverbs in English that are derived by attaching the suffix –ly, that is, a functional morpheme, to an adjective (quick, quick-ly). A morphological test for nouns in English and many other languages is whether they can bear plural morphemes. Verbs can bear morphology for tense, aspect, and mood, as well as voice morphemes such as passive, causative, or reflexive, that is, morphemes that alter the argument structure of the verbal root. Adjectives typically co-occur with either bound or free morphemes that function as comparative and superlative markers. Syntactically, adjectives modify nouns, while adverbs modify word classes that are not nouns—for example, verbs and adjectives. Most traditional and descriptive approaches to parts of speech draw a distinction between major and minor word classes. The four parts of speech just mentioned—nouns, verbs, adjectives, and adverbs—constitute the major word classes, while a number of others, for example, adpositions, pronouns, conjunctions, determiners, and interjections, make up the minor word classes. Under some approaches, pronouns are included in the class of nouns, as a subclass. While the minor classes are probably not universal, (most of) the major classes are. It is largely assumed that nouns, verbs, and probably also adjectives are universal parts of speech, while adverbs might not constitute a universal word class. There are technical terms that are equivalent to the terms of major versus minor word class, such as content versus function words, lexical versus functional categories, and open versus closed classes, respectively. However, these correspondences might not always be one-to-one. More recent approaches to word classes do not recognize adverbs as belonging to the major classes; instead, adpositions are candidates for this status under some of these accounts, as in Jackendoff (1977). Under some other theoretical accounts, such as Chomsky (1981) and Baker (2003), only the three word classes noun, verb, and adjective are major or lexical categories. All of the accounts just mentioned are based on binary distinctive features; however, the features used differ from each other. While Chomsky uses the two category features [N] and [V], Jackendoff uses the features [Subj] and [Obj], among others, focusing on the ability of nouns, verbs, adjectives, and adpositions to take (directly, without the help of other elements) subjects (thus characterizing verbs and nouns) or objects (thus characterizing verbs and adpositions). Baker (2003), too, uses the property of taking subjects, but attributes it only to verbs. In Baker's approach, the distinctive feature of bearing a referential index characterizes nouns, and only nouns; adjectives are characterized by the absence of both of these distinctive features. Another important issue addressed by theoretical studies on lexical categories is whether those categories are formed pre-syntactically, in a morphological component of the lexicon, or whether they are constructed in the syntax or post-syntactically. Jackendoff (1977) is an example of a lexicalist approach to lexical categories, while Marantz (1997) and Borer (2003, 2005a, 2005b, 2013) represent accounts on which the roots of words are category-neutral and their membership in a particular lexical category is determined by their local syntactic context. Baker (2003) offers an account that combines properties of both approaches: words are built in the syntax and not pre-syntactically; however, roots do have category features that are inherent to them. Empirical phenomena such as phrasal affixation, phrasal compounding, and suspended affixation strongly suggest that a post-syntactic morphological component should be allowed, whereby “syntax feeds morphology.”
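
On the standard reading of Chomsky's (1981) feature system, the two binary features cross-classify the four lexical categories: noun [+N, -V], verb [-N, +V], adjective [+N, +V], and adposition [-N, -V]. The snippet below is simply that table restated as a lookup; it adds nothing beyond the feature assignments themselves.

```python
# The [±N, ±V] cross-classification of Chomsky (1981) as a lookup table.

CATEGORY = {
    ("+N", "-V"): "noun",
    ("-N", "+V"): "verb",
    ("+N", "+V"): "adjective",
    ("-N", "-V"): "adposition",
}

def category(n_feature, v_feature):
    """Map a pair of binary feature values to its lexical category."""
    return CATEGORY[(n_feature, v_feature)]

print(category("+N", "-V"))  # noun
print(category("-N", "-V"))  # adposition
```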

Article

The words and word-parts children acquire at different stages offer insights into how the mental lexicon might be organized. Children first identify ‘words,’ recurring sequences of sounds, in the speech stream, attach some meaning to them, and, later, analyze such words further into parts, namely stems and affixes. These are the elements they store in memory in order to recognize them on subsequent occasions. They also serve as target models when children try to produce those words themselves. When they coin words, they make use of bare stems, combine certain stems with each other, and sometimes add affixes as well. The options they choose depend on how much they need to add to coin a new word, which familiar elements they can draw on, and how productive that option is in the language. Children’s uses of stems and affixes in coining new words also reveal that they must be relying on one representation in comprehension and a different representation in production. For comprehension, they need to store information about the acoustic properties of a word, taking into account different occasions, different speakers, and different dialects, not to mention second-language speakers. For production, they need to work out which articulatory plan to follow in order to reproduce the target word. And they take time to get their production of a word aligned with the representation they have stored for comprehension. In fact, there is a general asymmetry here, with comprehension being ahead of production for children, and also being far more extensive than production, for both children and adults. Finally, as children add more words to their repertoires, they organize and reorganize their vocabulary into semantic domains. In doing this, they make use of pragmatic directions from adults that help them link related words through a variety of semantic relations.

Article

Katrin Erk

Computational semantics performs automatic meaning analysis of natural language. Research in computational semantics designs meaning representations and develops mechanisms for automatically assigning those representations and reasoning over them. Computational semantics is not a single monolithic task but consists of many subtasks, including word sense disambiguation, multi-word expression analysis, semantic role labeling, the construction of sentence semantic structure, coreference resolution, and the automatic induction of semantic information from data. The development of manually constructed resources has been vastly important in driving the field forward. Examples include WordNet, PropBank, FrameNet, VerbNet, and TimeBank. These resources specify the linguistic structures to be targeted in automatic analysis, and they provide high-quality human-generated data that can be used to train machine learning systems. Supervised machine learning based on manually constructed resources is a widely used technique. A second core strand has been the induction of lexical knowledge from text data. For example, words can be represented through the contexts in which they appear (called distributional vectors or embeddings), such that semantically similar words have similar representations. Or semantic relations between words can be inferred from patterns of words that link them. Wide-coverage semantic analysis always needs more data, both lexical knowledge and world knowledge, and automatic induction at least alleviates the problem. Compositionality is a third core theme: the systematic construction of structural meaning representations of larger expressions from the meaning representations of their parts. The representations typically use logics of varying expressivity, which makes them well suited to performing automatic inferences with theorem provers. Manual specification and automatic acquisition of knowledge are closely intertwined. Manually created resources are automatically extended or merged. The automatic induction of semantic information is guided and constrained by manually specified information, which is much more reliable. And for restricted domains, the construction of logical representations is learned from data. It is at the intersection of manual specification and machine learning that some of the current larger questions of computational semantics are located. For instance, should we build general-purpose semantic representations, or is lexical knowledge simply too domain-specific, and would we be better off learning task-specific representations every time? When performing inference, is it more beneficial to have the solid ground of a human-generated ontology, or is it better to reason directly with text snippets for more fine-grained and gradual inference? Do we obtain a better and deeper semantic analysis as we use better and deeper manually specified linguistic knowledge, or is the future in powerful learning paradigms that learn to carry out an entire task from natural language input and output alone, without pre-specified linguistic knowledge?

Article

Edward Vajda

Dene-Yeniseian is a proposed genealogical link between the widespread North American language family Na-Dene (Athabaskan, Eyak, Tlingit) and Yeniseian in central Siberia, represented today by the critically endangered Ket and several documented extinct relatives. The Dene-Yeniseian hypothesis is an old idea, but since 2006 new evidence supporting it has been published in the form of shared morphological systems and a modest number of lexical cognates showing interlocking sound correspondences. Recent data from human genetics and folklore studies also increasingly indicate the plausibility of a prehistoric (probably Late Pleistocene) connection between populations in northwestern North America and the traditionally Yeniseian-speaking areas of south-central Siberia. At present, Dene-Yeniseian cannot be accepted as a proven language family: the purported lexical and morphological correspondences between Yeniseian and Na-Dene must be expanded and tested by further critical analysis, and the relationship of both families to Old World groups such as Sino-Tibetan and Caucasian, as well as to the isolate Burushaski (all earlier proposed as relatives of Yeniseian, and sometimes also of Na-Dene), must become clearer.

Article

Dirk Geeraerts

Lexical semantics is the study of word meaning. Descriptively speaking, the main topics studied within lexical semantics involve either the internal semantic structure of words or the semantic relations that occur within the vocabulary. Within the first set, major phenomena include polysemy (in contrast with vagueness), metonymy, metaphor, and prototypicality. Within the second set, dominant topics include lexical fields, lexical relations, conceptual metaphor and metonymy, and frames. Theoretically speaking, the main approaches that have succeeded each other in the history of lexical semantics are prestructuralist historical semantics, structuralist semantics, and cognitive semantics. These theoretical frameworks differ as to whether they take a system-oriented or a usage-oriented approach to word-meaning research, but, in the historical development of the discipline, they have each contributed significantly to the descriptive and conceptual apparatus of lexical semantics.

Article

While in phonology the Middle Indo-Aryan (MIA) dialects preserved the phonological system of Old Indo-Aryan (OIA) virtually intact, their morphosyntax underwent far-reaching changes that fundamentally altered the synthetic morphology of the earlier Prākrits in the direction of the analytic typology of New Indo-Aryan (NIA). Speaking holistically, the “accusative alignment” of OIA (Vedic Sanskrit) was restructured as an “ergative alignment” in Western IA languages, and it is precisely during the Late MIA period (ca. 5th–12th centuries CE) that we can observe these matters in statu nascendi. There is copious literature on the origin of the ergative construction: passive-to-ergative reanalysis; the ergative hypothesis, i.e., that the passive construction of OIA was already ergative; and a compromise stance that neither approach is fully adequate. In the spirit of this complementary view, more attention has to be paid to the various pathways in which typological changes operated over different kinds of nominal, pronominal, and verbal constituents during the crucial MIA period. (a) We shall start with the restructuring of the nominal case system in terms of the reduction of the number of cases from seven to four. This phonologically motivated process resulted ultimately in the rise of the binary distinction between the “absolutive” and the “oblique” case at the end of the MIA period. (b) The crucial role of animacy in the restructuring of the pronominal system and the rise of the “double-oblique” system in Ardha-Māgadhī and Western Apabhramśa will be explicated. (c) In the verbal system we witness a complete remodeling of the aspectual system as a consequence of the loss of the earlier synthetic forms expressing the perfective (Aorist) and “retrospective” (Perfect) aspect. Early Prākrits (Pāli) preserved their sigmatic Aorists (and the sigmatic Future) until the late MIA centuries, while on the Iranian side the loss of the sigmatic aorist was accelerated in Middle Persian by the “weakening” of s > h > Ø. (d) The development and establishment of “ergative alignment” at the end of the MIA period will be presented as a consequence of the above typological changes: the rise of the “absolutive” versus “oblique” case system; the loss of the finite morphology of the perfective and retrospective aspect; and the recreation of the aspectual contrast of perfectivity by means of quasi-nominal (participial) forms. (e) Concurrently with the development toward analyticity in grammatical aspect, we witness the evolution of lexical aspect (Aktionsart), ushering in the florescence of “serial” verbs in New Indo-Aryan. On the whole, a contingency view of alignment considers the increase in ergativity a by-product of the restoration of the OIA aspectual triad Imperfective–Perfective–Perfect (in morphological terms, Present–Aorist–Perfect). The NIA Perfective and Perfect are aligned ergatively, while their finite OIA ancestors (Aorist and Perfect) were aligned accusatively. Detailed linguistic analysis of Middle Indo-Aryan texts thus offers a unique opportunity for a deeper comprehension of the formative period of the NIA state of affairs.

Article

Holger Diessel

Throughout the 20th century, structuralist and generative linguists argued that the study of the language system (langue, competence) must be separated from the study of language use (parole, performance). This view has been called into question by usage-based linguists, who argue that the structure and organization of a speaker’s linguistic knowledge are the product of language use or performance. On this account, language is seen as a dynamic system of fluid categories and flexible constraints that are constantly restructured and reorganized under the pressure of domain-general cognitive processes that are involved not only in the use of language but also in other cognitive phenomena such as vision and (joint) attention. The general goal of usage-based linguistics is to develop a framework for analyzing the emergence of linguistic structure and meaning. In order to understand the dynamics of the language system, usage-based linguists study how languages evolve, both in history and in language acquisition. One aspect that plays an important role in this approach is frequency of occurrence. As frequency strengthens the representation of linguistic elements in memory, it facilitates the activation and processing of words, categories, and constructions, which in turn can have long-lasting effects on the development and organization of the linguistic system. A second aspect that has been very prominent in the usage-based study of grammar concerns the relationship between lexical and structural knowledge. Since abstract representations of linguistic structure are derived from language users’ experience with concrete linguistic tokens, grammatical patterns are generally associated with particular lexical expressions.