
Incorporation and Pseudo-Incorporation in Syntax  

Diane Massam

Noun incorporation (NI) is a grammatical construction in which a nominal, usually bearing the semantic role of an object, is incorporated into a verb to form a complex verb or predicate. Traditionally, incorporation was considered a word formation process, similar to compounding or cliticization. The fact that a syntactic entity (an object) enters into the lexical process of word formation was theoretically problematic, leading to many debates about the true nature of NI as a lexical or syntactic process. The analytic complexity of NI is compounded by the clear connections between NI and other processes such as possessor raising, applicatives, and classification systems, and by its relation to case, agreement, and transitivity. In some cases, it was noted that no morpho-phonological incorporation is discernible beyond perhaps adjacency and a reduced left periphery for the noun. Such cases were termed pseudo noun incorporation, as they exhibit many properties of NI minus any actual morpho-phonological incorporation. On the semantic side, it was noted that NI often correlates with a particular interpretation in which the noun is less referential and the predicate is more general. This led semanticists to group together all phenomena with similar semantics, whether or not they involve morpho-phonological incorporation. The status of cases of morpho-phonological NI that do not exhibit this characteristic semantics, i.e., where the incorporated nominal can be referential and the action is not general, remains a matter of debate. The interplay of phonology, morphology, syntax, and semantics found in NI, as well as its lexical overtones, has resulted in a wide range of analyses at all levels of the grammar. What all NI constructions share is that, according to various diagnostics, a thematic element, usually correlating with an internal argument, functions to a lesser extent as an independent argument and instead acts as part of a predicate. 
In addition to cases of incorporation between verbs and internal arguments, there are also some cases of incorporation of subjects and adverbs, which remain less well understood.


Kiowa-Tanoan Languages  

Daniel Harbour

The Kiowa-Tanoan family is a small group of Native American languages of the Plains and pueblo Southwest. It comprises Kiowa, of the eponymous Plains tribe, and the pueblo-based Tanoan languages, Jemez (Towa), Tewa, and Northern and Southern Tiwa. These free-word-order languages display a number of typologically unusual characteristics that have rightly attracted attention within a range of subdisciplines and theories. One word of Taos (my construction based on Kontak and Kunkel’s work) illustrates. In tóm-múlu-wia ‘I gave him/her a drum,’ the verb wia ‘gave’ obligatorily incorporates its object, múlu ‘drum.’ The agreement prefix tóm encodes not only object number, but identities of agent and recipient as first and third singular, respectively, and this all in a single syllable. Moreover, the object number here is not singular, but “inverse”: singular for some nouns, plural for others (tóm-músi-wia only has the plural object reading ‘I gave him/her cats’). This article presents a comparative overview of the three areas just illustrated: from morphosemantics, inverse marking and noun class; from morphosyntax, super-rich fusional agreement; and from syntax, incorporation. The second of these also touches on aspects of morphophonology, the family’s three-tone system and its unusually heavy grammatical burden, and on further syntax, obligatory passives. Together, these provide a wide window on the grammatical wealth of this fascinating family.


Lexical Acquisition and the Structure of the Mental Lexicon  

Eve V. Clark

The words and word-parts children acquire at different stages offer insights into how the mental lexicon might be organized. Children first identify ‘words,’ recurring sequences of sounds, in the speech stream, attach some meaning to them, and, later, analyze such words further into parts, namely stems and affixes. These are the elements they store in memory in order to recognize them on subsequent occasions. They also serve as target models when children try to produce those words themselves. When they coin words, they make use of bare stems, combine certain stems with each other, and sometimes add affixes as well. The options they choose depend on how much they need to add to coin a new word, which familiar elements they can draw on, and how productive that option is in the language. Children’s uses of stems and affixes in coining new words also reveal that they must be relying on one representation in comprehension and a different representation in production. For comprehension, they need to store information about the acoustic properties of a word, taking into account different occasions, different speakers, and different dialects, not to mention second-language speakers. For production, they need to work out which articulatory plan to follow in order to reproduce the target word. And they take time to get their production of a word aligned with the representation they have stored for comprehension. In fact, there is a general asymmetry here, with comprehension being ahead of production for children, and also being far more extensive than production, for both children and adults. Finally, as children add more words to their repertoires, they organize and reorganize their vocabulary into semantic domains. In doing this, they make use of pragmatic directions from adults that help them link related words through a variety of semantic relations.


Lexical Semantic Framework for Morphology  

Marios Andreou

The central goal of the Lexical Semantic Framework (LSF) is to characterize the meaning of simple lexemes and affixes and to show how these meanings can be integrated in the creation of complex words. LSF offers a systematic treatment of issues that figure prominently in the study of word formation, such as the polysemy question, the multiple-affix question, the zero-derivation question, and the form and meaning mismatches question. LSF has its source in a confluence of research approaches that follow a decompositional approach to meaning and, thus, defines simple lexemes and affixes by way of a systematic representation that is achieved via a constrained formal language that enforces consistency of annotation. Lexical-semantic representations in LSF consist of two parts: the Semantic/Grammatical Skeleton and the Semantic/Pragmatic Body (henceforth ‘skeleton’ and ‘body’ respectively). The skeleton comprises features that are of relevance to the syntax. These features act as functions and may take arguments. Functions and arguments of a skeleton are hierarchically arranged. The body encodes all those aspects of meaning that are perceptual, cultural, and encyclopedic. Features in LSF are used in (a) a cross-categorial, (b) an equipollent, and (c) a privative way. This means that they are used to account for the distinction between the major ontological categories, may have a binary (i.e., positive or negative) value, and may or may not form part of the skeleton of a given lexeme. In order to account for the fact that several distinct parts integrate into a single referential unit that projects its arguments to the syntax, LSF makes use of the Principle of Co-indexation. Co-indexation is a device needed in order to tie together the arguments that come with different parts of a complex word to yield only those arguments that are syntactically active. 
LSF has an important impact on the study of the morphology-lexical semantics interface and provides a unitary theory of meaning in word formation.


Mayan Languages  

Nora C. England

Mayan languages are spoken by over 5 million people in Guatemala, Mexico, Belize, and Honduras. There are around 30 different languages today, ranging in size from fairly large (about a million speakers) to very small (fewer than 30 speakers). All Mayan languages are endangered given that at least some children in some communities are not learning the language, and two languages have disappeared since European contact. Mayas developed the most elaborated and most widely attested writing system in the Americas (starting about 300 BC). The sounds of Mayan languages consist of a voiceless stop and affricate series with corresponding glottalized stops (either implosive or ejective) and affricates, glottal stop, voiceless fricatives (including, in some of them, h inherited from Proto-Maya), two to three nasals, three to four approximants, and a five-vowel system with contrasting vowel length (or tense/lax distinctions) in most languages. Several languages have developed contrastive tone. The major word classes in Mayan languages include nouns, verbs, adjectives, positionals, and affect words. The difference between transitive verbs and intransitive verbs is rigidly maintained in most languages. They usually, but not always, use the same aspect markers. Intransitive verbs indicate only their subjects, while transitive verbs indicate both subjects and objects. Some languages have a set of status suffixes that differs for the two classes. Positionals are a root class whose most characteristic word form is a non-verbal predicate. Affect words indicate impressions of sounds, movements, and activities. Nouns have a number of different subclasses defined on the basis of their characteristics when possessed or the structure of the compounds they form. Adjectives comprise a small class of roots (under 50) and many forms derived from verbs and positionals. Predicate types are transitive, intransitive, and non-verbal. 
Non-verbal predicates are based on nouns, adjectives, positionals, numbers, demonstratives, and existential and locative particles. They are distinct from verbs in that they do not take the usual verbal aspect markers. Mayan languages are head marking and verb initial; most have VOA flexible order but some have VAO rigid order. They are morphologically ergative and also have at least some rules that show syntactic ergativity. The most common of these is a constraint on the extraction of subjects of transitive verbs (ergative) for focus and/or interrogation, negation, or relativization. In addition, some languages make a distinction between agentive and non-agentive intransitive verbs. Some also can be shown to use obviation and inverse as important organizing principles. Voice categories include passive, antipassive and agent focus, and an applicative with several different functions.


Meanings of Constructions  

Laura A. Michaelis

Meanings are assembled in various ways in a construction-based grammar, and this array can be represented as a continuum of idiomaticity, a gradient of lexical fixity. Constructional meanings are the meanings to be discovered at every point along the idiomaticity continuum. At the leftmost, or ‘fixed,’ extreme of this continuum are frozen idioms, like the salt of the earth and in the know. The set of frozen idioms includes those with idiosyncratic syntactic properties, like the fixed expression by and large (an exceptional pattern of coordination in which a preposition and adjective are conjoined). Other frozen idioms, like the unexceptionable modified noun red herring, feature syntax found elsewhere. At the rightmost, or ‘open,’ end of this continuum are fully productive patterns, including the rule that licenses the string Kim blinked, known as the Subject-Predicate construction. Between these two poles are (a) lexically fixed idiomatic expressions, verb-headed and otherwise, with regular inflection, such as chew/chews/chewed the fat; (b) flexible expressions with invariant lexical fillers, including phrasal idioms like spill the beans and the Correlative Conditional, such as the more, the merrier; and (c) specialized syntactic patterns without lexical fillers, like the Conjunctive Conditional (e.g., One more remark like that and you’re out of here). Construction Grammar represents this range of expressions in a uniform way: whether phrasal or lexical, all are modeled as feature structures that specify phonological and morphological structure, meaning, use conditions, and relevant syntactic information (including syntactic category and combinatoric potential).


Natural Language Ontology  

Friederike Moltmann

Natural language ontology is a branch of both metaphysics and linguistic semantics. Its aim is to uncover the ontological categories, notions, and structures that are implicit in the use of natural language, that is, the ontology that a speaker accepts when using a language. Natural language ontology is part of “descriptive metaphysics,” to use Strawson’s term, or “naive metaphysics,” to use Fine’s term, that is, the metaphysics of appearances as opposed to foundational metaphysics, whose interest is in what there really is. What sorts of entities natural language involves is closely linked to compositional semantics, namely what the contribution of occurrences of expressions in a sentence is taken to be. Most importantly, entities play a role as semantic values of referential terms, but also as implicit arguments of predicates and as parameters of evaluation. Natural language appears to involve a particularly rich ontology of abstract, minor, derivative, and merely intentional objects, an ontology many philosophers are not willing to accept. At the same time, a serious investigation of the linguistic facts often reveals that natural language does not in fact involve the sort of ontology that philosophers had assumed it does. Natural language ontology is concerned not only with the categories of entities that natural language commits itself to, but also with various metaphysical notions, for example the relation of part-whole, causation, material constitution, notions of existence, plurality and unity, and the mass-count distinction. An important question regarding natural language ontology is what linguistic data it should take into account. Looking at the sorts of data that researchers who practice natural language ontology have in fact taken into account makes clear that it is only presuppositions, not assertions, that reflect the ontology implicit in natural language. 
The ontology of language may be distinctive in that it may in part be driven specifically by language or the use of it in a discourse. Examples are pleonastic entities, discourse referents conceived of as entities of a sort, and an information-based notion of part structure involved in the semantics of plurals and mass nouns. Finally, there is the question of the universality of the ontology of natural language. Arguably, the same sort of reasoning that has been used to argue for the universality of (generative) syntax should apply, in a suitable sense, to the ontology of natural language as well.


Number in Language  

Paolo Acquaviva

Number is the category through which languages express information about the individuality, numerosity, and part structure of what we speak about. As a linguistic category it has a morphological, a morphosyntactic, and a semantic dimension, which are variously interrelated across language systems. Number marking can apply to a more or less restricted part of the lexicon of a language, being most likely on personal pronouns and human/animate nouns, and least on inanimate nouns. In the core contrast, number allows languages to refer to ‘many’ through the description of ‘one’; the sets referred to consist of tokens of the same type, but also of similar types, or of elements pragmatically associated with one named individual. In other cases, number opposes a reading of ‘one’ to a reading as ‘not one,’ which includes masses; when the ‘one’ reading is morphologically derived from the ‘not one,’ it is called a singulative. It is rare for a language to have no linguistic number at all, since a ‘one–many’ opposition is typically implied at least in pronouns, where the category of person discriminates the speaker as ‘one.’ Beyond pronouns, number is typically a property of nouns and/or determiners, although it can appear on other word classes by agreement. Verbs can also express part-structural properties of events, but this ‘verbal number’ is not isomorphic to nominal number marking. Many languages allow a variable proportion of their nominals to appear in a ‘general’ form, which expresses no number information. The main values of number-marked elements are singular and plural; dual and a much rarer trial also exist. Many languages also distinguish forms interpreted as paucals or as greater plurals, respectively, for small and usually cohesive groups and for generically large ones. 
A broad range of exponence patterns can express these contrasts, depending on the morphological profile of a language, from word inflections to freestanding or clitic forms; certain choices of classifiers also express readings that can be described as ‘plural,’ at least in certain interpretations. Classifiers can co-occur with other plurality markers, but not when these are obligatory as expressions of an inflectional paradigm, although this is debated, partly because the notion of classifier itself subsumes distinct phenomena. Many languages, especially those with classifiers, encode number not as an inflectional category, but through word-formation operations that express readings associated with plurality, including large size. Current research on number concerns all its morphological, morphosyntactic, and semantic dimensions, in particular the interrelations of them as part of the study of natural language typology and of the formal analysis of nominal phrases. The grammatical and semantic function of number and plurality are particularly prominent in formal semantics and in syntactic theory.


Scope Marking at the Syntax-Semantics Interface  

Veneeta Dayal and Deepak Alok

Natural language allows questioning into embedded clauses. One strategy for doing so involves structures like the following: [CP-1 whi [TP DP V [CP-2 … ti …]]], where a wh-phrase that thematically belongs to the embedded clause appears in the matrix scope position. A possible answer to such a question must specify values for the fronted wh-phrase. This is the extraction strategy seen in languages like English. An alternative strategy involves a structure in which there is a distinct wh-phrase in the matrix clause. It is manifested in two types of structures. One is a close analog of extraction, but for the extra wh-phrase: [CP-1 whi [TP DP V [CP-2 whj [TP … tj …]]]]. The other simply juxtaposes two questions, rather than syntactically subordinating the second one: [CP-3 [CP-1 whi [TP…]] [CP-2 whj [TP…]]]. In both versions of the second strategy, the wh-phrase in CP-1 is invariant, typically corresponding to the wh-phrase used to question propositional arguments. There is no restriction on the type or number of wh-phrases in CP-2. Possible answers must specify values for all the wh-phrases in CP-2. This strategy is variously known as scope marking, partial wh-movement, or expletive wh-questions. Both strategies can occur in the same language. German, for example, instantiates all three possibilities: extraction, subordinated, as well as sequential scope marking. The scope marking strategy is also manifested in in-situ languages. Scope marking has been subjected to 30 years of research, and much is known at this time about its syntactic and semantic properties. Its pragmatic properties, however, are relatively under-studied. The acquisition of scope marking, in relation to extraction, is another area of ongoing research. One of the reasons why scope marking has intrigued linguists is that it seems to defy central tenets about the nature of wh scope taking. 
For example, it presents an apparent mismatch between the number of wh expressions in the question and the number of expressions whose values are specified in the answer. It poses a challenge for our understanding of how syntactic structure feeds semantic interpretation and how alternative strategies with similar functions relate to each other.


Semantic Change  

Elizabeth Closs Traugott

Traditional approaches to semantic change typically focus on outcomes of meaning change and list types of change such as metaphoric and metonymic extension, broadening and narrowing, and the development of positive and negative meanings. Examples are usually considered out of context, and are lexical members of nominal and adjectival word classes. However, language is a communicative activity that is highly dependent on context, whether that of the ongoing discourse or of social and ideological changes. Much recent work on semantic change has focused, not on results of change, but on pragmatic enabling factors for change in the flow of speech. Attention has been paid to the contributions of cognitive processes, such as analogical thinking, production of cues as to how a message is to be interpreted, and perception or interpretation of meaning, especially in grammaticalization. Mechanisms of change such as metaphorization, metonymization, and subjectification have been among topics of special interest and debate. The work has been enabled by the fine-grained approach to contextual data that electronic corpora allow.