
Article

The Altaic languages (Turkic, Mongolic, Tungusic) are spread across Eurasia, from Central Asia to the Middle East and the Balkans. The genetic affinity between these subgroups has not been definitively established, but their shared features and patterns point to some linguistic connection. The main morphological operations in Altaic languages are suffixation and compounding. Although generally regarded as morphologically regular, with easily identifiable suffixes showing clear form-meaning correspondences, the languages nevertheless show irregularities in many domains of the phonological exponence of morphosyntactic features, such as base modification, cumulative exponence, and syncretism. Nouns are inflected for number, person, and case. Case markers can express structural relations between noun phrases and other constituents, or they can act as adpositions. Only very few of the Altaic languages have adjectival inflection. Verbs are inflected for voice, negation, tense, aspect, modality, and, in most of the languages, subject agreement, with between one and five person-number paradigms. Subject agreement is expressed through first, second, and third persons singular and plural. In the expression of tense, aspect, and modality, Altaic languages employ predominantly suffixing and compound verb formations involving auxiliary verbs. Inflected finite verbs can stand on their own and form propositions; as a result, information structure can be expressed within a polymorphemic word through prosodic means. Affix order is mostly fixed, and mismatches occur between morphotactic constraints and syntactico-semantic requirements. Ellipsis can occur between coordinated words. Derivational morphology is productive and occurs between and within the major word classes of nominals and verbs. Some semantic categories can block others.

Article

The central goal of the Lexical Semantic Framework (LSF) is to characterize the meaning of simple lexemes and affixes and to show how these meanings can be integrated in the creation of complex words. LSF offers a systematic treatment of issues that figure prominently in the study of word formation, such as the polysemy question, the multiple-affix question, the zero-derivation question, and the form and meaning mismatches question. LSF has its source in a confluence of research approaches that adopt a decompositional view of meaning and thus defines simple lexemes and affixes by way of a systematic representation, achieved via a constrained formal language that enforces consistency of annotation. Lexical-semantic representations in LSF consist of two parts: the Semantic/Grammatical Skeleton and the Semantic/Pragmatic Body (henceforth ‘skeleton’ and ‘body,’ respectively). The skeleton comprises features that are of relevance to the syntax. These features act as functions and may take arguments. Functions and arguments of a skeleton are hierarchically arranged. The body encodes all those aspects of meaning that are perceptual, cultural, and encyclopedic. Features in LSF are used in (a) a cross-categorial, (b) an equipollent, and (c) a privative way. This means that they are used to account for the distinction between the major ontological categories, may have a binary (i.e., positive or negative) value, and may or may not form part of the skeleton of a given lexeme. In order to account for the fact that several distinct parts integrate into a single referential unit that projects its arguments to the syntax, LSF makes use of the Principle of Co-indexation. Co-indexation is a device that ties together the arguments contributed by the different parts of a complex word, yielding only those arguments that are syntactically active. LSF has an important impact on the study of the morphology-lexical semantics interface and provides a unitary theory of meaning in word formation.

Article

Paolo Acquaviva

Number is the category through which languages express information about the individuality, numerosity, and part structure of what we speak about. As a linguistic category it has a morphological, a morphosyntactic, and a semantic dimension, which are variously interrelated across language systems. Number marking can apply to a more or less restricted part of the lexicon of a language: it is most likely on personal pronouns and human/animate nouns, and least likely on inanimate nouns. In the core contrast, number allows languages to refer to ‘many’ through the description of ‘one’; the sets referred to consist of tokens of the same type, but also of similar types, or of elements pragmatically associated with one named individual. In other cases, number opposes a reading of ‘one’ to a reading of ‘not one,’ which includes masses; when the ‘one’ reading is morphologically derived from the ‘not one,’ it is called a singulative. It is rare for a language to have no linguistic number at all, since a ‘one–many’ opposition is typically implied at least in pronouns, where the category of person singles out the speaker as ‘one.’ Beyond pronouns, number is typically a property of nouns and/or determiners, although it can appear on other word classes by agreement. Verbs can also express part-structural properties of events, but this ‘verbal number’ is not isomorphic to nominal number marking. Many languages allow a variable proportion of their nominals to appear in a ‘general’ form, which expresses no number information. The main values of number-marked elements are singular and plural; dual and a much rarer trial also exist. Many languages also distinguish forms interpreted as paucals or as greater plurals, respectively, for small and usually cohesive groups and for generically large ones. A broad range of exponence patterns can express these contrasts, depending on the morphological profile of a language, from word inflections to freestanding or clitic forms; certain choices of classifiers also express readings that can be described as ‘plural,’ at least in certain interpretations. Classifiers can co-occur with other plurality markers, but not when these are obligatory as expressions of an inflectional paradigm, although this is debated, partly because the notion of classifier itself subsumes distinct phenomena. Many languages, especially those with classifiers, encode number not as an inflectional category but through word-formation operations that express readings associated with plurality, including large size. Current research on number concerns all of its morphological, morphosyntactic, and semantic dimensions, in particular their interrelations, as part of the study of natural language typology and of the formal analysis of nominal phrases. The grammatical and semantic functions of number and plurality are particularly prominent in formal semantics and in syntactic theory.

Article

Pavel Caha

The term syncretism refers to a situation where two distinct morphosyntactic categories are expressed in the same way. For instance, in English, first and third person pronouns distinguish singular from plural (I vs. we, he/she/it vs. they), but the second person pronoun (you) does not. Such facts are traditionally understood to mean that English grammar distinguishes singular from plural in all persons; in the second person, however, the two distinct meanings are expressed in the same way, and the form you is understood as syncretic between the two different grammatical meanings. It is important to note that while the two meanings are different, they are also related: both instances of you refer to the addressee. They differ in whether they refer just to the addressee or to a group including the addressee and someone else, as depicted here.

a. you (sg) = addressee
b. you (pl) = addressee + others

The idea that syncretism reflects meaning similarity is what makes its study interesting; a lot of research has been dedicated to figuring out why two distinct categories are marked the same. There are a number of approaches to the issue of how relatedness in meaning is to be modeled. An old idea, going back to Sanskrit grammarians, is to arrange the cells of a paradigm in such a way that syncretic cells are always adjacent. Modern approaches call such arrangements geometric spaces (McCreight & Chvany, 1991) or semantic maps (Haspelmath, 2003), with the goal of depicting meaning relatedness as spatial proximity in a conceptual space. A different idea is pursued in approaches based on decomposition into discrete meaning components called features (Jakobson, 1962). Both of these approaches acknowledge the existence of two different but related meanings. However, there are two additional logical possibilities. First, one may adopt the position that the two paradigm cells correspond to a single abstract meaning, and that what appear to be different meanings or functions arise from the interaction between the abstract meaning and the specific context of use (see, for instance, Kayne, 2008, or Manzini & Savoia, 2011). Second, it could be that there are simply two different meanings expressed by two different markers, which accidentally happen to have the same phonology (like the English two and too). The three approaches are mutually contradictory only when applied to the same phenomenon; each of them may be correct for a different set of cases.