21-35 of 35 Results

  • Keywords: lexicalism

Article

Ljuba N. Veselinova

The term suppletion is used to indicate the unpredictable encoding of otherwise regular semantic or grammatical relations. Standard examples in English include the present and past tense of the verb go, cf. go vs. went, or the comparative and superlative forms of adjectives such as good or bad, cf. good vs. better vs. best, or bad vs. worse vs. worst. The complementary distribution of different forms to express a paradigmatic contrast was noticed already in early grammatical traditions. However, the idea that a special form supplies missing forms in a paradigm was first introduced by the neogrammarian Hermann Osthoff in his work of 1899. The concept of suppletion was consolidated in modern linguistics by Leonard Bloomfield in 1926. Since then, the notion has been applied to both affixes and stems. In addition to the application of the concept to linguistic units of varying morpho-syntactic status, such as affixes, or to stems of different lexical classes such as verbs, adjectives, or nouns, the student should also be prepared to encounter frequent discrepancies between uses of the concept in the theoretical literature and its application in more descriptively oriented work. There are models in which the term suppletion is restricted to exceptions to inflectional patterns only; consequently, exceptions to derivational patterns are not accepted as instantiations of the phenomenon. Thus, the comparative degrees of adjectives will be, at best, less prototypical examples of suppletion. Treatments of the phenomenon vary widely, to the point of being complete opposites. A strong tendency exists to regard suppletion as an anomaly, a historical artifact, and generally of little theoretical interest. A countertendency is to view the phenomenon as challenging, but nonetheless very important for adequate theory formation. Finally, there are scholars who view suppletion as a functionally motivated result of language change. For a long time, the database on suppletion, as for many other phenomena, was restricted to Indo-European languages. With the consolidation of broader cross-linguistic research and linguistic typology since the 1990s, the database on suppletion has been substantially extended. Large-scale cross-linguistic studies have shown that the phenomenon is observed in many different languages around the globe. Moreover, it emerges as a systematic cross-linguistic phenomenon in that it can be correlated with well-defined language areas, language families, specific lexemic groups, and specific slots in paradigms. The latter can be shown to follow general markedness universals. Finally, the lexemes that show suppletion tend to have special functions in both lexicon and grammar.

Article

A fundamental difference in theoretical models of morphology and, particularly, of the syntax–morphology interface is that between endoskeletal and exoskeletal approaches. In the former, more traditional, endoskeletal approaches, open-class lexical items like cat or sing are held to be inherently endowed with a series of formal features that determine the properties of the linguistic expressions in which they appear. In the latter, more recent, exoskeletal approaches, it is rather the morphosyntactic configurations, independently produced by the combination of abstract functional elements, that determine those properties. Lexical items, in this latter approach, are part of the structure but, crucially, do not determine it. Conceptually, although a correlation is usually made between endoskeletalism and lexicalism/projectionism, on the one hand, and between exoskeletalism and (neo)constructionism, on the other, things are actually more complicated, and some frameworks exist that seem to challenge those correlations, in particular when the difference between word and morpheme is taken into account. Empirically, the difference between these two approaches to morphology and the morphology-syntax interface comes to light when one examines how each one treats a diversity of word-related phenomena: morphosyntactic category and category shift in derivational processes, inflectional class, nominal properties like mass or count, and verbal properties like agentivity and (a)telicity.

Article

In morphology, the two labels ‘collective’ and ‘abstract’ have been used to refer to properties and categories relevant at different levels. The term collective is normally used in connection with number and plurality, in reference to a plurality presented as a homogeneous group of entities. This can be relevant for inflectional morphology, where such a marker can be shown to occur alongside markers coding number in some languages. Moreover, a plurality intended as a homogeneous group of individuals can also be relevant for word-formation patterns, where it usually expresses concrete or abstract sets of objects relating to the derivational base. The term abstract makes general reference to processes of nominalization from different source classes, especially verbs and adjectives. In the transition to the nominal domain, verbal properties like tense and argument structure are partially lost while new nominal properties are acquired. In particular, a number of semantic shifts are observed which turn the abstract noun into a concrete noun referring to the result, the place, and so on, relating to the derivational base. Although the morphological processes covered by the two labels apparently depict different conceptual domains, there is in fact an area where they systematically overlap, namely with deverbal nouns denoting an abstract or concrete, iterated or habitual instantiation of the action referred to by the verbal base, which can be conceptualized as a collective noun.

Article

Birgit Alber and Sabine Arndt-Lappe

Work on the relationship between morphology and metrical structure has mainly addressed three questions: 1. How does morphological constituent structure map onto prosodic constituent structure, i.e., the structure that is responsible for metrical organization? 2. What are the reflexes of morphological relations between complex words and their bases in metrical structure? 3. How variable or categorical are metrical alternations? Work on question 1 has focused on establishing prosodic constituency with supporting evidence from morphological constituency. Pertinent prosodic constituents are the prosodic (or phonological) word, the metrical foot, the syllable, and the mora (Selkirk, 1980). For example, the phonological behavior of certain affixes has been used to argue that they are word-internal prosodic words, which thus means that prosodic words may be recursive structures (e.g., Aronoff & Sridhar, 1987). Similarly, the shape of truncated words has been used as evidence for the shape of the metrical foot (cf., e.g., Alber & Arndt-Lappe, 2012). Question 2 concerns morphologically conditioned metrical alternations. Stress alternations have received particular attention. Affixation processes differ in whether or not they exhibit stress alternations. Affixes that trigger stress alternations are commonly referred to as 'stress-shifting' affixes; those that do not are referred to as 'stress-preserving' affixes. The fact that morphological categories differ in their stress behavior has figured prominently in theoretical debates about the phonology-morphology interface, in particular between accounts that assume a stratal architecture with interleaving phonology-morphology modules (such as lexical phonology, esp. Kiparsky, 1982, 1985) and those that assume that morphological categories come with their own phonologies (e.g., Inkelas, Orgun, & Zoll, 1997; Inkelas & Zoll, 2007; Orgun, 1996). Question 3 looks at metrical variation and its relation to the processing of morphologically complex words. There is a growing body of recent empirical work showing that some metrical alternations are variable (e.g., Collie, 2008; Dabouis, 2019), meaning that different stress patterns occur within a single morphological category. Theoretical explanations of the phenomenon vary depending on the framework adopted. What unites pertinent research, however, is that the variation is codetermined by measures usually associated with lexical storage: semantic transparency, productivity, and measures of lexical frequency.

Article

Subordinate and synthetic represent well-attested modes of compounding across languages. Although the two classes exhibit some structural and interpretative analogies cross-linguistically, they denote distinct phenomena and entail different parameters of classification. Specifically, subordinate makes reference to the grammatical relation between the compound members, which hold a syntactic dependency (i.e., head-argument) relation; synthetic makes reference to the synthesis or concomitance of two processes (i.e., compounding and derivation). Therefore, while the former term implies the presence of a syntactic relation realized at the word level, the latter has strictly morphological implications and does not directly hinge on the nature of the relation between the compound members. Typical examples of subordinate compounds are [V+N]N formations like pickpocket, a class which is scarcely productive in English but widely attested in most Romance and many other languages (e.g., Italian lavapiatti ‘wash-dishes, dishwasher’). Other instances of subordinate compounds are of the type [V+N]V, differing from the pickpocket type in that the output is a verb, as in Chinese dài-gǎng ‘wait for-post, wait for a job’. The presence of a verb, however, is not compulsory, since possible instances of subordinate compounds can be found among [N+N]N, [A+N]A, and [P+N]N/A compounds, among others: the consistent feature across subordinate compounds is the complementation relation holding between the constituents, whereby one of the two fills an argumental slot of the other. For instance, the N tetto ‘roof’ complements P in the Italian compound senza-tetto ‘without-roof, homeless person’, and the N stazione ‘station’ is the internal argument of the relational noun capo in capo-stazione ‘chief-station, station-master’. Synthetic compounds can involve a subordination relation, as in truck driv-er/-ing, where truck is the internal argument of driver (or driving), so that they are often viewed as the prototypical subordinates. However, subordination does not feature in all synthetic compounds: their members can also hold a modification/attribution relation, as in short-legged and three-dimensional, where the adjective (or numeral) is not an argument but a modifier of the other constituent. The hallmark of a synthetic compound is the presence of a derivational affix having scope over a compound/complex form, though being linearly attached to, and forming an established (or possible) word with, one constituent only. This mismatch between semantics and formal structure has engendered a lively theoretical debate about the nature of these formations. Adopting a binary-branching analysis of morphological complexes, the debate has considered whether the correct analysis for synthetic compounds is the one shown in (1) or (2), which amounts to asking whether derivation applies before or after compounding. (1) a. [[truck] [driv-er]] b. [[short] [leg(g)-ed]] (2) a. [[[truck] [drive]] -er] b. [[[short] [leg(g)]] -ed] Interestingly, the structural and interpretative overlap between subordinate and synthetic compounds with a deverbal head is well represented across language groups: synthetic compounds of the type in (1–2) are very productive in Germanic languages but virtually absent in Romance languages, where this gap is compensated for by the productive class of subordinate [V+N]N compounds, like Italian porta-lettere ‘carry-letters, mailman’, which are the interpretative analogues of Germanic synthetic formations. The difference between the two types lies in constituent order (V+N in Romance versus N+V in Germanic) and in the lack of an (overt) derivational affix in Romance languages.

Article

The Lexical Integrity Hypothesis (LIH) holds that words are syntactic atoms, implying that syntactic processes and principles do not have access to word segments. Interestingly, when this widespread “negative characterization” is turned into its positive version, a standard picture of the Morphology-Syntax borderline is obtained. The LIH is both a fundamental principle of Morphology and a test bench for morphological theories. As a matter of fact, the LIH is problematic for both lexicalist and anti-lexicalist frameworks, which differ radically in accepting or rejecting Morphology as a component of grammar distinct from Syntax. Lexicalist theories predict no exceptions to the LIH, contrary to fact. From anti-lexicalist theories one might expect a large set of counterexamples to this hypothesis, but in fact the attested potential exceptions are restricted, as well as confined to very specific grammatical areas. Most of the phenomena taken to be crucial for evaluating the LIH are briefly addressed in this article: argument structure, scope, prefixes, compounds, pronouns, elliptical segments, bracketing paradoxes, and coordinated structures. It is argued that both lexicalist and anti-lexicalist positions crucially depend on the specific interpretations that their proponents are willing to attribute to the very notion of Syntax: a broad one, which basically encompasses constituent structure, binary branching, scope, and compositionality, and a narrow one, which also covers movement, recursion, deletion, coordination, and other aspects of phrase structure. The objective differences between these conceptions of Syntax are shown to be decisive in the evaluation of the LIH’s predictions.

Article

Grammaticalization is traditionally defined as the gradual process whereby a lexical item becomes a grammatical item (primary grammaticalization), which may be followed by further formal and semantic reduction (secondary grammaticalization). It is a composite change that may affect phonological, morphological, syntactic, and semantic-pragmatic properties of a morpheme, and it is found in all the world’s languages. On the level of morphology, grammaticalization has been shown to have various effects, ranging from the loss of inflection in primary grammaticalization to the development of bound morphemes or new inflectional classes in secondary grammaticalization. Well-known examples include the development of future auxiliaries from motion verbs (e.g., English to be going to), and the development of the Romance inflectional future (e.g., French chanter-ai ‘I will sing’, chanter-as ‘you will sing’, etc., from a verb meaning ‘to have’). Although lexical-grammatical change is overwhelmingly unidirectional, shifts in the reverse direction, called degrammaticalization, have also been shown to occur. Like grammaticalization, degrammaticalization is a composite change, characterized by an increase in phonological and semantic substance as well as in morphosyntactic autonomy. Accordingly, its effects on morphology differ from those of grammaticalization. In primary degrammaticalization new inflections may be acquired (e.g., the Welsh verb nôl ‘to fetch’, from an adposition meaning ‘after’), and erstwhile bound morphemes may become free morphemes (e.g., English ish). As such effects are also found in other types of change, degrammaticalization needs to be clearly delineated from them. For example, a shift from a minor to a major category (e.g., English ifs and buts) or the lexicalization of bound affixes (isms) likewise results in new inflections, but these are instantaneous changes, not gradual ones.

Article

Maria Koptjevskaja-Tamm and Ljuba N. Veselinova

The goal of this chapter is to explicate the common ground and shared pursuits of lexical typology and morphology. Bringing these to the fore is beneficial to the scholarship of both disciplines and will allow their methodologies to be combined in more fruitful ways. In fact, such explication also opens up a whole new domain of study. This overview article focuses on a set of important research questions common to both lexical typology and morphology. Specifically, it considers vocabulary structure in human languages, cross-linguistic research on morphological analysis and word formation, and, finally, inventories of very complex lexical items. After a critical examination of the pertinent literature, some directions for future research are suggested. These include working out methodologies for a more systematic exploration of vocabulary structure and further scrutiny of how languages package and distribute semantic material among linguistic units. Finally, more effort should be devoted to the study of vocabularies where basic concepts are encoded by complex lexical items.

Article

Terttu Nevalainen

In the Early Modern English period (1500–1700), steps were taken toward Standard English, and this was also the time when Shakespeare wrote, but these perspectives are only part of the bigger picture. This chapter looks at Early Modern English as a variable and changing language not unlike English today. Standardization is found particularly in spelling, and new vocabulary was created as a result of the spread of English into various professional and occupational specializations. New research using digital corpora, dictionaries, and databases reveals the gradual nature of these processes. Ongoing developments were no less gradual in pronunciation, with processes such as the Great Vowel Shift, or in grammar, where many changes resulted in new means of expression and greater transparency. Word order was also subject to gradual change, becoming more fixed over time.

Article

Natsuko Tsujimura

The rigor and intensity of investigation of Japanese in modern linguistics have been particularly noteworthy over the past 50 years. Not only has the elucidation of the similarities to and differences from other languages properly placed Japanese on the typological map, but Japanese has also served as a critical testing ground for a wide variety of theoretical approaches. Within the sub-fields of Japanese phonetics and phonology, there has been much focus on the role of the mora. The mora constitutes an important timing unit that has broad implications for analysis of the phonetic and phonological system of Japanese. Relatedly, Japanese possesses a pitch-accent system, which places it in a typologically distinct group, arguably different from stress languages, like English, and tone languages, like Chinese. A further area of intense investigation is loanword phonology, which illuminates the way in which segmental and suprasegmental adaptations are processed and at the same time reveals the fundamental nature of the sound system intrinsic to Japanese. In morphology, a major focus has been on compounds, which are ubiquitous in Japanese. Their detailed description has spurred in-depth discussion of morphophonological (e.g., Rendaku—sequential voicing) and morphosyntactic (e.g., argument structure) phenomena that have crucial consequences for morphological theory. Rendaku is governed by layers of constraints that range from segmental and prosodic phonology to structural properties of compounds, and it serves as a representative example of the intricate interaction of the different grammatical aspects of the language. In syntax, the scrambling phenomenon, which allows relatively flexible permutation of constituents, has been argued to instantiate a movement operation and has been instrumental in arguing for a configurational approach to Japanese. Japanese passives and causatives, which are formed through agglutinative morphology, each exhibit different types: direct vs. indirect passives and lexical vs. syntactic causatives. Their syntactic and semantic properties have posed challenges to, and provided motivation for, a variety of approaches to these constructions, which are well studied in the world’s languages. Taken together, the empirical analyses of Japanese and their theoretical and conceptual implications have made a tremendous contribution to linguistic research.

Article

Corpora are an all-important resource in linguistics, as they constitute the primary source of large-scale examples of language usage. This has become even more evident in recent years, with the increasing availability of texts in digital format leading corpus linguistics more and more toward a “big data” approach. As a consequence, the quantitative methods adopted in the field are becoming more sophisticated and varied. When it comes to morphology, corpora represent a primary source of evidence for describing morpheme usage, and in particular how often a particular morphological pattern is attested in a given language. There is hence a tight relation between corpus linguistics and the study of morphology and the lexicon. This relation, however, can be considered bidirectional. On the one hand, corpora are used as a source of evidence to develop metrics and train computational models of morphology: by means of corpus data it is possible to quantitatively characterize morphological notions such as productivity, and corpus data are fed to computational models to capture morphological phenomena at different levels of description. On the other hand, morphology has also been applied as an organizing principle for corpora. Annotations of linguistic data often adopt morphological notions as guidelines. The resulting information, whether obtained from human annotators or from automatic systems, makes corpora easier to analyze and more convenient to use in a number of applications.
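To make the productivity point concrete, the following minimal Python sketch (not taken from the article) estimates Baayen's "potential productivity" of an affix from corpus counts, defined as the number of hapax legomena containing the affix divided by the affix's total token count; the toy corpus and the suffix -ness are purely illustrative assumptions.

from collections import Counter

def potential_productivity(tokens, suffix):
    """Estimate P = V1 / N for a suffix: hapax legomena over total suffix tokens."""
    counts = Counter(t for t in tokens if t.endswith(suffix))
    n = sum(counts.values())                         # N: tokens carrying the suffix
    v1 = sum(1 for c in counts.values() if c == 1)   # V1: hapax legomena
    return v1 / n if n else 0.0

# Hypothetical toy corpus; in practice the tokens would come from a large corpus.
toy_corpus = ["darkness", "darkness", "kindness", "weirdness", "happiness", "happiness"]
print(potential_productivity(toy_corpus, "ness"))    # 2 hapaxes / 6 tokens ≈ 0.33

In a real study, the same count-based logic would be applied to millions of tokens, and the resulting estimates could then feed the kinds of computational models of morphology the abstract mentions.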

Article

Bracketing paradoxes—constructions whose morphosyntactic and morpho-phonological structures appear to be irreconcilably at odds (e.g., unhappier)—are unanimously taken to point to truths about the derivational system that we have not yet grasped. Consider that the prefix un- must be structurally separate in some way from happier both for its own reasons (its [n] surprisingly does not assimilate in Place to a following consonant, e.g., u[n]popular) and for reasons external to the prefix (the suffix -er must be insensitive to the presence of un-, as the comparative cannot attach to bases of three syllables or longer, e.g., *intelligenter). But un- must simultaneously be present in the derivation before -er is merged, so that unhappier can have the proper semantic reading (‘more unhappy’, and not ‘not happier’). Bracketing paradoxes emerged as a problem for generative accounts of both morphosyntax and morphophonology only in the 1970s. With the rise of restrictions on, and of technology for describing and representing, the behavior of affixes (e.g., the Affix-Ordering Generalization, Lexical Phonology and Morphology, the Prosodic Hierarchy), morphosyntacticians and phonologists were confronted with this type of inconsistent derivation in many unrelated languages.

Article

Knut Tarald Taraldsen

This article presents different types of generative grammar that can be used as models of natural languages, focusing on a small subset of all the systems that have been devised. The central idea behind generative grammar may be rendered in the words of Richard Montague: “I reject the contention that an important theoretical difference exists between formal and natural languages” (“Universal Grammar,” Theoria, 36 [1970], 373–398).

Article

Some of the basic terminology for the major entities in morphological study is introduced, focusing on the word and elements within the word. This is done in a way which is deliberately introductory in nature and omits a great deal of detail about the elements that are introduced.

Article

Arto Anttila

Language is a system that maps meanings to forms, but the mapping is not always one-to-one. Variation means that one meaning corresponds to multiple forms, for example faster ~ more fast. The choice is not uniquely determined by the rules of the language, but is made by the individual at the time of performance (speaking, writing). Such choices abound in human language. They are usually not just a matter of free will, but involve preferences that depend on the context, including the phonological context. Phonological variation is a situation where the choice among expressions is phonologically conditioned, sometimes statistically, sometimes categorically. In this overview, we take a look at three studies of variable vowel harmony in three languages (Finnish, Hungarian, and Tommo So) formulated in three frameworks (Partial Order Optimality Theory, Stochastic Optimality Theory, and Maximum Entropy Grammar). For example, both Finnish and Hungarian have Backness Harmony: vowels must be all [+back] or all [−back] within a single word, with the exception of neutral vowels that are compatible with either. Surprisingly, some stems allow both [+back] and [−back] suffixes in free variation, for example, analyysi-na ~ analyysi-nä ‘analysis-ess’ (Finnish) and arzén-nak ~ arzén-nek ‘arsenic-dat’ (Hungarian). Several questions arise. Is the variation random or in some way systematic? Where is the variation possible? Is it limited to specific lexical items? Is the choice predictable to some extent? Are the observed statistical patterns dictated by universal constraints or learned from the ambient data? The analyses illustrate the usefulness of recent advances in the technological infrastructure of linguistics, in particular the constantly improving computational tools.
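As an illustration of the last of those frameworks, the following minimal Python sketch (not the article's analysis) shows how a Maximum Entropy Grammar assigns probabilities to competing output candidates: each candidate's harmony is the weighted sum of its constraint violations, and its probability is proportional to exp(−harmony). The constraint names, weights, and violation counts are hypothetical, loosely modeled on the Finnish free variation analyysi-na ~ analyysi-nä.

import math

def maxent_probs(candidates, weights):
    """candidates: {name: {constraint: violations}}; returns {name: probability}."""
    harmony = {c: sum(weights[k] * v for k, v in viols.items())
               for c, viols in candidates.items()}
    z = sum(math.exp(-h) for h in harmony.values())           # normalizing constant
    return {c: math.exp(-h) / z for c, h in harmony.items()}

# Hypothetical constraints and weights for the variable Finnish essive suffix.
cands = {"analyysi-na": {"PreferFrontSuffix": 1, "PreferBackSuffix": 0},
         "analyysi-nä": {"PreferFrontSuffix": 0, "PreferBackSuffix": 1}}
weights = {"PreferFrontSuffix": 0.4, "PreferBackSuffix": 0.6}
print(maxent_probs(cands, weights))   # both variants receive nonzero probability

Because both candidates violate some weighted constraint, neither receives probability 1, which is how a weighted-constraint grammar of this kind can model free variation rather than a single categorical winner.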