
Article

Yvan Rose, Laetitia Almeida, and Maria João Freitas

The field of study on the acquisition of phonological productive abilities by first-language learners in the Romance languages has largely focused on three main languages: French, Portuguese, and Spanish, including various dialects of these languages spoken in Europe as well as in the Americas. In this article, we provide a comparative survey of this literature, with an emphasis on representational phonology. We also include in our discussion observations from the development of Catalan and Italian, and mention areas where these languages, as well as Romanian, another major Romance language, would provide welcome additions to our cross-linguistic comparisons. Together, the various studies we summarize reveal intricate patterns of development, in particular concerning the acquisition of consonants across different positions within the syllable, the word, and in relation to stress, documented from both monolingual and bilingual first-language learners. The patterns observed across the different languages and dialects can generally be traced to formal properties of phone distributions, as entailed by mainstream theories of phonological representation, with variations also predicted by more functional aspects of speech, including phonetic factors and usage frequency. These results call for further empirical studies of phonological development, in particular concerning Romanian, in addition to Catalan and Italian, whose phonological and phonetic properties offer compelling grounds for the formulation and testing of models of phonology and phonological development.

Article

Malka Rappaport Hovav

Words are sensitive to syntactic context. Argument realization is the study of the relation between argument-taking words, the syntactic contexts they appear in, and the interpretive properties that constrain the relation between them.

Article

Philip Rubin

Arthur Seymour Abramson (1925–2017) was an American linguist who was prominent in the international experimental phonetics research community. He was best known for his pioneering work, with Leigh Lisker, on voice onset time (VOT), and for his many years spent studying tone and voice quality in languages such as Thai. Born and raised in Jersey City, New Jersey, Abramson served several years in the Army during World War II. Upon his return to civilian life, he attended Columbia University (BA, 1950; PhD, 1960). There he met Franklin Cooper, an adjunct who taught acoustic phonetics while also working for Haskins Laboratories. Abramson started working on a part-time basis at Haskins and remained affiliated with the institution until his death. For his doctoral dissertation (1962), he studied the vowels and tones of the Thai language, which would sit at the heart of his research and travels for the rest of his life. He would expand his investigations to include various languages and dialects, such as Pattani Malay and the Kuai dialect of Suai, a Mon-Khmer language. Abramson began his collaboration with University of Pennsylvania linguist Leigh Lisker at Haskins Laboratories in the 1960s. Using their unique VOT technique, a sensitive measure of the articulatory timing between an occlusion in the vocal tract and the beginning of phonation (characterized by the onset of vibration of the vocal folds), they studied the voicing distinctions of various languages. Their long-standing collaboration continued until Lisker’s death in 2006. Abramson and colleagues often made innovative use of state-of-the-art tools and technologies in their work, including transillumination of the larynx in running speech, X-ray movies of speakers of several languages/dialects, electroglottography, and articulatory speech synthesis.
Abramson’s career was also notable for the academic and scientific service roles that he assumed, including membership on the council of the International Phonetic Association (IPA), and as a coordinator of the effort to revise the International Phonetic Alphabet at the IPA’s 1989 Kiel Convention. He was also editor of the journal Language and Speech, and took on leadership roles at the Linguistic Society of America and the Acoustical Society of America. He was the founding Chair of the Linguistics Department at the University of Connecticut, which became a hotbed for research in experimental phonetics in the 1970s and 1980s because of its many affiliations with Haskins Laboratories. He also served for many years as a board member at Haskins, and Secretary of both the Board and the Haskins Corporation, where he was a friend and mentor to many.

Article

Marianne Pouplier

One of the most fundamental problems in research on spoken language is to understand how the categorical, systemic knowledge that speakers have in the form of a phonological grammar maps onto the continuous, high-dimensional physical speech act that transmits the linguistic message. The invariant units of phonological analysis have no invariant analogue in the signal—any given phoneme can manifest itself in many possible variants, depending on context, speech rate, utterance position and the like, and the acoustic cues for a given phoneme are spread out over time across multiple linguistic units. Speakers and listeners are highly knowledgeable about the lawfully structured variation in the signal and they skillfully exploit articulatory and acoustic trading relations when speaking and perceiving. For the scientific description of spoken language understanding this association between abstract, discrete categories and continuous speech dynamics remains a formidable challenge. Articulatory Phonology and the associated Task Dynamic model present one particular proposal on how to step up to this challenge using the mathematics of dynamical systems with the central insight being that spoken language is fundamentally based on the production and perception of linguistically defined patterns of motion. In Articulatory Phonology, primitive units of phonological representation are called gestures. Gestures are defined based on linear second order differential equations, giving them inherent spatial and temporal specifications. Gestures control the vocal tract at a macroscopic level, harnessing the many degrees of freedom in the vocal tract into low-dimensional control units. Phonology, in this model, thus directly governs the spatial and temporal orchestration of vocal tract actions.
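
The dynamical definition of gestures can be made concrete. In Task Dynamics, each gesture is standardly modeled as a damped mass-spring system; a common textbook form of the equation (the symbols below are the conventional ones, not drawn from this summary) is:

```latex
m\ddot{x} + b\dot{x} + k(x - x_0) = 0
```

Here x is the current value of a vocal tract variable (e.g., lip aperture), x_0 is the gesture's target, k is the stiffness (which determines the gesture's intrinsic time course), and b is the damping coefficient (typically set to critical damping, so the articulator approaches its target without oscillating). This is the sense in which a gesture has inherent spatial (x_0) and temporal (k, b) specifications.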

Article

Franz Rainer

Blocking can be defined as the non-occurrence of some linguistic form, whose existence could be expected on general grounds, due to the existence of a rival form. *Oxes, for example, is blocked by oxen, *stealer by thief. Although blocking is closely associated with morphology, in reality the competing “forms” can be not only morphemes or words but also syntactic units. In German, for example, the compound Rotwein ‘red wine’ blocks the phrasal unit *roter Wein (in the relevant sense), just as the phrasal unit rote Rübe ‘beetroot; lit. red beet’ blocks the compound *Rotrübe. In these examples, one crucial factor determining blocking is synonymy; speakers apparently have a deep-rooted presumption against synonyms. Whether homonymy can also lead to a similar avoidance strategy is still controversial. But even if homonymy blocking exists, it certainly is much less systematic than synonymy blocking. In all the examples mentioned above, it is a word stored in the mental lexicon that blocks a rival formation. However, besides such cases of lexical blocking, one can observe blocking among productive patterns. Dutch has three suffixes for deriving agent nouns from verbal bases, -er, -der, and -aar. Of these three suffixes, the first one is the default choice, while -der and -aar are chosen in very specific phonological environments: as Geert Booij describes in The Morphology of Dutch (2002), “the suffix -aar occurs after stems ending in a coronal sonorant consonant preceded by schwa, and -der occurs after stems ending in /r/” (p. 122). Contrary to lexical blocking, the effect of this kind of pattern blocking does not depend on words stored in the mental lexicon and their token frequency but on abstract features (in the case at hand, phonological features). Blocking was first recognized by the Indian grammarian Pāṇini in the 5th or 4th century BC, when he stated that of two competing rules, the more restricted one had precedence.
In the 1960s, this insight was revived by generative grammarians under the name “Elsewhere Principle,” which is still used in several grammatical theories (Distributed Morphology and Paradigm Function Morphology, among others). Alternatively, other theories, which go back to the German linguist Hermann Paul, have tackled the phenomenon on the basis of the mental lexicon. The great advantage of this latter approach is that it can account, in a natural way, for the crucial role played by frequency. Frequency is also crucial to statistical pre-emption, the most promising theory of how blocking can be learned.
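
The Dutch pattern blocking described above lends itself to a small illustration. The sketch below shows the Elsewhere logic, not an implementation from the literature; the phoneme spellings, with "ə" standing in for schwa, are toy assumptions. The two restricted environments are tried before the default:

```python
# Toy sketch of Elsewhere-style suffix selection for Dutch agent nouns.
# Phoneme spellings are simplified assumptions; "ə" stands for schwa.

CORONAL_SONORANTS = {"n", "l", "r"}

def dutch_agent_suffix(stem_phonemes):
    """Return the agent-noun suffix, trying the most specific
    phonological environment first and falling back to default -er."""
    last = stem_phonemes[-1]
    prev = stem_phonemes[-2] if len(stem_phonemes) > 1 else None
    if last in CORONAL_SONORANTS and prev == "ə":
        return "-aar"   # most restricted environment (Booij 2002)
    if last == "r":
        return "-der"   # restricted environment
    return "-er"        # elsewhere: the default
```

Because the -aar condition is checked before the -der condition, a stem ending in schwa plus /r/ receives -aar, mirroring the precedence of the more restricted rule over the less restricted one.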

Article

Bracketing paradoxes—constructions whose morphosyntactic and morpho-phonological structures appear to be irreconcilably at odds (e.g., unhappier)—are unanimously taken to point to truths about the derivational system that we have not yet grasped. Consider that the prefix un- must be structurally separate in some way from happier both for its own reasons (its [n] surprisingly does not assimilate in Place to a following consonant (e.g., u[n]popular)), and for reasons external to the prefix (the suffix -er must be insensitive to the presence of un-, as the comparative cannot attach to bases of three syllables or longer (e.g., *intelligenter)). But un- must simultaneously be present in the derivation before -er is merged, so that unhappier can have the proper semantic reading (‘more unhappy’, and not ‘not happier’). Bracketing paradoxes emerged as a problem for generative accounts of both morphosyntax and morphophonology only in the 1970s. With the rise of restrictions on, and technology used to describe and represent, the behavior of affixes (e.g., the Affix-Ordering Generalization, Lexical Phonology and Morphology, the Prosodic Hierarchy), morphosyntacticians and phonologists were confronted with this type of inconsistent derivation in many unrelated languages.

Article

Case  

Andrej L. Malchukov

Morphological case is conventionally defined as a system of marking a dependent nominal for the type of relationship it bears to its head. While most linguists would agree with this definition, in practice it is often a matter of controversy whether a certain marker X counts as case in language L, or how many case values language L features. First, the distinction between morphological cases and case particles/adpositions is fuzzy in a cross-linguistic perspective. Second, the distinctions between cases can be obscured by patterns of case syncretism, leading to different analyses of the underlying system. On the functional side, it is important to distinguish between syntactic (structural), semantic, and “pragmatic” cases, yet these distinctions are not clear-cut either, as syntactic cases historically arise from the latter two sources. Moreover, case paradigms of individual languages usually show a conflation between syntactic, semantic, and pragmatic cases (see the phenomenon of “focal ergativity,” where ergative case is used when the A argument is in focus). The composition of case paradigms can be shown to follow a certain typological pattern, which is captured by the case hierarchy, as proposed by Greenberg and Blake, among others. The case hierarchy constrains the way case systems evolve (or are reduced) across languages and derives from relative markedness and, ultimately, from the frequencies of individual cases. The (one-dimensional) case hierarchy is, however, incapable of capturing all recurrent polysemies of individual case markers; rather, such polysemies can be represented through a more complex two-dimensional hierarchy (semantic map), which can also be given a diachronic interpretation.

Article

Jessica Coon and Clint Parker

The phenomenon of case has been studied widely at both the descriptive and theoretical levels. Typological work on morphological case systems has provided a picture of the variability of case cross-linguistically. In particular, languages may differ with respect to whether or not arguments are marked with overt morphological case, the inventory of cases with which they may be marked, and the alignment of case marking (e.g., nominative-accusative vs. ergative-absolutive). In the theoretical realm, not only has morphological case been argued to play a role in multiple syntactic phenomena, but current generative work also debates the role of abstract case (i.e., Case) in the grammar: abstract case features have been proposed to underlie morphological case, and to license nominals in the derivation. The phenomenon of case has been argued to play a role in at least three areas of the syntax reviewed here: (a) agreement, (b) A-movement, and (c) A’-movement. Morphological case has been shown to determine a nominal argument’s eligibility to participate in verbal agreement, and recent work has argued that languages vary as to whether movement to subject position is case-sensitive. As for case-sensitive A’-movement, recent literature on ergative extraction restrictions debates whether this phenomenon should be seen as another instance of “case discrimination” or whether the pattern arises from other properties of ergative languages. Finally, other works discussed here have examined agreement and A’-extraction patterns in languages with no visible case morphology. The presence of patterns and typological gaps—both in languages with overt morphological case and in those without it—lends support to the relevance of abstract case in the syntax.

Article

Yingying Wang and Haihua Pan

Among Chinese reflexives, simple reflexive ziji ‘self’ is best known not only for its licensing of long-distance binding that violates Binding Condition A in the canonical Binding Theory, but also for its special properties such as the asymmetry of the blocking effect. Different researchers have made great efforts to explain such phenomena from a syntactic or a semantic-pragmatic perspective, though up to now there is still no consensus on what the mechanism really is. Besides being used as an anaphor, ziji can also be used as a generic pronoun and an intensifier. Moreover, Chinese has other simple reflexives such as zishen ‘self-body’ and benren ‘person proper’, and complex ones like ta-ziji ‘himself’ and ziji-benshen ‘self-self’. These reflexives again indicate the complexity of the anaphoric system of Chinese, which calls for further investigation so that we can have a better understanding of the diversity of the binding patterns in natural languages.

Article

Jisheng Zhang

Chinese is generally considered a monosyllabic language in that one Chinese character corresponds to one syllable and vice versa, and most characters can be used as free morphemes, although there is a tendency for words to be disyllabic. On the one hand, the syllable structure of Chinese is simple, as far as permissible sequences of segments are concerned. On the other hand, complexities arise when the status of the prenuclear glide is concerned and with respect to the phonotactic constraints between the segments. The syllabic affiliation of the prenuclear glide in the maximal CGVX Chinese syllable structure has long been a controversial issue. Traditional Chinese phonology divides the syllable into shengmu (C) and yunmu, the latter consisting of medial (G), nucleus (V), and coda (X), which is either a high vowel (i/u) or a nasal (n/ŋ). This is known as the sheng-yun model, which translates to initial-final (IF for short) in English. The traditional Chinese IF syllable model differs from the onset-rhyme (OR) syllable structure model in several aspects. In the former, the initial consists of only one consonant, excluding the glide, and the final—that is, everything after the initial consonant—is not the poetic rhyming unit, which excludes the prenuclear glide; whereas in the latter, the onset includes a glide, and the rhyme—that is, everything after the onset—is the poetic rhyming unit. The traditional Chinese IF syllable model is problematic in itself. First, the final is ternary branching, which is not compatible with the binary principle in contemporary linguistics. Second, the nucleus+coda, as the poetic rhyming unit, is not structured as a constituent. Accordingly, the question arises of whether Chinese syllables can be analyzed in the OR model. Many attempts have been made to analyze the Chinese prenuclear glide in the light of current phonological theories, particularly in the OR model, based on phonetic and phonological data on Chinese.
Some such studies have proposed that the prenuclear glide occupies the second position in the onset. Others have proposed that the glide is part of the nucleus. Yet others regard the glide as a secondary articulation of the onset consonant, while still others think of the glide as an independent branch directly linking to the syllable node. Also, some have proposed an IF model with initial for shengmu and final for yunmu, which branches binarily into G(lide) and R(hyme), the latter consisting of N(ucleus) and C(oda). What is more, some have put forward a universal X-bar model of the syllable to replace the OR model, based on syntactic X-bar structure. So far, no authoritative finding has conclusively decided the question of Chinese syllable structure. Moreover, the syllable is the cross-linguistic domain for phonotactics. The number of syllables in Chinese is much smaller than that in many other languages, mainly because of the complicated phonotactics of the language, which strictly govern the segmental relations within CGVX. In the X-bar syllable structure, the Chinese phonotactic constraints which configure segmental relations in the syllable domain mirror the theta rules which capture the configurational relations between specifier and head, and between head and complement, in syntax. On the whole, analysis of the complexities of the Chinese syllable will shed light on the cross-linguistic representation of syllable structure, making a significant contribution to phonological typology in general.
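
The difference between the IF and OR parses can be sketched programmatically. The toy code below is an illustration under simplifying assumptions (pinyin-like spellings, a single initial consonant, a reduced segment inventory), not drawn from the article; the two models disagree only on which side of the boundary the prenuclear glide falls:

```python
# Toy CGVX segmentation; assumes at most one initial consonant and
# pinyin-like spellings (an assumption, not an analysis of Chinese).
VOWELS = set("aeiou")
GLIDES = {"i", "u"}  # toy prenuclear medials

def split_cgvx(s):
    """Segment a toy syllable into (C, G, V, X); "" when absent."""
    i, c = 0, ""
    if i < len(s) and s[i] not in VOWELS:
        c, i = s[i], i + 1                      # initial consonant
    g = ""
    if i + 1 < len(s) and s[i] in GLIDES and s[i + 1] in VOWELS:
        g, i = s[i], i + 1                      # prenuclear glide
    v, i = s[i], i + 1                          # nucleus
    return c, g, v, s[i:]                       # coda is the remainder

def initial_final(s):
    c, g, v, x = split_cgvx(s)
    return c, g + v + x      # IF model: the glide belongs to the final

def onset_rhyme(s):
    c, g, v, x = split_cgvx(s)
    return c + g, v + x      # OR model: the glide belongs to the onset
```

For a syllable like guan, the IF parse is (g, uan) while the OR parse is (gu, an); the rhyme/final of the OR parse, but not of the IF parse, coincides with the poetic rhyming unit.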

Article

Clauses can fulfill various functions in discourse; in most cases, the form of the clause is indicative of its discourse function. The discourse functions (such as making statements or asking questions) are referred to as speech acts, while the grammatical counterparts are referred to as clause types (such as declarative or interrogative). Declarative clauses are canonical (that is, they are syntactically more basic than non-canonical ones): they are by default used to express statements, and they represent the most unmarked word order configuration(s) in a language. Other clause types, such as interrogatives, can be distinguished by various means, including changes in the intonation pattern, different (non-canonical) word orders, the use of morphosyntactic markers (such as interrogative words), as well as combinations of these, as can be observed across Germanic. The explicit marking of clause types is referred to as clause typing, and it affects both the syntactic component of the grammar and its interfaces. Apart from main clauses, which can correspond to complete utterances, there are also embedded clauses, which are contained within another clause, referred to as the matrix clause: matrix clauses can be either main clauses or embedded clauses. Embedded clauses may be argument clauses, in which case they are selected by a matrix element (such as a verb), but they can also be adjunct clauses, which modify some element in the matrix clause (or the entire matrix clause). Embedded clauses fall into various clause types. Some of these can also be main clauses, such as declarative clauses or interrogative clauses. Other embedded clause types do not occur as main clauses, as is the case for relative clauses or comparative clauses. Clause typing in embedded clauses has two major aspects: embedded clauses are distinguished from matrix clauses and from other embedded clause types. 
Main clauses can be typed in various—syntactic and non-syntactic—ways, but Germanic languages type embedded clauses by morphosyntactic means; intonation plays little, if any, role. These morphosyntactic markers fall into various categories according to what roles they fulfill in the clause. Germanic languages show considerable variation in morphosyntactic markers, depending on the clause type and the variety, and in many cases, such markers can also co-occur, resulting in complex left peripheries.

Article

Clitics can be defined as prosodically defective function words. They can belong to a number of syntactic categories, such as articles, pronouns, prepositions, complementizers, negative adverbs, or auxiliaries. They do not generally belong to open classes, like verbs, nouns, or adjectives. Their prosodically defective character is most often manifested by the absence of stress, which in turn correlates with vowel reduction in those languages that have it independently; sometimes the clitic can be just a consonant or a consonant cluster, with no vowel. This same prosodically defective character forces them to attach either to the word that follows them (proclisis) or to the word that precedes them (enclisis); in some cases they even appear inside a word (mesoclisis or endoclisis). The word to which a clitic attaches is called the host. In some languages (like some dialects of Italian or Catalan) enclitics can surface as stressed, but the presence of stress can be argued to be the result of assignment of stress to the host-clitic complex, not to the clitic itself. One consequence of clitics being prosodically defective is that they cannot be the sole element of an utterance, for instance as an answer to some question; they need to always appear with a host. A useful distinction is that between simple clitics and special clitics. Simple clitics often have a nonclitic variant and appear in the expected syntactic position for nonclitics of their syntactic category. Much more attention has been paid in the literature to special clitics. Special clitics appear in a designated position within the clause or within the noun phrase (or determiner phrase). In several languages certain clitics must appear in second position, within the clause, as in most South Slavic languages, or within the noun phrase, as in Kwakw'ala. The pronominal clitics of Romance languages or Greek must have the verb as a host and appear in a position different from the full noun phrase. 
A much debated question is whether the position of special clitics is the result of syntactic movement, or whether other factors, morphological or phonological, intervene as well or are the sole motivation for their position. Clitics can also cluster, with some languages allowing only sequences of two clitics, and other languages allowing longer sequences. Here one relevant question is what determines the order of the clitics, with the main avenues of analysis being approaches based on syntactic movement, approaches based on the types of morphosyntactic features each clitic has, and approaches based on templates. An additional issue concerning clitic clusters is the incompatibility between specific clitics when combined and the changes that this incompatibility can provoke in the form of one or more of the clitics. Combinations of identical or nearly identical clitics are often disallowed, and the constraint known as the Person-Case Constraint (PCC) disallows combinations of clitics with a first or second person accusative clitic (a direct object, DO, clitic) and a third person (and sometimes also first or second person) dative clitic (an indirect object, IO, clitic). In all these cases either one of the clitics surfaces with the form of another clitic or one of the clitics does not surface; sometimes there is no possible output. Here again both syntactic and morphological approaches have been proposed.
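
The Person-Case Constraint lends itself to a compact statement. The sketch below encodes only the strong version of the PCC described above; the weak variants, which also restrict the person of the dative clitic, are deliberately left out of this simplification:

```python
# Toy sketch of the strong Person-Case Constraint (PCC); person values
# are 1, 2, or 3. This is a simplification for illustration only.

def pcc_allows(io_person: int, do_person: int) -> bool:
    """Strong PCC: in an IO + DO clitic cluster, the direct-object
    (accusative) clitic must be third person; the indirect-object
    (dative) clitic is unrestricted in this version."""
    return do_person == 3
```

A cluster pairing a dative clitic with a first- or second-person accusative clitic is thus rejected, which is the configuration that, as noted above, forces repairs such as substituting another clitic form or dropping a clitic altogether.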

Article

Cognitive semantics (CS) is an approach to the study of linguistic meaning. It is based on the assumption that the human linguistic capacity is part of our cognitive abilities, and that language in general and meaning in particular can therefore be better understood by taking into account the cognitive mechanisms that control the conceptual and perceptual processing of extra-linguistic reality. Issues central to CS are (a) the notion of prototype and its role in the description of language, (b) the nature of linguistic meaning, and (c) the functioning of different types of semantic relations. The question concerning the nature of meaning is an issue that is particularly controversial between CS on the one hand and structuralist and generative approaches on the other hand: is linguistic meaning conceptual, that is, part of our encyclopedic knowledge (as is claimed by CS), or is it autonomous, that is, based on abstract and language-specific features? According to CS, the most important types of semantic relations are metaphor, metonymy, and different kinds of taxonomic relations, which, in turn, can be further broken down into more basic associative relations such as similarity, contiguity, and contrast. These play a central role not only in polysemy and word formation, that is, in the lexicon, but also in the grammar.

Article

Dany Amiot and Edwige Dugas

Word-formation encompasses a wide range of processes, among which we find derivation and compounding, two processes yielding productive patterns which enable the speaker to understand and to coin new lexemes. This article draws a distinction between two types of constituents (suffixes, combining forms, splinters, affixoids, etc.) on the one hand and word-formation processes (derivation, compounding, blending, etc.) on the other hand, but also shows that a given constituent can appear in different word-formation processes. First, it describes prototypical derivation and compounding in terms of word-formation processes and of their constituents: Prototypical derivation involves a base lexeme, that is, a free lexical element belonging to a major part-of-speech category (noun, verb, or adjective) and, very often, an affix (e.g., Fr. laverV ‘to wash’ > lavableA ‘washable’), while prototypical compounding involves two lexemes (e.g., Eng. rainN + fallV > rainfallN). The description of these prototypical phenomena provides a starting point for the description of other types of constituents and word-formation processes. There are indeed at least two phenomena which do not meet this description, namely combining forms (henceforth CFs) and affixoids, which therefore pose an interesting challenge to linguistic description, be it synchronic or diachronic. The distinction between combining forms and affixoids is not easy to establish, and the definitions are often confusing, but productivity is a good criterion for distinguishing them from each other, even if it does not answer all the questions raised by bound forms. In the literature, the notions of CF and affixoid are not unanimously agreed upon, especially that of affixoid. Yet this article stresses that they enable us to highlight, and even conceptualize, the gradual nature of linguistic phenomena, whether from a synchronic or a diachronic point of view.

Article

Jane Chandlee and Jeffrey Heinz

Computational phonology studies the nature of the computations necessary and sufficient for characterizing phonological knowledge. As a field it is informed by the theories of computation and phonology. The computational nature of phonological knowledge is important because at a fundamental level it is about the psychological nature of memory as it pertains to phonological knowledge. Different types of phonological knowledge can be characterized as computational problems, and the solutions to these problems reveal their computational nature. In contrast to syntactic knowledge, there is clear evidence that phonological knowledge is computationally bounded to the so-called regular classes of sets and relations. These classes have multiple mathematical characterizations in terms of logic, automata, and algebra with significant implications for the nature of memory. In fact, there is evidence that phonological knowledge is bounded by particular subregular classes, with more restrictive logical, automata-theoretic, and algebraic characterizations, and thus by weaker models of memory.
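
A concrete way to see the claim about regular (indeed subregular) computation: many phonological maps can be computed left to right with a fixed, finite memory. The sketch below is an illustrative toy, not drawn from the article; it implements German-style word-final obstruent devoicing with a one-symbol buffer, the hallmark of a subsequential transduction. The segment inventory is an assumption:

```python
# Toy finite-state computation of a phonological map: word-final
# obstruent devoicing. The segment pairs below are an assumed inventory.

DEVOICE = {"b": "p", "d": "t", "g": "k", "z": "s", "v": "f"}

def final_devoicing_fst(word):
    """Transduce left to right with one symbol of memory: a voiced
    obstruent is buffered until the next symbol (or the end of the
    word) reveals whether it is word-final."""
    out = []
    pending = None  # the transducer's only memory: one buffered segment
    for ch in word:
        if pending is not None:
            out.append(pending)  # not word-final after all: emit as-is
            pending = None
        if ch in DEVOICE:
            pending = ch         # wait to see what follows
        else:
            out.append(ch)
    if pending is not None:
        out.append(DEVOICE[pending])  # word-final: emit devoiced
    return "".join(out)
```

The function never looks arbitrarily far ahead and never remembers more than one segment, which is precisely the bounded-memory property that places such maps within the regular (and here subregular) relations.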

Article

Isabel Oltra-Massuet

Conjugation classes have been defined as the set of all forms of a verb that spell out all possible morphosyntactic categories of person, number, tense, aspect, mood, and/or other additional categories that the language expresses in verbs. Theme vowels instantiate conjugation classes as purely morphological markers; that is, they determine the verb’s morphophonological surface shape but not its syntactic or semantic properties. They typically split the vocabulary items of the category verb into groups that spell out morphosyntactic and morphosemantic feature specifications with the same inflectional affixes. The bond between verbs and their conjugational marking is idiosyncratic and cannot be established on semantic, syntactic, or phonological grounds, although there have been serious attempts at finding a systematic correlation. The existence of theme vowels and arbitrary conjugation classes has been taken by lexicalist theories as empirical evidence against syntactic approaches to word formation and is used as one of the main arguments for the autonomy of morphology. Conjugation classes further raise questions about the nature of basic morphological notions such as stems or paradigms, and they serve as good empirical ground for theories of allomorphy and syncretism, or for testing psycholinguistic and neurolinguistic theories of productivity, full decomposition, and storage. Conjugations and their instantiation via theme vowels may also pose a challenge for theories of first language acquisition, since learners must acquire morphological categories devoid of any semantic meaning or syntactic alignment, a challenge that extends to second language acquisition as well. Thus, analyzing their nature, their representation, and their place in grammar is crucial, as the approach to these units can have profound effects on linguistic theory and the architecture of grammar.

Article

Connectionism is an important theoretical framework for the study of human cognition and behavior. Also known as Parallel Distributed Processing (PDP) or Artificial Neural Networks (ANN), connectionism advocates that learning, representation, and processing of information in the mind are parallel, distributed, and interactive in nature. It argues for the emergence of human cognition as the outcome of large networks of interactive processing units operating simultaneously. Inspired by findings from neuroscience and artificial intelligence, connectionism is a powerful computational tool, and it has had a profound impact on many areas of research, including linguistics. Since the beginning of connectionism, many connectionist models have been developed to account for a wide range of important linguistic phenomena observed in monolingual research, such as speech perception, speech production, semantic representation, and early lexical development in children. Recently, the application of connectionism to bilingual research has also gathered momentum. Connectionist models are often precise in the specification of modeling parameters and flexible in the manipulation of relevant variables to address theoretical questions; they can therefore provide significant advantages in testing mechanisms underlying language processes.

Article

Geert Booij

Construction Morphology is a theory of word structure in which the complex words of a language are analyzed as constructions, that is, systematic pairings of form and meaning. These pairings are analyzed within a Tripartite Parallel Architecture conception of grammar. This presupposes a word-based approach to the analysis of morphological structure and a strong dependence on paradigmatic relations between words. The lexicon contains both words and the constructional schemas of which they are instantiations. Words and schemas are organized in a hierarchical network, with intermediate layers of subschemas. These schemas have a motivating function with respect to existing complex words and specify how new complex words can be formed. The consequence of this view of morphology is that there is no sharp boundary between lexicon and grammar. In addition, the use of morphological patterns may also depend on specific syntactic constructions (construction-dependent morphology). This theory of lexical relatedness also provides insight into language change, such as the use of obsolete case markers as markers of specific constructions, the change of words into affixes, and the debonding of word constituents into independent words. Studies of language acquisition and word processing confirm this view of the lexicon and the nature of lexical knowledge. Construction Morphology is also well equipped for dealing with inflection and the relationships between the cells of inflectional paradigms, because it can express how morphological schemas are related paradigmatically.

Article

Martina Werner

In Germanic languages, conversion is seen as a change in category (i.e., syntactic category, word class, part of speech) without (overt) affixation. Conversion is attested in all Germanic languages. Whether conversion is defined as transposition or as derivation with a so-called zero-affix responsible for the word-class change depends on the language-specific part-of-speech system and, as is often argued, on the direction of conversion. Different types of conversion (e.g., from adjective to noun) are attested in Germanic languages, and they differ from one another especially in their semantics. Although minor types are attested, the main conversion types in Germanic languages are verb-to-noun conversion (deverbal nouns), adjective-to-noun conversion (deadjectival nouns), and noun-to-verb conversion (denominal verbs). Because it changes word class, conversion displays many parallels to derivational processes, such as the directionality of category change and the preservation of lexical and grammatical properties of the underlying stem, including argument structure. Some researchers, however, have argued that conversion does not exist as a specific rule and is only a symptom of lexical relisting. Another question is whether two such words are related by a conversion process that is still productive or are lexically listed relics of a now unproductive process. Furthermore, the direction of conversion in present-day Germanic, that is, the identification of the word class of the input before conversion, is sometimes unclear. Generally, deverbal and deadjectival nominal conversion in Germanic languages is semantically more transparent than denominal and deadjectival verbal conversion: despite some highly frequent but lexicalized counterexamples, the semantic impact of conversion is only sometimes predictable, somewhat more so in the nominal domain than in the verbal domain.
The semantics of verb formation by conversion (e.g., whether conversion yields causative readings) is hardly predictable. Overall, conversion in Germanic is considered a process with multiple links to other morphological phenomena, such as derivation, back-formation, and inflectional categories such as grammatical gender. Because it lacks formal markers, conversion is considered non-iconic. The various kinds of conversion rest largely on language-specific mechanisms, but what all Germanic languages share, at a minimum, is the ability to form nominal conversions, regardless of their typological characteristics as isolating-analytic versus inflectional-fusional languages. This is surprising given the crosslinguistic prevalence of verbal conversion in the languages of the world.

Article

William F. Hanks

Deictic expressions, like English ‘this’, ‘that’, ‘here’, and ‘there’, occur in all known human languages. They are typically used to individuate objects in the immediate context in which they are uttered, by pointing at them so as to direct attention to them. The object, or demonstratum, is singled out as a focus, and a successful act of deictic reference is one that results in the Speaker (Spr) and Addressee (Adr) attending to the same referential object. Thus:

(1) A: Oh, there’s that guy again. (pointing)
    B: Oh yeah, now I see him. (fixing gaze on the guy)

(2) A: I’ll have that one over there. (pointing to a dessert on a tray)
    B: This? (touching pastry with tongs)
    A: Yeah, that looks great.
    B: Here ya’ go. (handing pastry to customer)

In an exchange like (1), A’s utterance spotlights the individual guy, directing B’s attention to him, and B’s response (both verbal and ocular) displays that he has recognized him. In (2), A’s utterance individuates one pastry among several, B’s response makes sure he is attending to the right one, A reconfirms, and B completes the exchange by presenting the pastry to him. Comparing the two examples, it is clear that the underscored deictics can pick out or present individuals without describing them. In a similar way, ‘I’, ‘you’, ‘he/she’, ‘we’, ‘now’, ‘(back) then’, and their analogues are all used to pick out individuals (persons, objects, or time frames), apparently without describing them. As a corollary of this semantic paucity, individual deictics vary extremely widely in the kinds of object they may properly denote: ‘here’ can denote anything from the tip of your nose to planet Earth, and ‘this’ can denote anything from a pastry to an upcoming day (this Tuesday). Under the same circumstances, ‘this’ and ‘that’ can refer appropriately to the same object, depending upon who is speaking, as in (2). How can forms that are so abstract and variable across contexts be so specific and rigid in a given context? On what parameters do deictics and deictic systems in human languages vary, and how do they relate to grammar and semantics more generally?