1-20 of 109 Results for: Linguistic Theories

Article

Acquisition of L1 Phonology in the Romance Languages  

Yvan Rose, Laetitia Almeida, and Maria João Freitas

Research on the acquisition of productive phonological abilities by first-language learners of the Romance languages has largely focused on three main languages: French, Portuguese, and Spanish, including various dialects of these languages spoken in Europe as well as in the Americas. In this article, we provide a comparative survey of this literature, with an emphasis on representational phonology. We also include in our discussion observations from the development of Catalan and Italian, and mention areas where these languages, as well as Romanian, another major Romance language, would provide welcome additions to our cross-linguistic comparisons. Together, the various studies we summarize reveal intricate patterns of development, in particular concerning the acquisition of consonants across different positions within the syllable, the word, and in relation to stress, documented from both monolingual and bilingual first-language learners. The patterns observed across the different languages and dialects can generally be traced to formal properties of phone distributions, as entailed by mainstream theories of phonological representation, with variations also predicted by more functional aspects of speech, including phonetic factors and usage frequency. These results call for further empirical studies of phonological development, in particular concerning Romanian, in addition to Catalan and Italian, whose phonological and phonetic properties offer compelling grounds for the formulation and testing of models of phonology and phonological development.

Article

Agreement in the Romance Languages  

Michele Loporcaro

This article examines agreement in the Romance languages in light of current studies and with the toolkit of linguistic typology. I will first introduce the definition of agreement assumed in the article, demonstrating its superiority to the alternatives proposed in the literature, and then move on to consider empirical data from all branches of the Romance language family, illustrating how agreement works in all its components. This will require dealing with, in order, the controllers and targets of agreement, then the morphosyntactic features that are active in the agreement rules, then the conditions that may constrain those rules, and finally the syntactic domains in which agreement takes place. In the first half of this overview, the focus will be mainly on what is common to all Romance languages, while in the second half I will concentrate on the phenomena of agreement that are remarkable, in that they are rare and/or unexpected, from a crosslinguistic perspective. It will become clear from this survey that there is no dearth of such unusual phenomena, and that the Romance language family, especially through its lesser-known nonstandard local vernaculars (which will be treated here with equal dignity to the major literary languages), holds in store considerable richness that must be taken into serious consideration by any language typologist interested in agreement.

Article

Alignment and Word Order in the Romance Languages  

Francesco Rovai

The term “alignment” refers to the formal realization of the argument structure of the clause, that is, the ways in which the core arguments of the predicate are encoded by means of three main morphosyntactic devices: nominal case marking (morphological case, adpositions), verb marking systems (verbal agreement, pronominal affixes, auxiliaries, voice distinctions, etc.), and word order. The relative importance of these mechanisms of argument coding may considerably vary from language to language. In the Romance family, a major role is played by finite verb agreement and, to a lesser extent, auxiliary selection, participial agreement, voice distinctions, and word order, depending on the language/variety. Most typically, both transitive and intransitive subjects share the same formal coding (they control finite verb agreement and precede the verb in the basic word order) and are distinguished from direct objects (which do not control finite verb agreement and follow the verb in the basic word order). This arrangement of the argument structure is traditionally known as “nominative/accusative” alignment and can be easily identified as the main alignment of the Romance languages. Note that, with very few exceptions, nominal case marking is instead “neutral,” since no overt morphological distinction is made between subject and object arguments after the loss of the Latin case system. However, although the Romance languages can legitimately be associated with an accusative alignment, it must be borne in mind that, whatever the property selected, natural languages speak against an all-encompassing, holistic typology. A language “belongs” to an alignment type only insofar as it displays a significantly above-average frequency of clause structures with that kind of argument coding, but this does not exclude the existence of several grammatical domains that partake of different alignments. In the Romance family, minor patterns are attested that are not consistent with an accusative alignment. In part, they depend on robust crosslinguistic tendencies in the distribution of the different alignment types when they coexist in the same language. In part, they reflect phenomena of morphosyntactic realignment that can be traced back to the transition from Latin to Romance, when, alongside the dominant accusative alignment of the classical language, Late Latin developed an active alignment in some domains of the grammar—a development that has its roots in Classical and Early Latin. Today, the Romance languages preserve traces of this intermediate stage, but in large part, the signs of it have been replaced with novel accusative structures. In particular, at the level of the sentence, there emerges an accusative-aligned word order, with the preverbal position realizing the default “subject” position and the postverbal position instantiating the default “object” position.

Article

Argument Realization in Syntax  

Malka Rappaport Hovav

Words are sensitive to syntactic context. Argument realization is the study of the relation between argument-taking words, the syntactic contexts they appear in, and the interpretive properties that constrain the relation between them.

Article

Arthur Abramson  

Philip Rubin

Arthur Seymour Abramson (1925–2017) was an American linguist who was prominent in the international experimental phonetics research community. He was best known for his pioneering work, with Leigh Lisker, on voice onset time (VOT), and for his many years spent studying tone and voice quality in languages such as Thai. Born and raised in Jersey City, New Jersey, Abramson served several years in the Army during World War II. Upon his return to civilian life, he attended Columbia University (BA, 1950; PhD, 1960). There he met Franklin Cooper, an adjunct who taught acoustic phonetics while also working for Haskins Laboratories. Abramson started working on a part-time basis at Haskins and remained affiliated with the institution until his death. For his doctoral dissertation (1962), he studied the vowels and tones of the Thai language, which would sit at the heart of his research and travels for the rest of his life. He would expand his investigations to include various languages and dialects, such as Pattani Malay and the Kuai dialect of Suai, a Mon-Khmer language. Abramson began his collaboration with University of Pennsylvania linguist Leigh Lisker at Haskins Laboratories in the 1960s. Using their unique VOT technique, a sensitive measure of the articulatory timing between an occlusion in the vocal tract and the beginning of phonation (characterized by the onset of vibration of the vocal folds), they studied the voicing distinctions of various languages. Their long-standing collaboration continued until Lisker’s death in 2006. Abramson and colleagues often made innovative use of state-of-the-art tools and technologies in their work, including transillumination of the larynx in running speech, X-ray movies of speakers in several languages/dialects, electroglottography, and articulatory speech synthesis. Abramson’s career was also notable for the academic and scientific service roles that he assumed, including membership on the council of the International Phonetic Association (IPA), and as a coordinator of the effort to revise the International Phonetic Alphabet at the IPA’s 1989 Kiel Convention. He was also editor of the journal Language and Speech, and took on leadership roles at the Linguistic Society of America and the Acoustical Society of America. He was the founding Chair of the Linguistics Department at the University of Connecticut, which became a hotbed for research in experimental phonetics in the 1970s and 1980s because of its many affiliations with Haskins Laboratories. He also served for many years as a board member at Haskins, and Secretary of both the Board and the Haskins Corporation, where he was a friend and mentor to many.

Article

Articulatory Phonology  

Marianne Pouplier

One of the most fundamental problems in research on spoken language is to understand how the categorical, systemic knowledge that speakers have in the form of a phonological grammar maps onto the continuous, high-dimensional physical speech act that transmits the linguistic message. The invariant units of phonological analysis have no invariant analogue in the signal—any given phoneme can manifest itself in many possible variants, depending on context, speech rate, utterance position and the like, and the acoustic cues for a given phoneme are spread out over time across multiple linguistic units. Speakers and listeners are highly knowledgeable about the lawfully structured variation in the signal and they skillfully exploit articulatory and acoustic trading relations when speaking and perceiving. For the scientific description of spoken language understanding this association between abstract, discrete categories and continuous speech dynamics remains a formidable challenge. Articulatory Phonology and the associated Task Dynamic model present one particular proposal on how to step up to this challenge using the mathematics of dynamical systems with the central insight being that spoken language is fundamentally based on the production and perception of linguistically defined patterns of motion. In Articulatory Phonology, primitive units of phonological representation are called gestures. Gestures are defined based on linear second order differential equations, giving them inherent spatial and temporal specifications. Gestures control the vocal tract at a macroscopic level, harnessing the many degrees of freedom in the vocal tract into low-dimensional control units. Phonology, in this model, thus directly governs the spatial and temporal orchestration of vocal tract actions.
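As a rough illustration of the kind of dynamical specification referred to here, a single gesture controlling a tract variable x (e.g., lip aperture) is standardly modeled in the Task Dynamic framework as a damped mass-spring system; the following equation is a sketch of that general form rather than a formula quoted from this article:

m\,\ddot{x} + b\,\dot{x} + k\,(x - x_0) = 0

where x_0 is the gesture's spatial target, k the stiffness that determines the gesture's intrinsic temporal course, and b a damping coefficient; the gesture is active only over a bounded interval, during which it drives the relevant articulators toward the target.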

Article

Binding in Germanic  

Eric Reuland and Martin Everaert

All languages have expressions, typically pronominals and anaphors, that may or must depend for their interpretation on another expression, their antecedent. When such a dependency is subject to structural conditions, it reflects binding. Although there is considerable variation in binding patterns cross-linguistically, this variation in fact falls along a limited set of parameters. The Germanic languages exemplify some of the main factors involved. In Germanic, third-person pronominals generally do not allow binding by a co-argument. However, in Frisian and Afrikaans, they do, being embedded in a richer structure than meets the eye. In Continental West Germanic and Scandinavian, anaphors come in two types: simplex anaphors (SE-anaphors)—deficient for number and gender—and complex anaphors (SELF-anaphors). These typically consist of a pronominal or SE-anaphor combined with an element like Dutch zelf ‘self’ or one of its cognates. In all the Germanic languages SELF-anaphors are bound in their local domain—approximately the domain of their nearest subject—except in a few identifiable positions, where they are interpreted logophorically. That is, they accept a non-local antecedent, provided this element holds the perspective of the sentence. The distribution of SE-anaphors involves three different conditions. First, they can be bound by a co-argument only if the verb belongs to a restricted class, which allows syntactic detransitivization. Second, in general, SE-anaphors allow non-local binding. But the conditions differ among subgroups. In Dutch and German, they can only be bound non-locally when contained in a causative or perception verb complement or a small clause. In Mainland Scandinavian, non-local binding is, in principle, available to all infinitival clauses (subject to some dialectal variation). For instance, in some varieties of Norwegian, referentiality of intervening subjects restricts binding; in other varieties, the restricting factor is not “finiteness” but “being specified for tense.” Third, in Icelandic, long-distance antecedents beyond the infinitival domain are licensed by a subjunctive, together with the requirement that the antecedent holds the perspective. Faroese largely patterns like Icelandic, although lacking a subjunctive. However, the class of verbs that allow this pattern coincides with the class of verbs in Icelandic that have a subjunctive complement. Non-local binding of SE-anaphors is sensitive to the requirement that the antecedent be animate, but the languages show differences in the details. Unlike the West Germanic languages, the Scandinavian languages all have a possessive reflexive in third person. In general, their distribution appears to be quite close to that of SE-anaphors, but this is subject to dialectal variation, with various differences in the details.

Article

Blocking  

Franz Rainer

Blocking can be defined as the non-occurrence of some linguistic form, whose existence could be expected on general grounds, due to the existence of a rival form. *Oxes, for example, is blocked by oxen, *stealer by thief. Although blocking is closely associated with morphology, in reality the competing “forms” can be not only morphemes or words but also syntactic units. In German, for example, the compound Rotwein ‘red wine’ blocks the phrasal unit *roter Wein (in the relevant sense), just as the phrasal unit rote Rübe ‘beetroot; lit. red beet’ blocks the compound *Rotrübe. In these examples, one crucial factor determining blocking is synonymy; speakers apparently have a deep-rooted presumption against synonyms. Whether homonymy can also lead to a similar avoidance strategy is still controversial. But even if homonymy blocking exists, it certainly is much less systematic than synonymy blocking. In all the examples mentioned above, it is a word stored in the mental lexicon that blocks a rival formation. However, besides such cases of lexical blocking, one can observe blocking among productive patterns. Dutch has three suffixes for deriving agent nouns from verbal bases, -er, -der, and -aar. Of these three suffixes, the first one is the default choice, while -der and -aar are chosen in very specific phonological environments: as Geert Booij describes in The Morphology of Dutch (2002), “the suffix -aar occurs after stems ending in a coronal sonorant consonant preceded by schwa, and -der occurs after stems ending in /r/” (p. 122). In contrast to lexical blocking, the effect of this kind of pattern blocking does not depend on words stored in the mental lexicon and their token frequency but on abstract features (in the case at hand, phonological features). Blocking was first recognized by the Indian grammarian Pāṇini in the 5th or 4th century BCE, when he stated that of two competing rules, the more restricted one had precedence. In the 1960s, this insight was revived by generative grammarians under the name “Elsewhere Principle,” which is still used in several grammatical theories (Distributed Morphology and Paradigm Function Morphology, among others). Alternatively, other theories, which go back to the German linguist Hermann Paul, have tackled the phenomenon on the basis of the mental lexicon. The great advantage of this latter approach is that it can account, in a natural way, for the crucial role played by frequency. Frequency is also crucial in the most promising theory of how blocking can be learned, so-called statistical pre-emption.
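The Dutch suffix-selection facts quoted from Booij can be made concrete with a small sketch. The following Python snippet is illustrative only: the toy segment lists and the assumption that the -aar condition takes precedence when a stem ends in /r/ preceded by schwa are simplifications, not details taken from the article.

# Toy illustration of Dutch agent-noun suffix selection (pattern blocking),
# following the conditions quoted from Booij (2002); segment inventory and
# the precedence of the -aar condition are simplifying assumptions.
def select_agent_suffix(stem_segments):
    coronal_sonorants = {"l", "n", "r"}
    last = stem_segments[-1]
    before_last = stem_segments[-2] if len(stem_segments) > 1 else None
    if last in coronal_sonorants and before_last == "ə":
        return "-aar"   # stem ends in a coronal sonorant preceded by schwa
    if last == "r":
        return "-der"   # stem ends in /r/
    return "-er"        # default agentive suffix

# wandel- 'walk' -> wandelaar; huur- 'rent' -> huurder; werk- 'work' -> werker
print(select_agent_suffix(["ʋ", "ɑ", "n", "d", "ə", "l"]))  # -aar
print(select_agent_suffix(["h", "y", "r"]))                  # -der
print(select_agent_suffix(["ʋ", "ɛ", "r", "k"]))             # -er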

Article

Bracketing Paradoxes in Morphology  

Heather Newell

Bracketing paradoxes—constructions whose morphosyntactic and morpho-phonological structures appear to be irreconcilably at odds (e.g., unhappier)—are unanimously taken to point to truths about the derivational system that we have not yet grasped. Consider that the prefix un- must be structurally separate in some way from happier both for its own reasons (its [n] surprisingly does not assimilate in Place to a following consonant (e.g., u[n]popular)), and for reasons external to the prefix (the suffix -er must be insensitive to the presence of un-, as the comparative cannot attach to bases of three syllables or longer (e.g., *intelligenter)). But un- must simultaneously be present in the derivation before -er is merged, so that unhappier can have the proper semantic reading (‘more unhappy’, and not ‘not happier’). Bracketing paradoxes emerged as a problem for generative accounts of both morphosyntax and morphophonology only in the 1970s. With the rise of restrictions on, and technology used to describe and represent, the behavior of affixes (e.g., the Affix-Ordering Generalization, Lexical Phonology and Morphology, the Prosodic Hierarchy), morphosyntacticians and phonologists were confronted with this type of inconsistent derivation in many unrelated languages.

Article

Case  

Andrej L. Malchukov

Morphological case is conventionally defined as a system of marking dependent nominals for the type of relationship they bear to their heads. While most linguists would agree with this definition, in practice it is often a matter of controversy whether a certain marker X counts as case in language L, or how many case values language L features. First, the distinction between morphological cases and case particles/adpositions is fuzzy in a cross-linguistic perspective. Second, the distinctions between cases can be obscured by patterns of case syncretism, leading to different analyses of the underlying system. On the functional side, it is important to distinguish between syntactic (structural), semantic, and “pragmatic” cases, yet these distinctions are not clear-cut either, as syntactic cases historically arise from the latter two sources. Moreover, case paradigms of individual languages usually show a conflation between syntactic, semantic, and pragmatic cases (see the phenomenon of “focal ergativity,” where ergative case is used when the A argument is in focus). The composition of case paradigms can be shown to follow a certain typological pattern, which is captured by the case hierarchy, as proposed by Greenberg and Blake, among others. The case hierarchy constrains the way case systems evolve (or are reduced) across languages and derives from relative markedness and, ultimately, from frequencies of individual cases. The (one-dimensional) case hierarchy is, however, incapable of capturing all recurrent polysemies of individual case markers; rather, such polysemies can be represented through a more complex two-dimensional hierarchy (semantic map), which can also be given a diachronic interpretation.

Article

Case Interactions in Syntax  

Jessica Coon and Clint Parker

The phenomenon of case has been studied widely at both the descriptive and theoretical levels. Typological work on morphological case systems has provided a picture of the variability of case cross-linguistically. In particular, languages may differ with respect to whether or not arguments are marked with overt morphological case, the inventory of cases with which they may be marked, and the alignment of case marking (e.g., nominative-accusative vs. ergative-absolutive). In the theoretical realm, not only has morphological case been argued to play a role in multiple syntactic phenomena, but current generative work also debates the role of abstract case (i.e., Case) in the grammar: abstract case features have been proposed to underlie morphological case, and to license nominals in the derivation. The phenomenon of case has been argued to play a role in at least three areas of the syntax reviewed here: (a) agreement, (b) A-movement, and (c) A’-movement. Morphological case has been shown to determine a nominal argument’s eligibility to participate in verbal agreement, and recent work has argued that languages vary as to whether movement to subject position is case-sensitive. As for case-sensitive A’-movement, recent literature on ergative extraction restrictions debates whether this phenomenon should be seen as another instance of “case discrimination” or whether the pattern arises from other properties of ergative languages. Finally, other works discussed here have examined agreement and A’-extraction patterns in languages with no visible case morphology. The presence of patterns and typological gaps—both in languages with overt morphological case and in those without it—lends support to the relevance of abstract case in the syntax.

Article

Chinese Dou Quantification  

Yuli Feng and Haihua Pan

Dou has been seen as a typical example of universal quantification and the point of departure in the formal study of quantification in Chinese. The constraints on dou’s quantificational structure, dou’s diverse uses, and the compatibility between dou and other quantificational expressions have further promoted the refinement of the theory of quantification and sparked debate over the semantic nature of dou. The universal quantificational approach holds that dou is a universal quantifier and explains its diverse uses as the effects produced by quantification on different sorts of entities and different ways of quantificational mapping. However, non-quantificational approaches, integrating the insights of degree semantics and focus semantics, take the scalar use as dou’s core semantics. The quantificational approach to dou can account for its meaning of exclusiveness and the interpretational differences engendered by dou when it associates with a wh-indeterminate to its left or to its right, whereas non-quantificational approaches cannot determine the interpretational differences caused by rightward and leftward association and cannot explain the exclusive use of dou. Despite the differences, the various approaches to dou, quantificational or non-quantificational, have far-reaching theoretical significance for understanding the mechanism of quantification in natural language.

Article

Liheci ‘Separable Words’ in Mandarin Chinese  

Kuang Ye and Haihua Pan

Liheci ‘separable words’ is a special phenomenon in Mandarin Chinese: it refers to an intransitive verb with two or more syllables that allows the insertion of syntactic modifiers or an argument between the first syllable and the remaining syllable(s), with the help of the nominal modifier marker de. There are two major groups of Liheci: those stored in the lexicon, such as bangmang ‘help’, lifa ‘haircut’, and shenqi ‘anger’, and those derived in syntax through noun-to-verb incorporation, such as chifan ‘eat meal’, leiqiang ‘build wall’, in which fan ‘meal’ and qiang ‘wall’ are incorporated into chi ‘eat’ and lei ‘build’, respectively, to function as temporary verbal compounds. The well-known behavior of Liheci is that it can be separated by nominal modifiers or a syntactic argument. For example, bangmang ‘help’ can be used to form a verb phrase bang Lisi-de mang ‘give Lisi a help’ by inserting Lisi and a nominal modifier marker, de, between bang and mang, with bang being understood as the predicate and Lisi-de mang as the object. Although Lisi appears as a possessor marked by de, it should be understood as the theme object of the compound verb. In similar ways, syntactic-semantic elements such as agent, theme, adjectives, measure phrases, relative clauses, and the like can all be inserted between the two components of bangmang, deriving verb phrases like (Zhangsan) bang Zhangsan-de mang ‘(Zhangsan) do Zhangsan’s help’, where Zhangsan is the agent; bang-le yi-ci mang ‘help once’, where yi-ci is a measure phrase; and bang bieren bu xiang bang de mang ‘give a help that others don’t want to give’, where bieren bu xiang bang is a relative clause. The same insertions can be found in Liheci formed in syntax, for example, chi liang-ci fan ‘eat two time’s meal’ (eat meals twice) and lei san-tian qiang ‘build three day’s wall’ (build walls for three days). There are three syntactic-semantic properties exhibited in verb phrases formed with Liheci: first, the understanding of possessors as the Liheci’s logical argument; second, the interdependent relation between the predicate and the complement; and, third, the obligatory use of verbal classifiers instead of nominal classifiers. In this article, first, five influential analyses in the literature are reviewed, pointing out their strengths and weaknesses. Then, the cognate object approach is discussed. Under this approach, Lihecis are found to be intransitive verbs that are capable of taking nominalized reduplicates of themselves as their cognate objects. After a complementary deletion on the verb and its reduplicate object in the Phonetic Form (PF), all the relevant verb phrases can be well derived, with no true separation involved in the derivation, as all the copies of Liheci in question remain intact all along. After a discussion of the relevant syntactic structures, it is shown that with this syntactic capacity, all participants involved in the events can be successfully accommodated and correctly interpreted. The advantages are manifested in six respects, demonstrating that this proposal fares much better than other approaches.

Article

Chinese Reflexives  

Yingying Wang and Haihua Pan

Among Chinese reflexives, simple reflexive ziji ‘self’ is best known not only for its licensing of long-distance binding that violates Binding Condition A in the canonical Binding Theory, but also for its special properties such as the asymmetry of the blocking effect. Different researchers have made great efforts to explain such phenomena from a syntactic or a semantic-pragmatic perspective, though up to now there is still no consensus on what the mechanism really is. Besides being used as an anaphor, ziji can also be used as a generic pronoun and an intensifier. Moreover, Chinese has other simple reflexives such as zishen ‘self-body’ and benren ‘person proper’, and complex ones like ta-ziji ‘himself’ and ziji-benshen ‘self-self’. These reflexives again indicate the complexity of the anaphoric system of Chinese, which calls for further investigation so that we can have a better understanding of the diversity of the binding patterns in natural languages.

Article

Chinese Syllable Structure  

Jisheng Zhang

Chinese is generally considered a monosyllabic language in that one Chinese character corresponds to one syllable and vice versa, and most characters can be used as free morphemes, although there is a tendency for words to be disyllabic. On the one hand, the syllable structure of Chinese is simple, as far as permissible sequences of segments are concerned. On the other hand, complexities arise where the status of the prenuclear glide is concerned and with respect to the phonotactic constraints between the segments. The syllabic affiliation of the prenuclear glide in the maximal CGVX Chinese syllable structure has long been a controversial issue. Traditional Chinese phonology divides the syllable into shengmu (C) and yunmu, the latter consisting of medial (G), nucleus (V), and coda (X), which is either a high vowel (i/u) or a nasal (n/ŋ). This is known as the sheng-yun model, which translates to initial-final in English (IF in short). The traditional Chinese IF syllable model differs from the onset-rhyme (OR) syllable structure model in several aspects. In the former, the initial consists of only one consonant, excluding the glide, and the final—that is, everything after the initial consonant—is not the poetic rhyming unit, since the rhyming unit excludes the prenuclear glide; in the latter, the onset includes the glide, and the rhyme—that is, everything after the onset—is the poetic rhyming unit. The traditional Chinese IF syllable model is problematic in itself. First, the final is ternary branching, which is not compatible with the binary principle in contemporary linguistics. Second, the nucleus+coda, as the poetic rhyming unit, is not structured as a constituent. Accordingly, the question arises of whether Chinese syllables can be analyzed in the OR model. Many attempts have been made to analyze the Chinese prenuclear glide in the light of current phonological theories, particularly in the OR model, based on phonetic and phonological data on Chinese. Some such studies have proposed that the prenuclear glide occupies the second position in the onset. Others have proposed that the glide is part of the nucleus. Yet others regard the glide as a secondary articulation of the onset consonant, while still others think of the glide as an independent branch directly linking to the syllable node. Also, some have proposed an IF model with initial for shengmu and final for yunmu, which branches binarily into G(lide) and R(hyme), the latter consisting of N(ucleus) and C(oda). What is more, some have put forward a universal X-bar model of the syllable to replace the OR model, based on a syntactic X-bar structure. So far, there has been no authoritative finding that has conclusively settled the question of Chinese syllable structure. Moreover, the syllable is the cross-linguistic domain for phonotactics. The number of syllables in Chinese is much smaller than that in many other languages, mainly because of the complicated phonotactics of the language, which strictly govern the segmental relations within CGVX. In the X-bar syllable structure, the Chinese phonotactic constraints which configure segmental relations in the syllable domain mirror the theta rules which capture the configurational relations between specifier and head, and between head and complement, in syntax. On the whole, analysis of the complexities of the Chinese syllable will shed light on the cross-linguistic representation of syllable structure, making a significant contribution to phonological typology in general.
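To make the contrast between the two parses concrete, here is a toy Python sketch; the example syllable guan and the flat dictionary representation are illustrative assumptions, not drawn from the article.

# Toy contrast between the traditional initial-final (IF) parse and the
# onset-rhyme (OR) parse of a maximal CGVX Mandarin syllable, e.g. guan
# (C=g, G=u, V=a, X=n). Labels follow the abstract; segments are pinyin.
syllable = {"C": "g", "G": "u", "V": "a", "X": "n"}

# IF model: the initial is the single consonant; the final is everything
# after it (medial + nucleus + coda), a ternary grouping.
if_parse = {
    "initial": syllable["C"],
    "final": {"medial": syllable["G"], "nucleus": syllable["V"], "coda": syllable["X"]},
}

# OR model: the onset includes the glide; the rhyme (nucleus + coda) is the
# poetic rhyming unit and excludes the prenuclear glide.
or_parse = {
    "onset": syllable["C"] + syllable["G"],
    "rhyme": {"nucleus": syllable["V"], "coda": syllable["X"]},
}

print(if_parse)
print(or_parse)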

Article

Clause Types (and Clausal Complementation) in Germanic  

Julia Bacskai-Atkari

Clauses can fulfill various functions in discourse; in most cases, the form of the clause is indicative of its discourse function. The discourse functions (such as making statements or asking questions) are referred to as speech acts, while the grammatical counterparts are referred to as clause types (such as declarative or interrogative). Declarative clauses are canonical (that is, they are syntactically more basic than non-canonical ones): they are by default used to express statements, and they represent the most unmarked word order configuration(s) in a language. Other clause types, such as interrogatives, can be distinguished by various means, including changes in the intonation pattern, different (non-canonical) word orders, the use of morphosyntactic markers (such as interrogative words), as well as combinations of these, as can be observed across Germanic. The explicit marking of clause types is referred to as clause typing, and it affects both the syntactic component of the grammar and its interfaces. Apart from main clauses, which can correspond to complete utterances, there are also embedded clauses, which are contained within another clause, referred to as the matrix clause: matrix clauses can be either main clauses or embedded clauses. Embedded clauses may be argument clauses, in which case they are selected by a matrix element (such as a verb), but they can also be adjunct clauses, which modify some element in the matrix clause (or the entire matrix clause). Embedded clauses fall into various clause types. Some of these can also be main clauses, such as declarative clauses or interrogative clauses. Other embedded clause types do not occur as main clauses, as is the case for relative clauses or comparative clauses. Clause typing in embedded clauses has two major aspects: embedded clauses are distinguished from matrix clauses and from other embedded clause types. Main clauses can be typed in various—syntactic and non-syntactic—ways, but Germanic languages type embedded clauses by morphosyntactic means; intonation plays little, if any, role. These morphosyntactic markers fall into various categories according to what roles they fulfill in the clause. Germanic languages show considerable variation in morphosyntactic markers, depending on the clause type and the variety, and in many cases, such markers can also co-occur, resulting in complex left peripheries.

Article

Clitics and Clitic Clusters in Morphology  

Eulalia Bonet

Clitics can be defined as prosodically defective function words. They can belong to a number of syntactic categories, such as articles, pronouns, prepositions, complementizers, negative adverbs, or auxiliaries. They do not generally belong to open classes, like verbs, nouns, or adjectives. Their prosodically defective character is most often manifested by the absence of stress, which in turn correlates with vowel reduction in those languages that have it independently; sometimes the clitic can be just a consonant or a consonant cluster, with no vowel. This same prosodically defective character forces them to attach either to the word that follows them (proclisis) or to the word that precedes them (enclisis); in some cases they even appear inside a word (mesoclisis or endoclisis). The word to which a clitic attaches is called the host. In some languages (like some dialects of Italian or Catalan) enclitics can surface as stressed, but the presence of stress can be argued to be the result of assignment of stress to the host-clitic complex, not to the clitic itself. One consequence of clitics being prosodically defective is that they cannot be the sole element of an utterance, for instance as an answer to some question; they need to always appear with a host. A useful distinction is that between simple clitics and special clitics. Simple clitics often have a nonclitic variant and appear in the expected syntactic position for nonclitics of their syntactic category. Much more attention has been paid in the literature to special clitics. Special clitics appear in a designated position within the clause or within the noun phrase (or determiner phrase). In several languages certain clitics must appear in second position, within the clause, as in most South Slavic languages, or within the noun phrase, as in Kwakw'ala. The pronominal clitics of Romance languages or Greek must have the verb as a host and appear in a position different from the full noun phrase. A much debated question is whether the position of special clitics is the result of syntactic movement, or whether other factors, morphological or phonological, intervene as well or are the sole motivation for their position. Clitics can also cluster, with some languages allowing only sequences of two clitics, and other languages allowing longer sequences. Here one relevant question is what determines the order of the clitics, with the main avenues of analysis being approaches based on syntactic movement, approaches based on the types of morphosyntactic features each clitic has, and approaches based on templates. An additional issue concerning clitic clusters is the incompatibility between specific clitics when combined and the changes that this incompatibility can provoke in the form of one or more of the clitics. Combinations of identical or nearly identical clitics are often disallowed, and the constraint known as the Person-Case Constraint (PCC) disallows combinations of clitics with a first or second person accusative clitic (a direct object, DO, clitic) and a third person (and sometimes also first or second person) dative clitic (an indirect object, IO, clitic). In all these cases either one of the clitics surfaces with the form of another clitic or one of the clitics does not surface; sometimes there is no possible output. Here again both syntactic and morphological approaches have been proposed.

Article

Cognitive Semantics in the Romance Languages  

Ulrich Detges

Cognitive semantics (CS) is an approach to the study of linguistic meaning. It is based on the assumption that the human linguistic capacity is part of our cognitive abilities, and that language in general and meaning in particular can therefore be better understood by taking into account the cognitive mechanisms that control the conceptual and perceptual processing of extra-linguistic reality. Issues central to CS are (a) the notion of prototype and its role in the description of language, (b) the nature of linguistic meaning, and (c) the functioning of different types of semantic relations. The question concerning the nature of meaning is an issue that is particularly controversial between CS on the one hand and structuralist and generative approaches on the other hand: is linguistic meaning conceptual, that is, part of our encyclopedic knowledge (as is claimed by CS), or is it autonomous, that is, based on abstract and language-specific features? According to CS, the most important types of semantic relations are metaphor, metonymy, and different kinds of taxonomic relations, which, in turn, can be further broken down into more basic associative relations such as similarity, contiguity, and contrast. These play a central role not only in polysemy and word formation, that is, in the lexicon, but also in the grammar.

Article

Combining Forms and Affixoids in Morphology  

Dany Amiot and Edwige Dugas

Word-formation encompasses a wide range of processes, among which we find derivation and compounding, two processes yielding productive patterns which enable the speaker to understand and to coin new lexemes. This article draws a distinction between two types of constituents (suffixes, combining forms, splinters, affixoids, etc.) on the one hand and word-formation processes (derivation, compounding, blending, etc.) on the other hand, but also shows that a given constituent can appear in different word-formation processes. First, it describes prototypical derivation and compounding in terms of word-formation processes and of their constituents: Prototypical derivation involves a base lexeme, that is, a free lexical element belonging to a major part-of-speech category (noun, verb, or adjective) and, very often, an affix (e.g., Fr. laverV ‘to wash’ > lavableA ‘washable’), while prototypical compounding involves two lexemes (e.g., Eng. rainN + fallV > rainfallN). The description of these prototypical phenomena provides a starting point for the description of other types of constituents and word-formation processes. There are indeed at least two phenomena which do not meet this description, namely, combining forms (henceforth CFs) and affixoids, and which therefore pose an interesting challenge to linguistic description, be it synchronic or diachronic. The distinction between combining forms and affixoids is not easy to establish and the definitions are often confusing, but productivity is a good criterion to distinguish them from each other, even if it does not answer all the questions raised by bound forms. In the literature, the notions of CF and affixoid are not unanimously agreed upon, especially that of affixoid. Yet this article stresses that they enable us to highlight, and even conceptualize, the gradual nature of linguistic phenomena, whether from a synchronic or a diachronic point of view.

Article

Computational Phonology  

Jane Chandlee and Jeffrey Heinz

Computational phonology studies the nature of the computations necessary and sufficient for characterizing phonological knowledge. As a field it is informed by the theories of computation and phonology. The computational nature of phonological knowledge is important because at a fundamental level it is about the psychological nature of memory as it pertains to phonological knowledge. Different types of phonological knowledge can be characterized as computational problems, and the solutions to these problems reveal their computational nature. In contrast to syntactic knowledge, there is clear evidence that phonological knowledge is computationally bounded to the so-called regular classes of sets and relations. These classes have multiple mathematical characterizations in terms of logic, automata, and algebra with significant implications for the nature of memory. In fact, there is evidence that phonological knowledge is bounded by particular subregular classes, with more restrictive logical, automata-theoretic, and algebraic characterizations, and thus by weaker models of memory.
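As a toy illustration of what a subregular characterization looks like (the constraint and the test words below are hypothetical, not taken from the article), a strictly 2-local phonotactic ban can be evaluated while remembering only the previous segment, which is exactly the finitely bounded memory of a small finite-state acceptor.

# Minimal sketch of a strictly 2-local (SL2) phonotactic constraint:
# a word is well-formed iff it contains no banned adjacent pair of segments.
# Checking this requires remembering only the previous segment, i.e., the
# bounded memory of a finite-state acceptor.
BANNED_BIGRAMS = frozenset({("n", "t")})  # hypothetical illustrative ban

def obeys_sl2(word):
    return all((a, b) not in BANNED_BIGRAMS for a, b in zip(word, word[1:]))

print(obeys_sl2("tano"))   # True: no banned pair occurs
print(obeys_sl2("tanto"))  # False: contains the adjacent pair ('n', 't')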