Article

James Myers

Acceptability judgments are reports of a speaker’s or signer’s subjective sense of the well-formedness, nativeness, or naturalness of (novel) linguistic forms. Their value lies in providing data about the nature of the human capacity to generalize beyond linguistic forms previously encountered in language comprehension. For this reason, acceptability judgments are often also called grammaticality judgments (particularly in syntax), although unlike the theory-dependent notion of grammaticality, acceptability is accessible to consciousness. While acceptability judgments have been used to test grammatical claims since ancient times, they became particularly prominent with the birth of generative syntax. Today they are also widely used in other linguistic schools (e.g., cognitive linguistics) and other linguistic domains (pragmatics, semantics, morphology, and phonology), and have been applied in a typologically diverse range of languages. As psychological responses to linguistic stimuli, acceptability judgments are experimental data. Their value thus depends on the validity of the experimental procedures, which, in their traditional version (where theoreticians elicit judgments from themselves or a few colleagues), have been criticized as overly informal and biased. Traditional responses to such criticisms have been supplemented in recent years by laboratory experiments that use formal psycholinguistic methods to collect and quantify judgments from nonlinguists under controlled conditions. Such formal experiments have played an increasingly influential role in theoretical linguistics, being used to justify subtle judgment claims or new grammatical models that incorporate gradience or lexical influences. They have also been used to probe the cognitive processes giving rise to the sense of acceptability itself, the central finding being that acceptability reflects processing ease. Exploring what this finding means will require not only further empirical work on the acceptability judgment process, but also theoretical work on the nature of grammar.
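A note on the quantification step mentioned above: one common practice in formal judgment experiments is to normalize each participant’s rating-scale responses as z-scores before analysis, so that differences in how individuals use the scale do not masquerade as differences in acceptability. The snippet below is a minimal sketch of that step only; the participant labels and ratings are invented for illustration.

```python
from statistics import mean, stdev

# Hypothetical raw acceptability ratings on a 1-7 scale, keyed by
# participant; participants may use the scale very differently.
ratings = {
    "P1": [7, 6, 2, 1, 5],
    "P2": [5, 5, 3, 2, 4],
}

def zscore_by_participant(ratings):
    """Convert each participant's ratings to z-scores (rating minus that
    participant's mean, divided by their standard deviation)."""
    normalized = {}
    for participant, rs in ratings.items():
        m, s = mean(rs), stdev(rs)
        normalized[participant] = [round((r - m) / s, 2) for r in rs]
    return normalized

print(zscore_by_participant(ratings))
```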

Article

The Romance languages are characterized by the existence of pronominal clitics. Third person pronominal clitics are often, but not always, homophonous with the definite determiner series in the same language. Both pronominal and determiner clitics emerge early in child acquisition, but their path of development varies depending on clitic type and language. While determiner clitic acquisition is quite homogeneous across Romance, there is wide cross-linguistic variation for pronominal clitics (accusative vs. partitive vs. dative, first/second person vs. third person); the observed differences in acquisition correlate with syntactic differences between the pronouns. Acquisition of pronominal clitics is also affected if a language has both null objects and object clitics, as in European Portuguese. The interpretation of Romance pronominal clitics is generally target-like in child grammar, with absence of Pronoun Interpretation problems like those found in languages with strong pronouns. Studies on developmental language impairment show that, as in typical development, clitic production is subject to cross-linguistic variation. The divergent performance between determiners and pronominals in this population points to the syntactic (as opposed to phonological) nature of the deficit.

Article

K. A. Jayaseelan

The Dravidian languages have a long-distance reflexive anaphor taan. (It is taan in Tamil and Malayalam, taanu in Kannada, and tanu in Telugu.) As is the case with other long-distance anaphors, it is subject-oriented; it is also [+human] and third person. Interestingly, it is infelicitous if bound within the minimal clause when it is an argument of the verb. (That is, it seems to obey Principle B of the binding theory.) Although it is subject-oriented in the normal case, it can be bound by a non-subject if the verb is a “psych predicate,” that is, a predicate that denotes a feeling; in this case, it can be bound by the experiencer of the feeling. Again, in a discourse that depicts the thoughts, feelings, or point of view of a protagonist—the so-called “logophoric contexts”—it can be coreferential with the protagonist even if the latter is mentioned only in the preceding discourse (not within the sentence). These latter facts suggest that the anaphor is in fact coindexed with the perspective of the clause (rather than with the subject per se). In cases where this anaphor needs to be coindexed with the minimal subject (to express a meaning like ‘John loves himself’), the Dravidian languages exhibit two strategies to circumvent the Principle B effect. Malayalam adds an emphasis marker tanne to the anaphor; taan tanne can corefer with the minimal subject. This strategy parallels the strategy of European and East Asian languages (cf. Scandinavian seg selv). The three other major Dravidian languages—Tamil, Telugu, and Kannada—use a verbal reflexive: they add a light verb koL- (lit. ‘take’) to the verbal complex, which has the effect of reflexivizing the transitive predicate. (It either makes the verb intransitive or gives it a self-benefactive meaning.) The Dravidian languages also have reciprocal and distributive anaphors. These have bipartite structures. An example of a Malayalam reciprocal anaphor is oral … matte aaL (‘one person … other person’). The distributive anaphor in Malayalam has the form awar-awar (‘they-they’); it is a reduplicated pronoun. The reciprocals and distributives are strict anaphors in the sense that they apparently obey Principle A; they must be bound in the domain of the minimal subject. They are not subject-oriented. A noteworthy fact about the pronominal system of Dravidian is that the third person pronouns come in proximal-distal pairs, the proximal pronoun being used to refer to something nearby and the distal pronoun being used elsewhere.

Article

Japanese is a language where the grammatical status of arguments and adjuncts is marked exclusively by postnominal case markers, and various argument realization patterns can be assessed by their case marking. Since Japanese is typologically categorized as a language of the nominative-accusative type, the unmarked case-marking frame obtained for transitive predicates of the non-stative (or eventive) type is ‘nominative-accusative’. Nevertheless, transitive predicates falling into the stative class often have other case-marking alignments, such as ‘nominative-nominative’ and ‘dative-nominative’. Consequently, Japanese provides many more argument realization patterns than expected from its typological character as a nominative-accusative language. In fact, argument marking can be much more elastic and variable, the variations being motivated by several linguistic factors. Arguments often have the option of receiving either syntactic or semantic case, with no difference in the logical or cognitive meaning (as in plural agent and source agent alternations) or depending on the meanings their predicates carry (as in locative alternation). The type of case marking that is not normally available in main clauses can sometimes be obtained in embedded contexts (i.e., in exceptional case marking and small-clause constructions). In complex predicates, including causative and indirect passive predicates, arguments are case-marked differently from their base clauses by virtue of suffixation, and their case patterns follow the mono-clausal case array, despite the fact that they have multi-clausal structures. Various case marking options are also made available for arguments by grammatical operations. Some processes instantiate a change in the grammatical relations and case marking of arguments with no affixation or embedding. Japanese has the grammatical process of subjectivization, creating extra (non-thematic) major subjects, many of which are identified as instances of ‘possessor raising’ (or argument ascension). There is another type of grammatical process, which reduces the number of arguments by incorporating a noun into the predicate, as found in the light verb constructions with suru ‘do’ and the complex adjective constructions formed on the negative adjective nai ‘non-existent.’

Article

Malka Rappaport Hovav

Words are sensitive to syntactic context. Argument realization is the study of the relation between argument-taking words, the syntactic contexts they appear in, and the interpretive properties that constrain the relation between them.

Article

Jim Wood and Neil Myler

The topic “argument structure and morphology” refers to the interaction between the number and nature of the arguments taken by a given predicate on the one hand, and the morphological makeup of that predicate on the other. This domain turns out to be crucial to the study of a number of theoretical issues, including the nature of thematic representations, the proper treatment of irregularity (both morphophonological and morphosemantic), and the very place of morphology in the architecture of the grammar. A recurring question within all existing theoretical approaches is whether word formation should be conceived of as split across two “places” in the grammar, or as taking place in only one.

Article

Anton Karl Ingason and Einar Freyr Sigurðsson

Attributive compounds are words that include two parts, a head and a non-head, both of which include lexical roots, and in which the non-head is interpreted as a modifier of the head. The nature of this modification is sometimes described in terms of a covert relationship R. The nature of R has been the subject of much discussion in the literature, including proposals that a finite and limited number of interpretive options are available for R, as well as analyses in which the interpretation of R is unrestricted and varies with context. The modification relationship between the parts of an attributive compound also contrasts with the interpretation of other types of compounds: some non-heads in compounds saturate argument positions of the head, others are semantically conjoined with the head, and some restrict the head’s domain of interpretation.

Article

William R. Leben

Autosegments were introduced by John Goldsmith in his 1976 M.I.T. dissertation to represent tone and other suprasegmental phenomena. Goldsmith’s intuition, embodied in the term he created, was that autosegments constituted an independent, conceptually equal tier of phonological representation, with both tiers realized simultaneously like the separate voices in a musical score. The analysis of suprasegmentals came late to generative phonology, even though it had been tackled in American structuralism with the long components of Harris’s 1944 article, “Simultaneous components in phonology,” and despite being a particular focus of Firthian prosodic analysis. The standard version of generative phonology of the era (Chomsky and Halle’s The Sound Pattern of English) made no special provision for phenomena that had been labeled suprasegmental or prosodic by earlier traditions. An early sign that tones required a separate tier of representation was the phenomenon of tonal stability. In many tone languages, when vowels are lost historically or synchronically, their tones remain. The behavior of contour tones in many languages also falls into place when the contours are broken down into sequences of level tones on an independent level of representation. The autosegmental framework captured this naturally, since a sequence of elements on one tier can be connected to a single element on another. But the single most compelling aspect of the early autosegmental model was a natural account of tone spreading, a very common process that was only awkwardly captured by rules of whatever sort. Goldsmith’s autosegmental solution was the Well-Formedness Condition, requiring, among other things, that every tone on the tonal tier be associated with some segment on the segmental tier, and vice versa. Tones thus spread more or less automatically to segments lacking them. The Well-Formedness Condition, at the very core of the autosegmental framework, was a rare constraint, posited nearly two decades before Optimality Theory. One-to-many associations and spreading onto adjacent elements are characteristic of tone but not confined to it. Similar behaviors are widespread in long-distance phenomena, including intonation, vowel harmony, and nasal prosodies, as well as more locally with partial or full assimilation across adjacent segments. The early autosegmental notion of tiers of representation that were distinct but conceptually equal soon gave way to a model with one basic tier connected to tiers for particular kinds of articulation, including tone and intonation, nasality, vowel features, and others. This has led to hierarchical representations of phonological features in current models of feature geometry, replacing the unordered distinctive feature matrices of early generative phonology. Autosegmental representations and processes also provide a means of representing non-concatenative morphology, notably the complex interweaving of roots and patterns in Semitic languages. Later work modified many of the key properties of the autosegmental model. Optimality Theory has led to a radical rethinking of autosegmental mapping, delinking, and spreading as they were formulated under the earlier derivational paradigm.
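Goldsmith’s association conventions and the Well-Formedness Condition can be stated almost algorithmically: link tones to tone-bearing units one-to-one from left to right, spread the final tone onto any remaining toneless syllables, and dock leftover tones onto the final syllable, where they surface as a contour. The sketch below is a toy rendering of that procedure under those simplifying assumptions; the tone melodies and syllables are invented.

```python
def associate(tones, syllables):
    """Toy autosegmental mapping: one-to-one left-to-right association,
    then spread the final tone to toneless syllables, or dock leftover
    tones on the final syllable (yielding a contour)."""
    links = []
    for i, syl in enumerate(syllables):
        if i < len(tones):
            links.append((syl, [tones[i]]))
        else:
            # Well-Formedness Condition: no toneless syllables;
            # spread the last tone rightward.
            links.append((syl, [tones[-1]]))
    if len(tones) > len(syllables):
        # No floating tones: dock the leftovers on the last syllable.
        links[-1] = (syllables[-1], links[-1][1] + tones[len(syllables):])
    return links

print(associate(["H", "L"], ["ba", "la", "ma"]))  # spreading: H L L
print(associate(["H", "L"], ["ba"]))              # contour: HL on one syllable
```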

Article

Adina Dragomirescu

Balkan-Romance is represented by Romanian and its historical dialects: Daco-Romanian (broadly known as Romanian), Aromanian, Megleno-Romanian, and Istro-Romanian (see article “Morphological and Syntactic Variation and Change in Romanian” in this encyclopedia). The external history of these varieties is often unclear, given the historical events that took place in the Lower Danubian region: the conquest of this territory by the Roman Empire for a short period and the successive Slavic invasions. Moreover, the earliest preserved writing in Romanian only dates from the 16th century. Between the Roman presence in the Balkans and the first attested text, there is a gap of more than 1,000 years, a period in which Romanian emerged, the dialectal separation took place, and the Slavic influence had effects especially on the lexis of Romanian. In the 16th century, in the earliest old Romanian texts, the language already displayed the main features of modern Romanian: the vowels /ə/ and /ɨ/; the nominative-accusative versus genitive-dative case distinction; analytical case markers, such as the genitive marker al; the functional prepositions a and la; the proclitic genitive-dative marker lui; the suffixal definite article; polydefinite structures; possessive affixes; rich verbal inflection, with both analytic and synthetic forms and with three auxiliaries (‘have’, ‘be’, and ‘want’); the supine, not completely verbalized at the time; two types of infinitives, with the ‘short’ one on a path toward becoming verbal and the ‘long’ one specializing as a noun; null subjects; nonfinite verb forms with lexical subjects; the mechanism for differential object marking and clitic doubling with slightly more vacillating rules than in the present-day language; two types of passives; strict negative concord; the SVO and VSO word orders; adjectives placed mainly in the postnominal position; a rich system of pronominal clitics; prepositions requiring the accusative and the genitive; and a large inventory of subordinating conjunctions introducing complement clauses. Most of these features are also attested in the trans-Danubian varieties (Aromanian, Megleno-Romanian, and Istro-Romanian), which were also strongly influenced by the various languages they have come into direct contact with: Greek, Albanian, Macedonian, Croatian, and so forth. These source languages have had a major influence on the vocabulary of the trans-Danubian varieties and certain consequences for the shape of their grammatical system. The differences between Daco-Romanian and the trans-Danubian varieties have also resulted from the preservation of archaic features in the latter or from innovations that took place only there.

Article

Franz Rainer

Blocking can be defined as the non-occurrence of some linguistic form, whose existence could be expected on general grounds, due to the existence of a rival form. *Oxes, for example, is blocked by oxen, *stealer by thief. Although blocking is closely associated with morphology, in reality the competing “forms” can be not only morphemes or words but also syntactic units. In German, for example, the compound Rotwein ‘red wine’ blocks the phrasal unit *roter Wein (in the relevant sense), just as the phrasal unit rote Rübe ‘beetroot; lit. red beet’ blocks the compound *Rotrübe. In these examples, one crucial factor determining blocking is synonymy; speakers apparently have a deep-rooted presumption against synonyms. Whether homonymy can also lead to a similar avoidance strategy is still controversial. But even if homonymy blocking exists, it certainly is much less systematic than synonymy blocking. In all the examples mentioned above, it is a word stored in the mental lexicon that blocks a rival formation. However, besides such cases of lexical blocking, one can observe blocking among productive patterns. Dutch has three suffixes for deriving agent nouns from verbal bases, -er, -der, and -aar. Of these three suffixes, the first one is the default choice, while -der and -aar are chosen in very specific phonological environments: as Geert Booij describes in The Morphology of Dutch (2002), “the suffix -aar occurs after stems ending in a coronal sonorant consonant preceded by schwa, and -der occurs after stems ending in /r/” (p. 122). Contrary to lexical blocking, the effect of this kind of pattern blocking does not depend on words stored in the mental lexicon and their token frequency but on abstract features (in the case at hand, phonological features). Blocking was first recognized by the Indian grammarian Pāṇini in the 5th or 4th century BCE, when he stated that of two competing rules, the more restricted one had precedence. In the 1960s, this insight was revived by generative grammarians under the name “Elsewhere Principle,” which is still used in several grammatical theories (Distributed Morphology and Paradigm Function Morphology, among others). Alternatively, other theories, which go back to the German linguist Hermann Paul, have tackled the phenomenon on the basis of the mental lexicon. The great advantage of this latter approach is that it can account, in a natural way, for the crucial role played by frequency. Frequency is also crucial in so-called statistical pre-emption, the most promising theory of how blocking can be learned.
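The Dutch facts lend themselves to an Elsewhere-style rule sketch: the restricted suffixes apply in their phonological niches, and default -er applies elsewhere. The toy function below uses written -el/-en/-er as a rough orthographic proxy for “schwa plus coronal sonorant”; the proxy and the sample stems are simplifications for illustration, not Booij’s actual formalization.

```python
def agent_suffix(stem):
    """Approximate Dutch agent-noun suffix selection (after Booij 2002).
    Orthographic proxy: stem-final '-el', '-en', '-er' stand in for
    schwa followed by a coronal sonorant; a sketch, not a full model."""
    if stem.endswith(("el", "en", "er")):  # schwa + coronal sonorant -> -aar
        return stem + "aar"
    if stem.endswith("r"):                 # stem-final /r/ -> -der
        return stem + "der"
    return stem + "er"                     # elsewhere: default -er

for stem in ["wandel", "teken", "huur", "werk"]:
    print(stem, "->", agent_suffix(stem))
# wandel -> wandelaar, teken -> tekenaar, huur -> huurder, werk -> werker
```

Note that the -aar clause must be checked before the -der clause, so that stems ending in schwa plus /r/ still take -aar; ordering the more specific condition first is exactly the Elsewhere logic described above.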

Article

Bracketing paradoxes—constructions whose morphosyntactic and morpho-phonological structures appear to be irreconcilably at odds (e.g., unhappier)—are unanimously taken to point to truths about the derivational system that we have not yet grasped. Consider that the prefix un- must be structurally separate in some way from happier both for its own reasons (its [n] surprisingly does not assimilate in Place to a following consonant (e.g., u[n]popular)), and for reasons external to the prefix (the suffix -er must be insensitive to the presence of un-, as the comparative cannot attach to bases of three syllables or longer (e.g., *intelligenter)). But un- must simultaneously be present in the derivation before -er is merged, so that unhappier can have the proper semantic reading (‘more unhappy’, and not ‘not happier’). Bracketing paradoxes emerged as a problem for generative accounts of both morphosyntax and morphophonology only in the 1970s. With the rise of restrictions on affixation and of formal technology for describing and representing the behavior of affixes (e.g., the Affix-Ordering Generalization, Lexical Phonology and Morphology, the Prosodic Hierarchy), morphosyntacticians and phonologists were confronted with this type of inconsistent derivation in many unrelated languages.
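The conflict can be made concrete with a toy check that encodes the two conditions named above: the phonological size restriction on -er and the semantic scope requirement on un-. Neither bracketing satisfies both, which is the paradox. The syllable counter below is a deliberately crude stand-in for real phonology.

```python
import re

def syllables(word):
    """Crude syllable count: runs of vowel letters (illustration only)."""
    return len(re.findall(r"[aeiouy]+", word))

def phonology_ok(base):
    # The -er comparative rejects bases of three syllables or longer.
    return syllables(base) <= 2

def meaning(bracketing):
    # [[un-happy]-er] = 'more unhappy'; [un-[happy-er]] = 'not happier'.
    return "more unhappy" if bracketing == "[[un-happy]-er]" else "not happier"

for bracketing, base in [("[[un-happy]-er]", "unhappy"),
                         ("[un-[happy-er]]", "happy")]:
    print(bracketing, "| -er attaches to:", base,
          "| phonology OK:", phonology_ok(base),
          "| meaning:", meaning(bracketing))
```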

Article

Case  

Andrej L. Malchukov

Morphological case is conventionally defined as a system of marking dependent nominals for the type of relationship they bear to their heads. While most linguists would agree with this definition, in practice it is often a matter of controversy whether a certain marker X counts as case in language L, or how many case values language L features. First, the distinction between morphological cases and case particles/adpositions is fuzzy in a cross-linguistic perspective. Second, the distinctions between cases can be obscured by patterns of case syncretism, leading to different analyses of the underlying system. On the functional side, it is important to distinguish between syntactic (structural), semantic, and “pragmatic” cases, yet these distinctions are not clear-cut either, as syntactic cases historically arise from the latter two sources. Moreover, case paradigms of individual languages usually show a conflation between syntactic, semantic, and pragmatic cases (see the phenomenon of “focal ergativity,” where ergative case is used when the A argument is in focus). The composition of case paradigms can be shown to follow a certain typological pattern, which is captured by the case hierarchy, as proposed by Greenberg and Blake, among others. The case hierarchy constrains the way case systems evolve (or are reduced) across languages and derives from relative markedness and, ultimately, from frequencies of individual cases. The (one-dimensional) case hierarchy is, however, incapable of capturing all recurrent polysemies of individual case markers; rather, such polysemies can be represented through a more complex two-dimensional hierarchy (semantic map), which can also be given a diachronic interpretation.
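One common textbook rendering of the case hierarchy (after Blake) is nominative < accusative/ergative < genitive < dative < locative < ablative/instrumental, with the prediction that a language’s case inventory occupies a contiguous initial stretch of the hierarchy. The checker below encodes that prediction as a sketch; the labels and sample inventories are simplified for illustration.

```python
# Simplified rendering of the case hierarchy (highest-ranked case first).
HIERARCHY = ["nominative", "accusative/ergative", "genitive",
             "dative", "locative", "ablative/instrumental"]

def respects_hierarchy(inventory):
    """A case inventory conforms if it is a contiguous initial segment of
    the hierarchy: having a lower case implies having every case above it."""
    ranks = sorted(HIERARCHY.index(c) for c in inventory)
    return ranks == list(range(len(ranks)))

print(respects_hierarchy(["nominative", "accusative/ergative", "genitive"]))
# True: a contiguous top segment
print(respects_hierarchy(["nominative", "dative"]))
# False: skips the intermediate cases, an unexpected system
```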

Article

Jessica Coon and Clint Parker

The phenomenon of case has been studied widely at both the descriptive and theoretical levels. Typological work on morphological case systems has provided a picture of the variability of case cross-linguistically. In particular, languages may differ with respect to whether or not arguments are marked with overt morphological case, the inventory of cases with which they may be marked, and the alignment of case marking (e.g., nominative-accusative vs. ergative-absolutive). In the theoretical realm, not only has morphological case been argued to play a role in multiple syntactic phenomena, but current generative work also debates the role of abstract case (i.e., Case) in the grammar: abstract case features have been proposed to underlie morphological case, and to license nominals in the derivation. The phenomenon of case has been argued to play a role in at least three areas of the syntax reviewed here: (a) agreement, (b) A-movement, and (c) A’-movement. Morphological case has been shown to determine a nominal argument’s eligibility to participate in verbal agreement, and recent work has argued that languages vary as to whether movement to subject position is case-sensitive. As for case-sensitive A’-movement, recent literature on ergative extraction restrictions debates whether this phenomenon should be seen as another instance of “case discrimination” or whether the pattern arises from other properties of ergative languages. Finally, other works discussed here have examined agreement and A’-extraction patterns in languages with no visible case morphology. The presence of patterns and typological gaps—both in languages with overt morphological case and in those without it—lends support to the relevance of abstract case in the syntax.
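The claim that morphological case determines eligibility for agreement can be given a toy rendering: suppose each language specifies which cases are accessible to verbal agreement, and the verb agrees with the structurally highest accessible argument. The function below is a schematic illustration of this idea only, not any particular paper’s formalization; the clause and case labels are invented.

```python
# Toy model of case-discriminating agreement: the finite verb agrees
# with the highest (leftmost) argument whose case is accessible.
def agreement_target(arguments, accessible_cases):
    """arguments: list of (noun, case) pairs in structural order,
    highest argument first."""
    for noun, case in arguments:
        if case in accessible_cases:
            return noun
    return None  # no eligible argument: default agreement morphology

clause = [("woman", "ergative"), ("bread", "absolutive")]
print(agreement_target(clause, {"absolutive"}))              # 'bread'
print(agreement_target(clause, {"absolutive", "ergative"}))  # 'woman'
```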

Article

Mercedes Tubino-Blanco

The Causative/Inchoative alternation involves pairs of verbs, one of which is causative and the other non-causative syntactically and semantically (e.g., John broke the window vs. The window broke). In its causative use, an alternating verb is used transitively and understood as externally caused. When used non-causatively, the verb is intransitive and interpreted as spontaneous. The alternation typically exhibits an affected argument (i.e., a Theme) in both intransitive and transitive uses, whereas the transitive use also involves a Causer that brings about the event. Although they are often volitional agents (e.g., John broke the window with a stone), external causers may also be non-volitional causers (e.g., The earthquake broke the windows) and instruments (e.g., The hammer broke the window). Morphologically, languages exhibit different patterns reflecting the alternation, even intralinguistically. In languages like English, alternations are not morphologically coded, but they are in most languages. Languages like Hindi commonly mark causative (or transitive) alternations by means of different mechanisms, such as internal vowel changes or causative morphology. In many European languages, a subset of alternating verbs may exhibit an uncoded alternation, but most alternating verbs mark anticausativization with a reflexive-like clitic. In Yaqui (Uto-Aztecan), different patterns are associated with different verbal roots. The alternation may be uncoded, equipollent (i.e., both alternating forms are coded), or anticausative. Theoretically, different approaches have explored the alternation. Both lexical and syntactic causativization and anticausativization accounts have been proposed to explain the alternation. A third approach postulates that both forms are derived from a common source.

Article

Children’s acquisition of language is an amazing feat. Children master the syntax, the sentence structure of their language, through exposure and interaction with caregivers and others but, notably, with no formal tuition. How children come to be in command of the syntax of their language has been a topic of vigorous debate since Chomsky argued against Skinner’s claim that language is ‘verbal behavior.’ Chomsky argued that knowledge of language cannot be learned through experience alone but is guided by a genetic component. This language component, known as ‘Universal Grammar,’ is composed of abstract linguistic knowledge and a computational system that is special to language. The computational mechanisms of Universal Grammar give even young children the capacity to form hierarchical syntactic representations for the sentences they hear and produce. The abstract knowledge of language guides children’s hypotheses as they interact with the language input in their environment, ensuring they progress toward the adult grammar. An alternative school of thought denies the existence of a dedicated language component, arguing that knowledge of syntax is learned entirely through interactions with speakers of the language. Such ‘usage-based’ linguistic theories assume that language learning employs the same learning mechanisms that are used by other cognitive systems. Usage-based accounts of language development view children’s earliest productions as rote-learned phrases that lack internal structure. Knowledge of linguistic structure emerges gradually and in a piecemeal fashion, with frequency playing a large role in the order of emergence for different syntactic structures.

Article

Louise Cummings

Clinical linguistics is the branch of linguistics that applies linguistic concepts and theories to the study of language disorders. As the name suggests, clinical linguistics is a dual-facing discipline. Although the conceptual roots of this field are in linguistics, its domain of application is the vast array of clinical disorders that may compromise the use and understanding of language. Both dimensions of clinical linguistics can be addressed through an examination of specific linguistic deficits in individuals with neurodevelopmental disorders, craniofacial anomalies, adult-onset neurological impairments, psychiatric disorders, and neurodegenerative disorders. Clinical linguists are interested in the full range of linguistic deficits in these conditions, including phonetic deficits of children with cleft lip and palate, morphosyntactic errors in children with specific language impairment, and pragmatic language impairments in adults with schizophrenia. Like many applied disciplines in linguistics, clinical linguistics sits at the intersection of a number of areas. The relationships of clinical linguistics to the study of communication disorders and to speech-language pathology (speech and language therapy in the United Kingdom) are two particularly important points of intersection. Speech-language pathology is the area of clinical practice that assesses and treats children and adults with communication disorders. All language disorders restrict an individual’s ability to communicate freely with others in a range of contexts and settings. So language disorders are first and foremost communication disorders. To understand language disorders, it is useful to think of them in terms of points of breakdown on a communication cycle that tracks the progress of a linguistic utterance from its conception in the mind of a speaker to its comprehension by a hearer. This cycle permits the introduction of a number of important distinctions in language pathology, such as the distinction between a receptive and an expressive language disorder, and between a developmental and an acquired language disorder. The cycle is also a useful model with which to conceptualize a range of communication disorders other than language disorders. These other disorders, which include hearing, voice, and fluency disorders, are also relevant to clinical linguistics. Clinical linguistics draws on the conceptual resources of the full range of linguistic disciplines to describe and explain language disorders. These disciplines include phonetics, phonology, morphology, syntax, semantics, pragmatics, and discourse. Each of these linguistic disciplines contributes concepts and theories that can shed light on the nature of language disorder. A wide range of tools and approaches are used by clinical linguists and speech-language pathologists to assess, diagnose, and treat language disorders. They include the use of standardized and norm-referenced tests, communication checklists and profiles (some administered by clinicians, others by parents, teachers, and caregivers), and qualitative methods such as conversation analysis and discourse analysis. Finally, clinical linguists can contribute to debates about the nosology of language disorders. In order to do so, however, they must have an understanding of the place of language disorders in internationally recognized classification systems such as the 2013 Diagnostic and Statistical Manual of Mental Disorders (DSM-5) of the American Psychiatric Association.

Article

Clitics can be defined as prosodically defective function words. They can belong to a number of syntactic categories, such as articles, pronouns, prepositions, complementizers, negative adverbs, or auxiliaries. They do not generally belong to open classes, like verbs, nouns, or adjectives. Their prosodically defective character is most often manifested by the absence of stress, which in turn correlates with vowel reduction in those languages that have it independently; sometimes the clitic can be just a consonant or a consonant cluster, with no vowel. This same prosodically defective character forces them to attach either to the word that follows them (proclisis) or to the word that precedes them (enclisis); in some cases they even appear inside a word (mesoclisis or endoclisis). The word to which a clitic attaches is called the host. In some languages (like some dialects of Italian or Catalan) enclitics can surface as stressed, but the presence of stress can be argued to be the result of assignment of stress to the host-clitic complex, not to the clitic itself. One consequence of clitics being prosodically defective is that they cannot be the sole element of an utterance, for instance as an answer to some question; they need to always appear with a host. A useful distinction is that between simple clitics and special clitics. Simple clitics often have a nonclitic variant and appear in the expected syntactic position for nonclitics of their syntactic category. Much more attention has been paid in the literature to special clitics. Special clitics appear in a designated position within the clause or within the noun phrase (or determiner phrase). In several languages certain clitics must appear in second position, within the clause, as in most South Slavic languages, or within the noun phrase, as in Kwakw'ala. The pronominal clitics of Romance languages or Greek must have the verb as a host and appear in a position different from the full noun phrase. A much debated question is whether the position of special clitics is the result of syntactic movement, or whether other factors, morphological or phonological, intervene as well or are the sole motivation for their position. Clitics can also cluster, with some languages allowing only sequences of two clitics, and other languages allowing longer sequences. Here one relevant question is what determines the order of the clitics, with the main avenues of analysis being approaches based on syntactic movement, approaches based on the types of morphosyntactic features each clitic has, and approaches based on templates. An additional issue concerning clitic clusters is the incompatibility between specific clitics when combined and the changes that this incompatibility can provoke in the form of one or more of the clitics. Combinations of identical or nearly identical clitics are often disallowed, and the constraint known as the Person-Case Constraint (PCC) disallows combinations of clitics with a first or second person accusative clitic (a direct object, DO, clitic) and a third person (and sometimes also first or second person) dative clitic (an indirect object, IO, clitic). In all these cases either one of the clitics surfaces with the form of another clitic or one of the clitics does not surface; sometimes there is no possible output. Here again both syntactic and morphological approaches have been proposed.
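The strong version of the Person-Case Constraint described above reduces to a single clause: in a cluster of an indirect-object and a direct-object clitic, the accusative (DO) clitic must be third person. A minimal validator is sketched below with person values only; weaker versions of the constraint and the language-particular repairs mentioned above are set aside.

```python
def strong_pcc_ok(dative_person, accusative_person):
    """Strong Person-Case Constraint: in an IO+DO clitic cluster,
    the direct-object (accusative) clitic must be 3rd person."""
    return accusative_person == 3

print(strong_pcc_ok(dative_person=3, accusative_person=3))
# True: 3rd person DO with 3rd person IO is fine
print(strong_pcc_ok(dative_person=3, accusative_person=1))
# False: a 1st person DO clitic with a dative clitic is blocked
```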

Article

Modification is a combinatorial semantic operation between a modifier and a modifiee. Take, for example, vegetarian soup: the attributive adjective vegetarian modifies the nominal modifiee soup and thus constrains the range of potential referents of the complex expression to soups that are vegetarian. Similarly, in Ben is preparing a soup in the camper, the adverbial in the camper modifies the preparation by locating it. Notably, modifiers can have fairly drastic effects; in fake stove, the attribute fake has the effect that the complex expression singles out objects that seem to be stoves but are not. Intuitively, modifiers contribute additional information that is not explicitly called for by the target the modifier relates to. Speaking in terms of logic, this roughly says that modification is an endotypical operation; that is, it does not change the arity, or logical type, of the modified target constituent. Speaking in terms of syntax, this predicts that modifiers are typically adjuncts and thus do not change the syntactic distribution of their respective target; therefore, modifiers can be easily iterated (see, for instance, spicy vegetarian soup or Ben prepared a soup in the camper yesterday). This initial characterization sets modification apart from other combinatorial operations such as argument satisfaction and quantification: combining a soup with prepare satisfies an argument slot of the verbal head and thus reduces its arity (see, for instance, *prepare a soup a quiche). Quantification, as in the combination of the quantifier every with the noun soup, maps a nominal property onto a quantifying expression with a different distribution (see, for instance, *a every soup). Their comparatively loose connection to their hosts renders modifiers a flexible, though certainly not random, means of combinatorial meaning constitution. The foundational question is how to build this endotypical character into a full-fledged compositional analysis. On the one hand, modifiers can be considered endotypical functors by virtue of their lexical endowment; for instance, vegetarian would be born a higher-order function from predicates to predicates. On the other hand, modification can be considered a rule-based operation; for instance, vegetarian would denote a simple predicate from entities to truth-values that receives its modifying endotypical function only by virtue of a separate modification rule. In order to assess this and related controversies empirically, research on modification pays particular attention to interface questions such as the following: how do structural conditions and the modifying function conspire in establishing complex interpretations? What roles do ontological information and fine-grained conceptual knowledge play in the course of concept combination?
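The two analytical options just described can be rendered concretely if predicates are modeled as Boolean functions over entities: on the lexical view, vegetarian is born a function from predicates to predicates; on the rule-based view, it is a plain predicate that a general modification rule combines with the noun. The toy entities and names below are invented, and intersection stands in for whatever the full modification rule would be (it would plainly not suffice for fake).

```python
# Toy model: predicates are functions from entities to booleans.
soups = {"lentil_soup", "chicken_soup"}
veggie = {"lentil_soup", "salad"}

soup = lambda x: x in soups
is_vegetarian = lambda x: x in veggie

# Option 1: the modifier is lexically a functor from predicates to
# predicates (an endotypical function by lexical endowment).
vegetarian_functor = lambda pred: (lambda x: is_vegetarian(x) and pred(x))
vegetarian_soup_1 = vegetarian_functor(soup)

# Option 2: the modifier is a simple predicate; a separate modification
# rule (here: predicate intersection) supplies the endotypical combination.
def modify(modifier, modifiee):
    return lambda x: modifier(x) and modifiee(x)

vegetarian_soup_2 = modify(is_vegetarian, soup)

domain = {"lentil_soup", "chicken_soup", "salad"}
print({x for x in domain if vegetarian_soup_1(x)})  # {'lentil_soup'}
print({x for x in domain if vegetarian_soup_2(x)})  # {'lentil_soup'}
```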

Article

Compound and complex predicates—predicates that consist of two or more lexical items and function as the predicate of a single sentence—present an important class of linguistic objects that pertain to an enormously wide range of issues in the interactions of morphology, phonology, syntax, and semantics. Japanese makes extensive use of compounding to expand a single verb into a complex one. These compounding processes range over multiple modules of the grammatical system, thus straddling the borders between morphology, syntax, phonology, and semantics. In terms of degree of phonological integration, two types of compound predicates can be distinguished. In the first type, called tight compound predicates, two elements from the native lexical stratum are tightly fused and inflect as a whole for tense. In this group, Verb-Verb compound verbs such as arai-nagasu [wash-let.flow] ‘to wash away’ and hare-agaru [sky.be.clear-go.up] ‘for the sky to clear up entirely’ are preponderant in number and productivity over Noun-Verb compound verbs such as tema-doru [time-take] ‘to take a lot of time (to finish).’ The second type, called loose compound predicates, takes the form of “Noun + Predicate (Verbal Noun [VN] or Adjectival Noun [AN]),” as in post-syntactic compounds like [sinsya : koonyuu] no okyakusama ([new.car : purchase] GEN customers) ‘customer(s) who purchase(d) a new car,’ where the symbol “:” stands for a short phonological break. Remarkably, loose compounding allows combinations of a transitive VN with its agent subject (external argument), as in [Supirubaagu : seisaku] no eiga ([Spielberg : produce] GEN film) ‘a film/films that Spielberg produces/produced’—a pattern that is illegitimate in tight compounds and has in fact been considered universally impossible in the world’s languages in verbal compounding and noun incorporation. In addition to a huge variety of tight and loose compound predicates, Japanese has an additional class of syntactic constructions that as a whole function as complex predicates. Typical examples are the light verb construction, where a clause headed by a VN is followed by the light verb suru ‘do,’ as in Tomodati wa sinsya o koonyuu (sae) sita [friend TOP new.car ACC purchase (even) did] ‘My friend (even) bought a new car,’ and the human physical attribute construction, as in Sensei wa aoi me o site-iru [teacher TOP blue eye ACC do-ing] ‘My teacher has blue eyes.’ In these constructions, the nominal phrases immediately preceding the verb suru are semantically characterized as indefinite and non-referential and reject syntactic operations such as movement and deletion. The semantic indefiniteness and syntactic immobility of the NPs involved are also observed with a construction composed of a human subject and the verb aru ‘be,’ as in Gakkai ni wa oozei no sankasya ga atta ‘There was a large number of participants at the conference.’ The constellation of such “word-like” properties shared by these compound and complex predicates poses challenging problems for current theories of morphology-syntax-semantics interactions with regard to such topics as lexical integrity, morphological compounding, syntactic incorporation, semantic incorporation, pseudo-incorporation, and indefinite/non-referential NPs.

Article

Jane Chandlee and Jeffrey Heinz

Computational phonology studies the nature of the computations necessary and sufficient for characterizing phonological knowledge. As a field it is informed by the theories of computation and phonology. The computational nature of phonological knowledge is important because at a fundamental level it is about the psychological nature of memory as it pertains to phonological knowledge. Different types of phonological knowledge can be characterized as computational problems, and the solutions to these problems reveal their computational nature. In contrast to syntactic knowledge, there is clear evidence that phonological knowledge is computationally bounded by the so-called regular classes of sets and relations. These classes have multiple mathematical characterizations in terms of logic, automata, and algebra, with significant implications for the nature of memory. In fact, there is evidence that phonological knowledge is bounded by particular subregular classes, with more restrictive logical, automata-theoretic, and algebraic characterizations, and thus by weaker models of memory.
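To make “subregular” concrete: many phonotactic patterns are strictly local, meaning they can be checked by scanning a bounded window across the string with no unbounded memory. The sketch below implements a strictly 2-local acceptor as a finite set of banned adjacent pairs, using the English ban on word-initial /ŋ/ as its single constraint; the encoding details are illustrative, not a claim about any particular formal framework.

```python
# A strictly 2-local (SL2) acceptor: a word is well-formed iff it contains
# none of a finite set of banned adjacent pairs, computed over the string
# with word boundaries ('#') added. Memory is bounded: one previous symbol.
BANNED = {("#", "ŋ")}  # e.g., English bans word-initial /ŋ/

def sl2_accepts(segments, banned=BANNED):
    padded = ["#"] + list(segments) + ["#"]
    return all((a, b) not in banned for a, b in zip(padded, padded[1:]))

print(sl2_accepts(["s", "ɪ", "ŋ"]))  # True:  'sing' (final /ŋ/ is fine)
print(sl2_accepts(["ŋ", "ɪ", "s"]))  # False: word-initial /ŋ/ is banned
```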