Phonotactics is the study of restrictions on possible sound sequences in a language. In any language, some phonotactic constraints can be stated without reference to morphology, but many of the more nuanced phonotactic generalizations do make use of morphosyntactic and lexical information. At the most basic level, many languages mark the edges of words in some phonological way. Different phonotactic constraints hold of sounds that belong to the same morpheme as opposed to sounds that are separated by a morpheme boundary. Different phonotactic constraints may apply to morphemes of different types (such as roots versus affixes). There are also correlations between the phonotactic shape of a morpheme and the morphosyntactic and phonological rules it follows; such correlations may track syntactic category, declension class, or etymological origin.
Approaches to the interaction between phonotactics and morphology address two questions: (1) how to account for rules that are sensitive to morpheme boundaries and structure, and (2) how to determine the status of phonotactic constraints associated with only some morphemes. Theories differ as to how much morphological information phonology is allowed to access. In some theories of phonology, any reference to the specific identities or subclasses of morphemes would exclude a rule from the domain of phonology proper. These rules are either part of the morphology or are not given the status of a rule at all. Other theories allow the phonological grammar to refer to detailed morphological and lexical information. Depending on the theory, phonotactic differences between morphemes may receive direct explanations or be seen as the residue of historical change and not something that constitutes grammatical knowledge in the speaker’s mind.
A fundamental difference in theoretical models of morphology and, particularly, of the syntax–morphology interface is that between endoskeletal and exoskeletal approaches. In the former, more traditional, endoskeletal approaches, open-class lexical items like cat or sing are held to be inherently endowed with a series of formal features that determine the properties of the linguistic expressions in which they appear. In the latter, more recent, exoskeletal approaches, it is rather the morphosyntactic configurations, independently produced by the combination of abstract functional elements, that determine those properties. Lexical items, in this latter approach, are part of the structure but, crucially, do not determine it.
Conceptually, although a correlation is usually made between endoskeletalism and lexicalism/projectionism, on the one hand, and between exoskeletalism and (neo)constructionism, on the other, things are actually more complicated, and some frameworks exist that seem to challenge those correlations, in particular when the difference between word and morpheme is taken into account.
Empirically, the difference between these two approaches to morphology and the morphology-syntax interface comes to light when one examines how each one treats a diversity of word-related phenomena: morphosyntactic category and category shift in derivational processes, inflectional class, nominal properties like mass or count, and verbal properties like agentivity and (a)telicity.
The syntax–phonology interface refers to the way syntax and phonology are interconnected. Although syntax and phonology constitute different language domains, it seems undisputed that they relate to each other in nontrivial ways. There are different theories about the syntax–phonology interface. They differ in how far each domain is seen as relevant to generalizations in the other domain, and in the types of information from each domain that are available to the other.
Some theories see the interface as unlimited in the direction and types of syntax–phonology connections, with syntax impacting on phonology and phonology impacting on syntax. Other theories constrain mutual interaction to a set of specific syntactic phenomena (i.e., discourse-related) that may be influenced by a limited set of phonological phenomena (namely, heaviness and rhythm). In most theories, there is an asymmetrical relationship: specific types of syntactic information are available to phonology, whereas syntax is phonology-free.
The role that syntax plays in phonology, as well as the types of syntactic information that are relevant to phonology, is also a matter of debate. At one extreme, Direct Reference Theories claim that phonological phenomena, such as external sandhi processes, refer directly to syntactic information. However, approaches arguing for a direct influence of syntax differ on the types of syntactic information needed to account for phonological phenomena, from syntactic heads and structural configurations (like c-command and government) to feature checking relationships and phase units. The precise syntactic information that is relevant to phonology may depend on (the particular version of) the theory of syntax assumed to account for syntax–phonology mapping. At the other extreme, Prosodic Hierarchy Theories propose that syntactic and phonological representations are fundamentally distinct and that the output of the syntax–phonology interface is prosodic structure. Under this view, phonological phenomena refer to the phonological domains defined in prosodic structure. The structure of phonological domains is built from the interaction of a limited set of syntactic information with phonological principles related to constituent size, weight, and eurhythmic effects, among others. The kind of syntactic information used in the computation of prosodic structure distinguishes between different Prosodic Hierarchy Theories: the relation-based approach makes reference to notions like head-complement, modifier-head relations, and syntactic branching, while the end-based approach focuses on edges of syntactic heads and maximal projections. Common to both approaches is the distinction between lexical and functional categories, with the latter being invisible to the syntax–phonology mapping. Besides accounting for external sandhi phenomena, prosodic structure interacts with other phonological representations, such as metrical structure and intonational structure.
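The end-based mapping lends itself to a simple procedural illustration. The sketch below (a minimal Python toy, not drawn from the source) closes a phonological phrase at the right edge of every lexical maximal projection; the sentence, its bracketing, and the lexical/functional tags are invented for illustration, and the sketch abstracts away from size, weight, and eurhythmic effects.

```python
# Toy end-based syntax-to-prosody mapping: close a phonological phrase (phi)
# at the right edge of every lexical maximal projection. Functional words
# never project a lexical XP, so they are invisible to the mapping here.
# Sentence, bracketing, and tags are invented for illustration.
tokens = [
    ("the",      False),
    ("children", True),   # right edge of the subject NP
    ("saw",      False),
    ("the",      False),
    ("dog",      True),   # right edge of the object NP (and of the VP)
]

phrases, current = [], []
for word, ends_lexical_xp in tokens:
    current.append(word)
    if ends_lexical_xp:          # align phi right edge with lexical XP right edge
        phrases.append(" ".join(current))
        current = []
if current:                      # flush any trailing material
    phrases.append(" ".join(current))

print(phrases)                   # ['the children', 'saw the dog']
```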
As this theoretical diversity shows, the study of the syntax–phonology interface raises many fundamental questions. A systematic comparison among proposals with reference to empirical evidence is still lacking. In addition, findings from language acquisition and development and from language processing constitute novel sources of evidence that need to be taken into account. The syntax–phonology interface thus promises to remain a challenging research field in the years to come.
Scrambling is one of the most widely discussed and prominent factors affecting word order variation in Korean. Scrambling in Korean exhibits various syntactic and semantic properties that cannot be subsumed under standard A/A'-movement. Both clause-external and clause-internal scrambling in Korean show mixed A/A'-effects in a range of tests such as anaphor binding, weak crossover, Condition C, negative polarity item licensing, wh-licensing, and scopal interpretation. VP-internal scrambling, by contrast, is known to lack reconstruction effects, conforming to the claim that short scrambling is A-movement. Clausal scrambling, on the other hand, shows total reconstruction effects, unlike phrasal scrambling. The diverse properties of Korean scrambling have received extensive attention in the literature. Some studies argue that scrambling is a type of feature-driven A-movement with special reconstruction effects. Others argue that scrambling can be A-movement or A'-movement depending on the landing site. Yet others claim that scrambling is not standard A/A'-movement, but must be treated as cost-free movement with optional reconstruction effects. Each approach, however, faces non-trivial empirical and theoretical challenges, and further study is needed to understand the complex nature of scrambling. As the theory develops in the Minimalist Program, a variety of proposals have also been advanced to capture properties of scrambling without resorting to A/A'-distinctions.
Scrambling in Korean applies optionally but not randomly. It may be blocked by various factors in syntax and its interfaces in the grammar. In the syntax proper, scrambling obeys general constraints on movement (e.g., island conditions, the left branch condition, the coordinate structure condition, the proper binding condition, and the ban on string-vacuous movement). Various semantic and pragmatic factors (e.g., specificity, presuppositionality, topic, focus) also play a crucial role in the acceptability of sentences with scrambling. Moreover, current studies show that certain instances of scrambling are filtered out at the interface due to cyclic Spell-out and linearization, which strengthens the claim that scrambling is not a free option. Data from Korean pose important challenges to base-generation approaches to scrambling, and lend further credence to the view that scrambling is an instance of movement. The exact nature of scrambling in Korean—whether it is cost-free or feature-driven—must be further investigated in future research, however. The research on Korean scrambling leads us toward a general theory that covers obligatory A/A'-movement as well as optional displacement with mixed semantic effects in languages with free word order.
Language is a system that maps meanings to forms, but the mapping is not always one-to-one. Variation means that one meaning corresponds to multiple forms, for example faster ~ more fast. The choice is not uniquely determined by the rules of the language, but is made by the individual at the time of performance (speaking, writing). Such choices abound in human language. They are usually not just a matter of free will, but involve preferences that depend on the context, including the phonological context. Phonological variation is a situation where the choice among expressions is phonologically conditioned, sometimes statistically, sometimes categorically. In this overview, we take a look at three studies of variable vowel harmony in three languages (Finnish, Hungarian, and Tommo So) formulated in three frameworks (Partial Order Optimality Theory, Stochastic Optimality Theory, and Maximum Entropy Grammar). For example, both Finnish and Hungarian have Backness Harmony: vowels must be all [+back] or all [−back] within a single word, with the exception of neutral vowels that are compatible with either. Surprisingly, some stems allow both [+back] and [−back] suffixes in free variation, for example, analyysi-na ~ analyysi-nä ‘analysis-ESS’ (essive case).
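To see how one of these frameworks derives such free variation quantitatively, the sketch below implements the core Maximum Entropy formula, under which a candidate’s probability is proportional to exp(−H), with H the weighted sum of its constraint violations. The constraint names, weights, and violation profiles are hypothetical, chosen only so that the disharmonic loan stem analyysi (which contains both a back and a front vowel) assigns nonzero probability to both suffix variants.

```python
import math

# Minimal MaxEnt grammar sketch: P(cand) = exp(-H(cand)) / Z, where
# H(cand) = sum over constraints of weight * violations. The constraint
# names and weights below are hypothetical, for illustration only.
weights = {"Agree-Back": 2.0, "Agree-Front": 1.5}

# Hypothetical violation profiles for the two suffix choices on the
# disharmonic loan stem 'analyysi' (back 'a' and front 'y' in one stem).
candidates = {
    "analyysi-na": {"Agree-Back": 0, "Agree-Front": 1},
    "analyysi-nä": {"Agree-Back": 1, "Agree-Front": 0},
}

def harmony(profile):
    return sum(weights[c] * v for c, v in profile.items())

scores = {cand: math.exp(-harmony(p)) for cand, p in candidates.items()}
Z = sum(scores.values())                 # normalizing constant
for cand, s in scores.items():
    print(f"{cand}: {s / Z:.3f}")        # analyysi-na: 0.622, analyysi-nä: 0.378
```

Under these toy weights the grammar prefers the back variant but still generates the front variant about a third of the time, which is the sense in which a MaxEnt grammar models statistically conditioned free variation.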
Phenomena involving the displacement of syntactic units are widespread in human languages. The term displacement refers here to a dependency relation whereby a given syntactic constituent is interpreted simultaneously in two different positions. Only one position is pronounced, in general the hierarchically higher one in the syntactic structure. Consider a wh-question like (1) in English:
(1) Whom did you give the book to <whom>?
The phrase containing the interrogative wh-word is located at the beginning of the clause, and this guarantees that the clause is interpreted as a question about this phrase; at the same time, whom is interpreted as part of the argument structure of the verb give (the copy, in <> brackets). In current terms, inspired by minimalist developments in generative syntax, the phrase whom is first merged as (one of) the complement(s) of give (External Merge) and then re-merged (Internal Merge, i.e., movement) in the appropriate position in the left periphery of the clause. This peripheral area of the clause hosts operator-type constituents, interrogative constituents among them (yielding, for sentence (1), the relevant interpretation: for which x, you gave the book to x). Scope-discourse phenomena, such as the raising of a question as in (1) or the focalization of a constituent as in TO JOHN I gave the book (not to Mary), have the effect that an argument of the verb is fronted to the left periphery of the clause rather than filling its clause-internal complement position, whence the term displacement. Displacement can be to a position relatively close to that of first merge (the copy), or to a position farther away. In the latter case, the relevant dependency becomes more long-distance than in (1), as in (2a) and, even more so, (2b):
(2) a. Whom did Mary expect [that you would give the book to <whom>]?
    b. Whom do you think [that Mary expected [that you would give the book to <whom>]]?
Fifty years or so of investigation of locality in formal generative syntax have shown that, despite its potentially very distant realization, syntactic displacement is in fact a local process. The audible position in which a moved constituent is pronounced and the position of its copy inside the clause can be far from each other. However, the long-distance dependency is split into steps through iterated applications of short movements, so that any dependency holding between two occurrences of the same constituent is in fact very local. Furthermore, there are syntactic domains, traditionally referred to as islands, that resist movement out of them. Locality is a core concept of syntactic computation. Syntactic locality requires that syntactic computations apply within small domains (cyclic domains), possibly in the mentioned iterated way (successive cyclicity), currently rethought in terms of Phase theory. Furthermore, in the Relativized Minimality tradition, syntactic locality requires that, given X ... Z ... Y, the dependency between the relevant constituent in its target position X and its first-merge position Y not be interrupted by any constituent Z that is similar to X in relevant formal features and thus intervenes, blocking the relation between X and Y. Intervention locality has also been shown to allow for an explicit characterization of aspects of children’s linguistic development, specifically their capacity to compute complex object dependencies (also relevant in different impaired populations).
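The intervention configuration can be made concrete with a schematic check. The Python toy below, an assumption-laden sketch rather than an implementation of Relativized Minimality, scans a chain X ... Z ... Y and reports any intervener Z that matches the moved element X in the relevant featural class; the feature labels and example chains are invented for exposition.

```python
# Schematic intervention locality: given X ... Z ... Y, the X-Y dependency
# is blocked by any Z that shares X's relevant formal features.
def dependency_ok(chain):
    """chain: list of (label, feature_set), ordered X, interveners..., Y."""
    x_label, x_features = chain[0]
    for z_label, z_features in chain[1:-1]:
        if x_features & z_features:      # Z matches X in relevant features
            return False, z_label        # dependency blocked by Z
    return True, None

# No featurally similar intervener between 'whom' and its copy: well-formed.
print(dependency_ok([("whom", {"wh"}), ("Mary", {"D"}), ("<whom>", {"wh"})]))
# -> (True, None)

# A wh-island configuration: an intervening wh-element blocks the chain.
print(dependency_ok([("how", {"wh"}), ("whether", {"wh"}), ("<how>", {"wh"})]))
# -> (False, 'whether')
```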
Type theory is a regime for classifying objects (including events) into categories called types. It was originally designed to overcome problems in the foundations of mathematics arising from Russell’s paradox. It has made an immense contribution to the study of logic and computer science and has also played a central role in formal semantics for natural languages since the initial work of Richard Montague, which built on the typed λ-calculus. More recently, type theories following in the tradition created by Per Martin-Löf have presented an important alternative to Montague’s type theory for semantic analysis. These more modern type theories yield a rich collection of types, which take on the role of representing semantic content rather than simply structuring the universe in order to avoid paradoxes.
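As a concrete glimpse of how a typed λ-calculus organizes semantic composition in the Montague tradition, the sketch below encodes the basic types e (entities) and t (truth values) and two derived function types as plain Python functions; the toy domain and lexicon are invented for illustration and sidestep intensionality entirely.

```python
# Montague-style semantic types, with Python functions standing in for
# typed lambda terms. Toy model: two individuals, one predicate.
domain = {"john", "mary"}                      # type e: individuals

# Type <e,t>: a one-place predicate, mapping entities to truth values.
sleeps = lambda x: x in {"john"}

# Type <<e,t>,t>: generalized quantifiers, mapping predicates to truth values.
everyone = lambda p: all(p(x) for x in domain)
someone  = lambda p: any(p(x) for x in domain)

print(sleeps("john"))     # True   -- 'John sleeps'
print(everyone(sleeps))   # False  -- 'Everyone sleeps' ('mary' does not sleep)
print(someone(sleeps))    # True   -- 'Someone sleeps'
```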
Computational models of human sentence comprehension help researchers reason about how grammar might actually be used in the understanding process. Taking a cognitivist approach, this article relates computational psycholinguistics to neighboring fields (such as linguistics), surveys important precedents, and catalogs open problems.
The noun-modifying clause construction (NMCC) in Japanese is a complex noun phrase in which a prenominal clause is dependent on the head noun. Naturally occurring instances of the construction demonstrate that a single structure, schematized as [[… predicate (finite/adnominal)] Noun], represents a wide range of semantic relations between the head noun and the dependent clause, encompassing some that would be expressed by structurally distinct constructions such as relative clauses, noun complement clauses, and other types of complex noun phrases in other languages, such as English. In that way, the Japanese NMCC demonstrates a clear case of the general noun-modifying construction (GNMCC), that is, an NMCC that has structural uniformity across interpretations that extend beyond the range of relative clauses.
One of the notable properties of the Japanese NMCC is that the modifying clause may consist only of the predicate, reflecting the fact that referential density is moderate in Japanese—arguments of a predicate are not required to be overtly expressed either in the main clause or in the modifying clause. Another property of the Japanese NMCC is that there is no explicit marking in the construction that indicates the grammatical or semantic relation between the head noun and the modifying clause. The two major constituents are simply juxtaposed to each other.
Successful construal of the intended interpretations of instances of such a construction, in the absence of explicit markings, likely relies on an aggregate of structural, semantic, and pragmatic factors, including the semantic content of the linguistic elements, verb valence information, and the interpreter’s real-world knowledge, in addition to the basic structural information.
Researchers with different theoretical approaches have studied Japanese NMCCs or subsets thereof. Syntactic approaches, inspired by generative grammar, have focused mostly on relative clauses and aimed to identify universally recognized syntactic principles. Studies that take the descriptive approach have focused on detailed descriptions and the classification of a wide spectrum of naturally occurring instances of the construction in Japanese. The third and most recent group of studies has emphasized the importance of semantics and pragmatics in accounting for a wide variety of naturally occurring instances.
The examination of Japanese NMCCs provides information about the nature of clausal noun modification and affords insights into languages beyond Japanese, as similar phenomena have reportedly been observed crosslinguistically to varying degrees.
This paper provides an overview of polarity phenomena in human languages. There are three prominent paradigms of polarity items: negative polarity items (NPIs), positive polarity items (PPIs), and free choice items (FCIs). What they all have in common is limited distribution: they cannot occur just anywhere, but only inside the scope of a licenser, which is negation or, more broadly, a nonveridical operator. PPIs, conversely, must appear outside the scope of negation. The need to be in the scope of a licenser creates a semantic and syntactic dependency, as the polarity item must be c-commanded by the licenser at some syntactic level. Polarity, therefore, is a true interface phenomenon and raises questions of well-formedness that depend on both semantics and syntax.
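The c-command condition can be pictured with a toy structural check. In the sketch below, constituent structure is reduced to nested pairs, and a licenser c-commands a polarity item just in case the item sits inside the licenser’s sister; the example trees and the simplified definition of c-command are assumptions made for exposition.

```python
# Toy c-command check for NPI licensing: trees are nested (left, right)
# pairs with words as leaves; a licenser c-commands the NPI iff the NPI
# is contained in the licenser's sister constituent.
def contains(tree, word):
    if isinstance(tree, tuple):
        return any(contains(child, word) for child in tree)
    return tree == word

def c_commands(tree, licenser, npi):
    if not isinstance(tree, tuple):
        return False
    left, right = tree
    if left == licenser and contains(right, npi):
        return True
    if right == licenser and contains(left, npi):
        return True
    return c_commands(left, licenser, npi) or c_commands(right, licenser, npi)

# 'Mary did not see anyone': negation c-commands the NPI -> licensed.
licensed = ("Mary", ("did", ("not", ("see", "anyone"))))
print(c_commands(licensed, "not", "anyone"))    # True

# '*Anyone did not see Mary': the NPI is outside negation's scope.
unlicensed = ("anyone", ("did", ("not", ("see", "Mary"))))
print(c_commands(unlicensed, "not", "anyone"))  # False
```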
Nonveridical polarity contexts can be negative, but they can also be non-monotonic, as in modal contexts, questions, other non-assertive contexts (imperatives, subjunctives), generic and habitual sentences, and disjunction. Some NPIs and FCIs appear freely in these contexts in many languages, while some NPIs prefer negative contexts. Within negative licensers, we make a distinction between classically and minimally negative contexts. There are no NPIs that appear only in minimally negative contexts.
The distributions of NPIs and FCIs crosslinguistically can be understood in terms of general patterns, with individual differences due largely to the lexical semantic content of the polarity item paradigms. Three general patterns can be identified as possible lexical sources of polarity. The first is the presence of a dependent variable in the polarity item, a property characterizing NPIs and FCIs in many languages, including Greek, Mandarin, and Korean. The second is scalarity: English any and FCIs can be scalar, but Greek, Korean, and Mandarin NPIs are not. Finally, it has been proposed that NPIs can be exhaustive, but exhaustivity is hard to identify precisely in a non-stipulative way, and it does not characterize all NPIs. NPIs that are not exhaustive tend to be referentially vague, meaning that the speaker uses them only if she is unable to identify a specific referent for them.