Syntax–Phonology Interface

  • Sónia Frota, Department of Linguistics, University of Lisbon
  • Marina Vigário, Phonetics Laboratory & Lisbon Baby Lab, University of Lisbon

Summary

The syntax–phonology interface refers to the way syntax and phonology are interconnected. Although syntax and phonology constitute different language domains, it seems undisputed that they relate to each other in nontrivial ways. There are different theories about the syntax–phonology interface. They differ in how far each domain is seen as relevant to generalizations in the other domain, and in the types of information from each domain that are available to the other.

Some theories see the interface as unlimited in the direction and types of syntax–phonology connections, with syntax impacting on phonology and phonology impacting on syntax. Other theories constrain mutual interaction to a set of specific syntactic phenomena (i.e., discourse-related) that may be influenced by a limited set of phonological phenomena (namely, heaviness and rhythm). In most theories, there is an asymmetrical relationship: specific types of syntactic information are available to phonology, whereas syntax is phonology-free.

The role that syntax plays in phonology, as well as the types of syntactic information that are relevant to phonology, is also a matter of debate. At one extreme, Direct Reference Theories claim that phonological phenomena, such as external sandhi processes, refer directly to syntactic information. However, approaches arguing for a direct influence of syntax differ on the types of syntactic information needed to account for phonological phenomena, from syntactic heads and structural configurations (like c-command and government) to feature checking relationships and phase units. The precise syntactic information that is relevant to phonology may depend on (the particular version of) the theory of syntax assumed to account for syntax–phonology mapping. At the other extreme, Prosodic Hierarchy Theories propose that syntactic and phonological representations are fundamentally distinct and that the output of the syntax–phonology interface is prosodic structure. Under this view, phonological phenomena refer to the phonological domains defined in prosodic structure. The structure of phonological domains is built from the interaction of a limited set of syntactic information with phonological principles related to constituent size, weight, and eurhythmic effects, among others. The kind of syntactic information used in the computation of prosodic structure distinguishes between different Prosodic Hierarchy Theories: the relation-based approach makes reference to notions like head-complement, modifier-head relations, and syntactic branching, while the end-based approach focuses on edges of syntactic heads and maximal projections. Common to both approaches is the distinction between lexical and functional categories, with the latter being invisible to the syntax–phonology mapping. Besides accounting for external sandhi phenomena, prosodic structure interacts with other phonological representations, such as metrical structure and intonational structure.

As shown by the theoretical diversity, the study of the syntax–phonology interface raises many fundamental questions. A systematic comparison among proposals with reference to empirical evidence is lacking. In addition, findings from language acquisition and development and language processing constitute novel sources of evidence that need to be taken into account. The syntax–phonology interface thus remains a challenging research field in the years to come.

Subjects

  • Linguistic Theories
  • Phonetics/Phonology
  • Syntax

1. Defining the Syntax–Phonology Interface

The syntax–phonology interface refers to the way syntax and phonology are interconnected. Although syntax and phonology constitute different language domains, it seems undisputed that they relate to each other in nontrivial ways. There are different theories about the syntax–phonology interface. They differ in how far each domain is seen as relevant to generalizations in the other domain, and in the types of information from each domain that are available to the other. This has led to diverse views on the nature of phonological representations and how they are constructed.

Scholars tend to agree that there are phonological phenomena that refer to domains, directly or indirectly, partially defined by syntax. There is also agreement on the fact that not all syntactic information is relevant for phonological structure or phonological computation. It is generally acknowledged that speech is organized into chunks or constituents of varying size (Beckman, 1996; Shattuck-Hufnagel & Turk, 1996; Wagner & Watson, 2010; Frota, 2012; Gussenhoven, 2015). This organization is known as prosodic structure, and prosodic constituents have been reported to signal units of information or sense units, as well as to establish the domains for phonological phenomena (Selkirk, 1984; Nespor & Vogel, 1986; Jun, 1998; Truckenbrodt, 1999; Frota, 2000). Prosodic structure has been shown to be used in word segmentation and in the computation of syntactic structure in language processing (Cutler, Dahan, & Donselaar, 1997; Guasti et al., 2000; Langus et al., 2012). Prosodic structure also plays a role in language development, providing fundamental cues for learning the lexicon and helping to bootstrap the acquisition of syntax (Morgan & Demuth, 1996; Christophe et al., 2008).

However, there is presently no consensus on many central issues of the syntax–phonology interface. This is an area of diverging opinions, with a wide range of theoretical diversity. The following fundamental issues are not yet resolved.

(i)

Does the phonological organization of speech directly reflect syntactic structure, or is the effect of syntax on phonology mediated by a phonological (prosodic) representation that is related to, but distinct from, syntactic structure?

(ii)

If there is a prosodic representation proper, how does it relate, or map, onto syntax?

(iii)

What aspects of syntax are relevant to phonology, and what kind of non-syntactic factors contribute to determine the phonological organization of speech?

(iv)

What are the design features of the phonological organization of speech (e.g., how many constituents, how they relate to each other, whether different structures are needed)?

(v)

What types of phenomena are sensitive to prosodic structure?

(vi)

Is the interaction between syntax and phonology asymmetrical, with syntax affecting phonology but phonology not affecting syntax, or is the influence bidirectional, with phonology also impacting on syntax?

This article focuses on the questions above. Section 2 examines (i) and (ii), Section 3 (iii) and (iv), Section 4 (v), and Section 5 (vi).

2. Syntax–Phonology Interface: Direct and Indirect Reference Approaches

A fundamental question of the syntax–phonology interface is whether phonological phenomena refer directly to syntax or indirectly by referring to prosodic representations partially built from syntax (Inkelas & Zec, 1990; Elordieta, 2008; Wagner & Watson, 2010). There are two main groups of theories of the syntax–phonology interface: Direct Reference Theories and Indirect Reference Theories or Prosodic Hierarchy Theories. At one extreme, Direct Reference Theories (DRT) claim that phonological phenomena, such as external sandhi processes, refer directly to syntactic information. At the other extreme, Prosodic Hierarchy Theories (PHT) propose that syntactic and phonological representations are fundamentally distinct and that the output of the syntax–phonology interface is prosodic structure. Under the prosodic view, phonological phenomena refer to the phonological constituents defined in prosodic structure on the basis of limited syntactic information, together with phonological factors.

2.1 Direct Reference Approaches

In early work in phonetics and psycholinguistics, phonological and phonetic phenomena were taken to directly reflect syntactic structure (Lehiste, 1973; Cooper & Sorensen, 1977; Cooper & Paccia-Cooper, 1980). For example, intonational contours, or lengthening, were analyzed as signaling syntactic boundaries. In models of categorial grammar (Steedman, 1991), syntactic structure and prosodic structure are matched, with the former corresponding to the latter. In these cases, either there is no level of phonological representation or this level is isomorphic to syntactic representation.

However, the main arguments in favor of direct reference to syntax are phonological phenomena, identified in several languages, that are described as referring to specific syntactic information (e.g., various syntactic categories or syntactic configurations). Crucially, the syntactic properties needed to account for such phenomena are too varied and too particular to be captured by generalizations over syntactic categories and configurations, like lexical head or lexical phrase. An illustrative case is provided by a set of phenomena in the phonology of Kimatuumbi, described by Odden (1987, 1990). The phonological processes of shortening, phrasal tone insertion, initial tone insertion, lengthening, and glide formation are conditioned by an array of different kinds of syntactic properties (e.g., heads, phrasal nodes in particular configurations, initial or final edges of particular types), as well as lexical properties (e.g., class, lexical category of the elements affected).

Kaisse (1985) is one of the main proponents of DRT. Kaisse’s proposal is based on external sandhi phenomena that show domains of application conditioned by syntactic information, in particular by syntactic organization, like the c-command relation, and by particular syntactic category labels. Lenition in Gilyak illustrates the former case. In this language, word-initial obstruents are voiced after nasals and spirantized after vowels, but only if the target of lenition is in a word that c-commands the preceding word. Vowel deletion in Modern Greek could illustrate the latter case, in the account provided in Kaisse (1977), as different rules are described to apply in V-PP and V-Adv combinations. However, as noted in Kaisse (1990), a prosodic hierarchy account of the Greek sandhi phenomena is possible through a generalization in broad syntactic and prosodic terms where heads and phrases play a crucial role in establishing the prosodic domains for vowel sandhi (Condoravdi, 1990).

At least some of the phenomena supporting DRT can be (re)analyzed within Hayes’s (1990) theory of Precompiled Phrasal Phonology. The behavior of the phenomena described in the paragraphs above resembles that of phenomena that apply at the word level and below to specific morphemes or words. An example of variation in the form of morphemes that do not strictly follow from phonological factors is provided by the alternation in the Spanish feminine definite article, which appears as [el] or [la]. The first allomorph is conditioned by certain nouns that start with a stressed [a], whereas the second appears elsewhere, including other syntactic categories beginning in stressed [a]. Allomorphy involving verb and clitic clusters in several Romance languages has similar properties, being sensitive to a preceding verb, in some cases to the type of verbal inflection, as well as to phonological properties (Peperkamp, 1997, for Italian; Vigário, 2003, for European Portuguese). These morphophonological alternations can be viewed as different allomorphs listed in the lexicon. Hayes’s proposal is that phrasal phonological phenomena that refer to specific syntactic information are treated in a parallel way, as phrasal allomorphs. The lexicon includes phrasal allomorphs and the phonological instantiation frames where they are inserted. These frames specify the conditioning environment of the allomorph, which is partly syntactic. For example, the phonological process of shortening in Kimatuumbi, which was described above as supporting the DRT, is instead a case of precompiled phrasal phonology according to Hayes.
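
The selection logic behind listing phrasal allomorphs with their instantiation frames can be illustrated schematically. The Python sketch below is a toy rendering of the Spanish article alternation under simplified assumptions: the frame checks only a small hypothetical lexicon of nouns beginning with stressed [a], and all function and variable names are invented for illustration. It shows the selection logic implied by precompiled listing, not Hayes's formalism itself.

```python
# Toy illustration of the selection logic behind precompiled phrasal
# allomorphy (cf. Hayes, 1990). The conditioning frame is simplified:
# the real environment involves the noun's lexical class, not just its
# phonological shape. The noun list and all names are hypothetical.

def begins_with_stressed_a(word):
    """Hypothetical check for nouns that begin with stressed [a]."""
    return word in {"agua", "águila", "alma", "hacha"}   # toy lexicon

# Each allomorph of the feminine article is listed with its frame.
FEM_ARTICLE = [
    ("el", begins_with_stressed_a),       # before stressed-[a] nouns
    ("la", lambda nxt: True),             # elsewhere
]

def feminine_article(next_word):
    """Return the first listed allomorph whose frame matches."""
    for form, frame in FEM_ARTICLE:
        if frame(next_word):
            return form

print(feminine_article("agua"), "agua")    # -> el agua
print(feminine_article("casa"), "casa")    # -> la casa
```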

More recently, other models have been proposed that are close to DRT in the sense that they argue for a direct influence of syntax in the application of phonological processes (Seidl, 2001; Dobashy, 2003; Ishihara, 2003, 2007; Kahnemuyipour, 2004; Kratzer & Selkirk, 2007; Pak, 2008; Samuels, 2009, 2010). In the model of Seidl (2001), there are two levels of representation mapped from (morpho)syntactic structure: one that is defined by syntactic phases (Chomsky, 2001), and another that is defined by theta-domains, or domains where theta-roles are assigned. There are also two types of phonological processes, depending on the level of representation where they apply. The first type, called early rules, operates at the first level of representation, and thus has access to all syntactic information. The second type, called late rules, operates at the second level of representation, a level that is not purely syntactic and where prosodic domains are projected from theta-domains. This dichotomy is reminiscent of Kaisse’s (1985) segregation of P1-rules, which are defined as those primarily sensitive to syntactic information. However, the phonological process of shortening in Kimatuumbi, for example, that is seen as a P1-rule in Kaisse’s terms (and in the DRT view), and a precompiled phrasal rule in Hayes (1990), is analyzed by Seidl as a late rule, operating at the level of representation projected from theta-domains.

Other proposals (Dobashy, 2003; Ishihara, 2003, 2007; Kahnemuyipour, 2004; Kratzer & Selkirk, 2007; Pak, 2008; Samuels, 2009, 2010) have developed the idea that phonological constituents are mapped from syntactic phases, through a process known as Multiple Spell-out (cf. Uriagereka, 1999; Chomsky, 2001). In Chomsky's minimalist theory of syntax, CP and vP correspond to two phase domains, and thus to phonological phrases. Spell-out domains, namely VP and IP/TP, have also been argued to be mapped as phonological phrases (Dobashy, 2003; Ishihara, 2007). The assumption that phases and spell-out domains are essentially the same has led to the theory of Phonological Derivation by Phase (Samuels, 2009, 2010), according to which spell-out domains are the only domains required by phonology. Other spell-out domains may also correspond to prosodic constituents (Kratzer & Selkirk, 2007; Pak, 2008): for example, the Intonational Phrase may correspond to the spell-out of Comma Phrase, and the Prosodic Word can be understood as corresponding to the spell-out domain of lexical heads.
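
The idea that spell-out domains are shipped to phonology as phrasal units can be given a rough procedural rendering. The sketch below is a deliberately simplified illustration, not an implementation of any of the cited proposals: the tree encoding, the assumption that TP and VP are the relevant spell-out domains, and all names are hypothetical.

```python
# Toy spell-out-by-phase mapping (in the spirit of Dobashy, 2003, and
# Ishihara, 2007): the terminals accumulated inside each designated
# spell-out domain (assumed here to be TP and VP) are shipped to phonology
# as one phonological phrase, and residual material is wrapped up at the
# root. The tree encoding, labels, and names are hypothetical.

SPELL_OUT_DOMAINS = {"TP", "VP"}

def spell_out(tree):
    phrases, pending = _walk(tree)
    if pending:                         # leftover material at the root
        phrases.append(pending)
    return phrases

def _walk(node):
    label, children = node
    if isinstance(children, str):       # terminal node: a word
        return [], [children]
    phrases, pending = [], []
    for child in children:
        child_phrases, child_pending = _walk(child)
        phrases.extend(child_phrases)
        pending.extend(child_pending)
    if label in SPELL_OUT_DOMAINS:      # spell-out point reached
        phrases.append(pending)
        pending = []
    return phrases, pending

tree = ("CP", [("C", "that"),
               ("TP", [("DP", [("D", "the"), ("N", "children")]),
                       ("T", "will"),
                       ("VP", [("V", "read"),
                               ("DP", [("D", "the"), ("N", "book")])])])])

print(spell_out(tree))
# -> [['read', 'the', 'book'], ['the', 'children', 'will'], ['that']]
```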

2.2 Indirect Reference Approaches

Since Chomsky and Halle (1968), it has been acknowledged that syntactic constituency and phonological constituency are not isomorphic (Selkirk, 1981; Rotenberg, 1978; Napoli & Nespor, 1979). In other words, phonological processes at the phrasal level operate in domains that may not coincide with syntactic domains. This observation, together with the observation that many phonological processes do not require direct access to morphosyntactic information, led to the development of Prosodic Hierarchy Theories (PHT). A central feature of PHT is the proposal of a phonological (prosodic) representation (i) that is mapped from morphosyntactic structure, using limited syntactic information, (ii) that is composed of constituents non-isomorphic with syntactic constituents, and (iii) that displays a hierarchical organization distinct from that found in syntactic structure (e.g., Selkirk, 1984, 1986, 1995; Nespor & Vogel, 1986; Truckenbrodt, 1995, 1999; Ito & Mester, 2009). Therefore, the effect of syntax on phonology is indirect, and confined to the relation between syntactic and prosodic structure. There are two main approaches in PHT: the Relation-Based approach and the Edge-Based approach. Common to both approaches is the distinction between lexical and functional categories, with the latter being invisible to the syntax–phonology mapping. A third approach, developed more recently, is the Match Theory. The kind of syntactic information used in the computation of prosodic structure distinguishes between the different approaches.

2.2.1 Relation-Based Approach

The Relation-based approach of PHT was developed mainly in work by Nespor and Vogel (1982, 1986) and Hayes (1989). Within this approach, the mapping between syntax and phonology refers to the following types of syntactic information: lexical heads, syntactic terminal nodes, lexical phrases (maximal projections of lexical heads), clausal and sentential nodes, relation between heads and complements, relation between heads and modifiers, difference between complements or arguments and modifiers, and direction of branching or (non)recursive side (head-initial, head-final). Relation-based mapping conditions therefore make crucial reference to general syntactic structural relations. This can be illustrated by the relation-based algorithm that leads to the formation of phonological phrases in prosodic structure. The notions of lexical head, lexical phrase, (non)recursive side, complement, and branchingness play a role so that a range of possibilities is predicted: (i) the head and all material on the nonrecursive side inside the lexical phrase domain (as shown by word-initial voicing assimilation in Quechua; Nespor & Vogel, 1986); (ii) the head, the material on the nonrecursive side, and the first complement of the head (as in the case of vowel shortening in Chimwiini; Nespor & Vogel, 1986; Hayes, 1989); (iii) the head, the material on the nonrecursive side, and the first complement if it is not branching (examples are provided by sandhi and rhythmic phenomena in Italian, or by rhythmic phenomena and pitch accent distribution in European Portuguese; Nespor & Vogel, 1986; Frota, 2000). Thus, in some languages head and complement belong to the same phonological phrase irrespective of the branchingness or size of the complement, whereas in others only nonbranching complements are grouped together with heads. The sensitivity to the relative length of complements is seen as a general tendency in the phonology of some languages to avoid short phonological phrases. Additional nonsyntactic factors may play a role in determining phonological phrases, such as speech rate. In languages where the phrasing of the head and complement in the same phonological phrase is optional, this phrasing is more frequently obtained in fast speech than in slow speech (Nespor & Vogel, 1986). Although it has been claimed that phonological phrases and other smaller prosodic domains, unlike larger domains, are not affected by nonsyntactic factors and are thus more closely related to morphosyntactic information (e.g., Kaisse, 1990; Pak, 2005), nonsyntactic factors seem to play a role in determining phonological constituency at the different levels of phrasing (see section 3.1).
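
The phrasing options (i)–(iii) can be schematized procedurally. The following sketch uses a toy encoding of a lexical phrase (all data structures and parameter names are hypothetical) to show how the relation-based ingredients (head, nonrecursive side, first complement, branchingness) combine into the three attested possibilities; it is an illustration, not the original algorithm.

```python
# Toy relation-based phi-phrase formation (cf. Nespor & Vogel, 1986;
# Hayes, 1989). A lexical phrase is encoded as a dictionary; the two
# parameters mimic the options (i)-(iii) discussed in the text.
# All structures and names are hypothetical simplifications.

def phi_phrase(lexical_phrase, include_complement=False,
               only_if_nonbranching=False):
    """Group the head with the material on its nonrecursive side and,
    depending on the parameters, with its first complement."""
    phi = list(lexical_phrase["nonrecursive_side"]) + [lexical_phrase["head"]]
    complements = lexical_phrase["complements"]
    if include_complement and complements:
        first = complements[0]
        if not only_if_nonbranching or not first["branching"]:
            phi.extend(first["words"])
    return phi

vp = {"head": "gave",
      "nonrecursive_side": ["quickly"],        # e.g., a preverbal modifier
      "complements": [{"words": ["expensive", "books"], "branching": True}]}

print(phi_phrase(vp))                                             # option (i)
print(phi_phrase(vp, include_complement=True))                    # option (ii)
print(phi_phrase(vp, include_complement=True,
                 only_if_nonbranching=True))                      # option (iii)
```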

Prosodic structure mediates between syntax and phonology, and plays a role in accounting for diverse kinds of phonological phenomena, such as rhythm and intonation (Nespor, 1990; Ladd, 1996). Phonological processes in many languages have been analyzed following the relation-based approach, and for some languages a full account of the prosodic representation of the language has been provided (e.g., Hayes & Lahiri, 1991, for Bengali; Frota, 2000, 2014, for European Portuguese; Tenani, 2002, for Brazilian Portuguese).

2.2.2 Edge-Based Approach

The Edge-based approach of PHT is grounded in the observation that the mapping between syntax and phonology refers only to the edges of designated types of syntactic constituents, namely lexical syntactic heads or syntactic phrases (Selkirk, 1986, 1990; Chen, 1987; Selkirk & Shen, 1990). Within this approach, the crucial syntactic information for the mapping is thus limited to a correspondence between the right or left edges of given syntactic category types in syntactic structure and the right or left edges of given prosodic category types in phonological structure. For example, the formation of phonological phrases is obtained either by a right edge setting, where the right edge of a syntactic phrase coincides with the right edge of the phonological phrase, or by a left edge setting, where the left edge of a syntactic phrase coincides with the left edge of a phonological phrase. These settings are, in principle, independent of whether the language is head-initial or head-final (Selkirk, 1986; Inkelas & Zec, 1995). Illustrative examples are Xiamen and Ewe, which are described as having similar superficial syntactic structure properties, but different phrasal edge settings. The facts of vowel shortening in Chimwiini, described above within the relation-based approach, have also been used to motivate edge-based mapping. The right edge setting derives the prosodic domain of this process as a phonological phrase that includes the lexical head and a following complement, without referring to syntactic structural relations like head/complement. Thus, the edge-based approach has been argued to be a more restrictive theory of the syntax–phonology mapping, since it excludes syntactic structural information from playing a role in the syntax–phonology interface or in phonology.

The edge-based approach of PHT has been developed within Optimality Theory primarily in terms of Alignment constraints, which call for the right/left edge of a given syntactic constituent to align with the right/left edge of a given prosodic constituent (Selkirk, 1995). The sensitivity of phonological structure to branchingness and to focus in some languages has led to proposals of alignment that refer to edges of particular types of constituents, such as branching constituents or focus-marked constituents, thus widening the kind of syntactic information available to mapping constraints (Kubozono, 1993; Truckenbrodt, 1995). Another type of syntax–phonology interface constraint is provided by Wrap constraints, which call for a given syntactic constituent to be contained within a given prosodic constituent (Truckenbrodt, 1995, 1999). For example, vowel lengthening in Chicheŵa occurs at the end of a phonological phrase that corresponds to a VP, including the verb and its complements. This is explained by a Wrap constraint on syntactic phrases that is ranked higher than Alignment. The Align/Wrap account of PHT thus combines demarcational constraints (edge-based) with grouping constraints (cohesion-based). The latter are closer to relation-based views.
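
The interaction of Alignment and Wrap can be made concrete with a small violation-counting sketch. The encoding below is hypothetical (candidates as lists of word lists, XPs as word-index spans, invented names throughout), and is meant only to show how ranking Wrap-XP above Align-XP,R favors a single phonological phrase over a verb and its complements, in the spirit of the Chicheŵa case.

```python
# Toy OT-style evaluation of Align-XP,R and Wrap-XP (cf. Truckenbrodt,
# 1995, 1999). Candidates are phrasings of a word string into phonological
# phrases; XPs are given as (start, end) word-index spans. All encodings
# are hypothetical simplifications for illustration.

def phrase_right_edges(candidate):
    """Word positions at which some phonological phrase ends."""
    edges, pos = set(), 0
    for phrase in candidate:
        pos += len(phrase)
        edges.add(pos)
    return edges

def align_xp_r(candidate, xps):
    """One violation per XP whose right edge is not a phrase right edge."""
    edges = phrase_right_edges(candidate)
    return sum(1 for _, end in xps if end not in edges)

def wrap_xp(candidate, xps):
    """One violation per XP split over more than one phonological phrase."""
    violations = 0
    for start, end in xps:
        overlapping, pos = 0, 0
        for phrase in candidate:
            lo, hi = pos, pos + len(phrase)
            if lo < end and start < hi:        # this phrase overlaps the XP
                overlapping += 1
            pos = hi
        violations += overlapping > 1
    return violations

def evaluate(candidates, xps, ranked):
    """Pick the candidate with the best (lexicographic) violation profile."""
    return min(candidates, key=lambda c: tuple(con(c, xps) for con in ranked))

# 'gave the-book to-the-children': the VP spans all three words and
# contains two nominal complements.
xps = [(0, 3), (1, 2), (2, 3)]                  # VP, NP-object, NP-oblique
candidates = [
    [["gave", "the-book", "to-the-children"]],                  # one phi
    [["gave", "the-book"], ["to-the-children"]],                # two phis
    [["gave"], ["the-book"], ["to-the-children"]],              # three phis
]

print(evaluate(candidates, xps, [wrap_xp, align_xp_r]))   # Wrap >> Align: one phi
print(evaluate(candidates, xps, [align_xp_r, wrap_xp]))   # Align >> Wrap: more phis
```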

As in the relation-based approach, edge-based PHT also ascribe a role to nonsyntactic factors in determining phonological structure, at different levels of phrasing (e.g., Selkirk & Tateishi, 1988; Ghini, 1993; Selkirk, 2000; Elordieta, Frota, & Vigário, 2005). These have been mostly expressed by constraints on the size of phonological constituents, and speech rate effects, which produce phonological structure where it would not be called for by (the relevant) syntactic information (see section 3.1).

2.2.3 Match Theory

Match Theory is the most recent development within PHT (Selkirk, 2011; Selkirk & Lee, 2015a). As in the previous approaches, the effect of syntactic structure on phonology is indirect, given that the constituents that are referred to by phrasal-sensitive phonological phenomena are phonological constituents. Therefore, the effect of syntax is restricted to the relation between syntactic structure and phonological (prosodic) structure. However, unlike previous approaches, in Match Theory there is a grammatically defined correspondence relation between syntactic structure and prosodic structure so that designated syntactic constituent types coincide with designated constituent types of prosodic structure. This entails that, in the default case, prosodic structure matches, or is isomorphic, with syntactic structure and thus displays a recursive organization (see section 3.3). The precise definition of the designated syntactic constituent types that are faithfully reflected in prosodic constituency continues to be a matter of debate (Selkirk, 2011; Selkirk & Lee, 2015a; Dobashy, 2016).

It is assumed that Spell-out domains of phases or constituent types defined in syntactic phase theory correspond to phonological constituents. The minimal set of syntactic constituent types proposed includes the notions of clause, syntactic phrase, and word, which respectively correspond to the prosodic constituents intonational phrase, phonological phrase, and prosodic word. Two notions of clause have been proposed to be relevant: a syntactic constituent of the clause type, namely the complement of complementizer, and the constituent that defines a speech act, also called the Illocutionary Clause, or Force Phrase, or Comma Phrase (Rizzi, 1997; Potts, 2005; Selkirk, 2005, 2011; Truckenbrodt, 2015). In yet another version, the syntactic clause is defined in a more flexible way as the highest projection in the root clause that is headed by overt verbal material, together with its specifier (Hamlaoui & Szendroi, 2015). How to define the type of syntactic phrase that is relevant to phonology is also an issue that remains to be resolved. The appeal to the notion XP suggests that the distinction between lexical and functional categories may not be relevant to Match Theory, unlike in other PHT approaches, since both lexical and functional phrase projections may be considered in syntax–phonology correspondence constraints. Under syntactic phase theory, the notion of “complement of v” (the counterpart, at the phrasal level, of complement of complementizer) has been proposed. However, neither notion seems to capture the relevant phonological phrase structures (Selkirk, 2011). Finally, the syntactic information that defines the matching domain for prosodic words is unclear and calls for further research.
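
The default correspondence posited by Match Theory can be sketched as a naive tree-to-tree mapping in which designated clause, phrase, and word nodes project intonational phrases, phonological phrases, and prosodic words, respectively. The sketch below is a toy rendering under the simplest assumptions (which projections count as designated, the tree encoding, and all labels and names are hypothetical), before any phonological markedness constraints alter the result.

```python
# Toy Match mapping (cf. Selkirk, 2011): clause -> intonational phrase,
# XP -> phonological phrase, word -> prosodic word. Syntactic trees are
# (label, children) pairs; prosodic nodes mirror the matched syntactic
# nodes. Which projections count as designated is an assumption made
# here only for illustration.

from pprint import pprint

CLAUSES = {"CP"}
PHRASES = {"DP", "NP", "VP", "PP"}

def match(node):
    """Return the list of prosodic nodes projected by this syntactic node."""
    label, children = node
    if isinstance(children, str):                       # a word
        return [("omega", children)]
    matched = [p for child in children for p in match(child)]
    if label in CLAUSES:
        return [("iota", matched)]
    if label in PHRASES:
        return [("phi", matched)]
    return matched          # non-designated node: children are passed up

tree = ("CP", [("C", "that"),
               ("TP", [("DP", [("D", "the"), ("N", "farmer")]),
                       ("VP", [("V", "fed"),
                               ("DP", [("D", "the"), ("N", "ducks")])])])])
pprint(match(tree))
# -> [('iota', [('omega', 'that'),
#               ('phi', [('omega', 'the'), ('omega', 'farmer')]),
#               ('phi', [('omega', 'fed'),
#                        ('phi', [('omega', 'the'), ('omega', 'ducks')])])])]
```

Note that the output is a recursive prosodic tree mirroring the syntactic embedding, which is exactly the default isomorphism that phonological constraints may then disrupt.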

Although Match Theory essentially predicts that prosodic constituency is a reflection of syntactic constituency, mismatches between syntactic and prosodic phrasing may arise as a consequence of phonological factors such as size constraints or speech rate effects. Moreover, it may also be the case that prosodic structure, besides sharing some properties with syntactic structure, has properties that differentiate it from syntactic representations. Among the latter are configurational properties like prosodic sisterhood (sister nodes are of the same prosodic category; Ladd, 1996; Myrberg, 2013), or the requirement that there are no empty phonological constituents (Nespor & Vogel, 1986; Elfner, 2012). Most of these phonological constraints, though expressed using different terms, are common to all PHT approaches. Within Match Theory, they provide the essential argument for an independent prosodic constituent structure.

2.3 Current Research

Some research comparing different approaches to the syntax–phonology interface has been produced, but detailed empirical comparisons and discussions within and across languages are still missing. Studies that have compared relation-based and end-based approaches often arrive at opposite conclusions (e.g., Bickmore, 1990, on Kinyambo; Cho, 1990, on Korean). It has been suggested that some languages are more amenable to one or the other approach, and that languages may also combine both, with some prosodic constituents being more end-based and others more relation-based (Chen, 1990; Inkelas & Zec, 1995; Ghini, 1993). Such a scenario could be a reflection of demarcation and grouping as driving forces of the organization of speech into constituents. Studies comparing the edge-based approach and Match Theory have also reached contradictory conclusions (e.g., Cheng & Downing, 2012; Truckenbrodt & Féry, 2015). As to the DRT and the PHT, it has been proposed that there might be two strategies available for mapping phonological structure from syntactic structure (Elordieta, 2008). Why different possibilities may be at play, across and/or within languages, is yet to be understood. Most past and current research has focused on a restricted set of phonological processes in a limited set of languages, and on a particular level of representation/constituent type for a given language. Few studies have examined the full range of phonological representations in a given language. Even rarer is the attempt to simultaneously consider different types of phonological phenomena to test proposals of syntax–phonology mapping (see section 4). Continued research on these issues is needed to find principled ways of explaining the syntax–phonology interface at all relevant levels within and across languages.

3. Phonological Representations

How phonological (prosodic) representations are constructed and what is the set of possible and attested prosodic structures in language are central questions in research on the syntax–phonology interface and prosodic phonology. Answers to these questions vary according to theories of the syntax–phonology interface. Even so, some shared aspects can be found which suggest a common ground across PHT.

3.1 Phonological Constituents: How They Are Constructed

There is general agreement that the structure of phonological representation at the word level and above is a hierarchy of phonological constituents that results from the interaction of a limited set of morphosyntactic information with phonological principles or constraints (e.g., Nespor & Vogel, 1986; Ghini, 1993; Selkirk, 2000, 2011). The syntactic basis of prosodic constituents consists mostly of generalizations over morphosyntactic categories and structures, so that a prosodic constituent may not be built with reference to specific categories of words or phrases. However, the precise syntactic information that is relevant to phonology depends on the particular theory of the syntax–phonology interface and (the particular version of) the theory of syntax assumed to account for syntax–phonology mapping (see section 2.2). Phonological principles or constraints form the phonological basis of prosodic constituents. Across PHT, eurhythmic conditions have been acknowledged, including constituent size/length/weight, balance/symmetry/uniformity, and particular directions of asymmetries, as well as speech rate. Many of these factors have been shown to contribute to prosodic constituency both at the word and phrasal levels. For example, size restrictions determine that a given prosodic constituent contains a minimum, a maximum, or an exact number of constituents of a lower level (Nespor & Vogel, 1986; Ghini, 1993; Selkirk, 2000, 2011; Ito & Mester, 2003). A common size restriction is prosodic binarity, shown to be relevant at the foot, prosodic word, and prosodic phrasal levels in several languages. Prosodic binarity requires that a constituent be structurally binary. Size restrictions expressed in number of syllables, which may not be accounted for by prosodic binarity, have also been described in various languages (Delais-Roussarie, 1996; Jun, 2003; Elordieta, Frota, & Vigário, 2005; Frota, 2014). Preferences in terms of relative size/weight determine that prosodic constituents of similar size are formed (Nespor & Vogel, 1986; Ghini, 1993; Frota, 2000; Sândalo & Truckenbrodt, 2002; D’Imperio et al., 2005). An alternative to similar-size constituents is provided by phonologically constrained asymmetries. The most common one determines that constituents in prominent positions, i.e., heads, are structurally complex or longer/heavier than non-heads (Dresher & Van der Hulst, 1998; Ghini, 1993; Frota, 2000; Prieto, 2005). Besides head-dependent asymmetries, edge-dependent asymmetries have recently been proposed, such as the requirement that a prosodic constituent start with a strong constituent, that is, one that is not lower in prosodic structure than the constituent that immediately follows (Selkirk, 2011). Speech rate also impacts on prosodic constituency: increasing speech rate promotes larger constituents, so that size and balance restrictions may depend on tempo (Nespor & Vogel, 1986; Ghini, 1993; Prieto, 2005).
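
Size restrictions of this kind can be stated as simple well-formedness checks over prosodic constituents. The sketch below is a toy illustration (the encodings, the syllable threshold, and the constraint names are hypothetical, not a published constraint set) of minimal and maximal binarity and of a syllable-count limit on phonological phrases.

```python
# Toy size/markedness checks on phonological phrases (in the spirit of
# Ghini, 1993; Selkirk, 2000; Elordieta, Frota, & Vigario, 2005).
# A phi-phrase is encoded as a list of prosodic words, each with a
# syllable count. All encodings and thresholds are hypothetical.

def binarity_min(phi):
    """Violated if the phi contains fewer than two prosodic words."""
    return int(len(phi) < 2)

def binarity_max(phi):
    """Violated if the phi contains more than two prosodic words."""
    return int(len(phi) > 2)

def max_syllables(phi, limit=8):
    """Violated if the phi exceeds a (language-specific) syllable limit."""
    return int(sum(word["syllables"] for word in phi) > limit)

phrasing = [
    [{"form": "the-painting", "syllables": 3}],                    # short phi
    [{"form": "was-sold", "syllables": 2},
     {"form": "to-a-collector", "syllables": 5},
     {"form": "yesterday", "syllables": 3}],                       # long phi
]

for phi in phrasing:
    print([w["form"] for w in phi],
          binarity_min(phi), binarity_max(phi), max_syllables(phi))
```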

The phonological basis of prosodic constituents has been implemented within Optimality Theory as phonological markedness constraints that establish the well-formedness of a given prosodic constituent (Selkirk, 2000, 2011). These constraints are proposed to interact with the morphosyntax–phonology mapping constraints, which express the syntactic basis of prosodic constituents, to produce phonological constituency. Importantly, phonological constraints play a crucial role in determining higher (e.g., intonational phrase, phonological phrase) and lower (e.g., prosodic word) phonological constituents.

3.2 Phonological Constituents: How Many Are There?

A major research question has been determining how many prosodic category types there are in the prosodic hierarchy. Proposals have differed according to theories of the syntax–phonology mapping and theories of the structural relations between constituents of prosodic structure (see section 3.3). However, three basic prosodic categories relevant to the syntax–phonology interface are commonly defined: the prosodic word (PW), the phonological phrase (PhP), and the intonational phrase (IP). Each of these prosodic categories bears a relation to a syntactic constituent type, respectively a word-like (lexical) morphosyntactic unit, a phrase-like syntactic unit, and a clause-like syntactic unit (Nespor & Vogel, 1986; Truckenbrodt, 1995; Selkirk, 2011). Beyond these, the repertoire of possible prosodic constituent types has varied considerably across theories, languages, and studies. By and large, the variation results from the addition of word-like or phrase-like categories that respectively fill up the space between PW and PhP and between PhP and IP. A simplified picture of this variation is provided in Table 1.

Table 1. Prosodic Constituent Types and Corresponding Morphosyntactic Constituent Types

Prosodic Constituent Types → Corresponding Morphosyntactic Constituent Types

  • Intonational Phrase (IP) (Nespor & Vogel, 1986; Selkirk, 2011; Ito & Mester, 2013) → Root sentence, clause

  • Phonological Phrase (PhP) (Nespor & Vogel, 1986; Truckenbrodt, 1999); Major Phrase (Selkirk, 2000, 2005); Accentual Phrase (Jun, 1996; Delais-Roussarie et al., 2015) → XP, maximal projection of lexical categories

  • Minor Phrase (Kubozono, 1993; Selkirk, 2005) → Syntactically branching constituent

  • Accentual Phrase (Jun, 2005); Clitic Group (Nespor & Vogel, 1986; Hayes, 1990); Minimal Phrase (Condoravdi, 1990); Composite Group (Vogel, 2009); Prosodic Word Group (Vigário, 2010); Recursive Prosodic Word (Kabak & Revithiadou, 2009); Prosodic Word (PW) (Nespor & Vogel, 1986; Downing, 1999) → Syntactic terminal node, lexical word (includes certain types of compounds, may include postpositions, clitics, etc.)

  • Prosodic Word (PW) (Nespor & Vogel, 1986; Peperkamp, 1997; Hall, 1999; Vigário, 2003); Prosodic stem (Downing, 1999) → Stem plus affixes (also nontransparent compounds, some affixes in certain phonological conditions)

The definition of possible phonological constituents goes hand in hand with the definition of the properties of prosodic structure. If recursive structures are generally assumed (Ito & Mester, 2009, 2012, 2013; Elfner, 2012), different layers of recursion of the same basic prosodic category (e.g., PhP) may form different prosodic constituents (e.g., maximal PhP, minimal PhP, non-minimal PhP). What is the set of prosodic category types within and across languages remains an open empirical question.

3.3 On Recursion

An answer to the question about how many prosodic category types are possible crucially depends on another central research theme in phrasal phonology, namely to what extent phonological representations have formal properties distinct from syntactic representations. At one extreme, proposals have been developed that posit that phonology and syntax are inherently distinct in that phonological representations consist of strings whereas syntactic representations consist of trees (Chomsky & Halle, 1968; Selkirk, 1974, 1981; Idsardi, 1992; Neeleman & Koot, 2006). Generative phonology, unlike generative syntax, started as a theory of strings structured through boundary symbols (Chomsky & Halle, 1968). Recursion naturally follows as a property of tree-based representations, but not of strings. Thus, syntactic representations are recursive, whereas phonology lacks generalized recursion in the syntactic sense (Neeleman & Koot, 2006). Boundaries have also been appealing for accounts of speech planning, which favor a view of prosodic phrasing based on processing requirements instead of grammatical mappings (Watson & Gibson, 2004). Most phonological theories, however, have departed from string representations to propose hierarchical structures: “we speak trees, not strings” (Pierrehumbert & Beckman, 1988, p. 160). Prosodic Hierarchy Theories (PHT) are tree-based, but whether recursion is allowed or seen as a natural property of phonology is a matter of contention.

The idea that syntax and phonology are related but that phonological representations have formal properties that distinguish them from syntactic representations in fundamental ways has been developed both by relation-based and edge-based PHT (see section 2.2). These properties were first captured by the Strict Layer Hypothesis, which enforces strictly layered prosodic trees (Selkirk, 1984; Nespor & Vogel, 1986). Thus, recursion is ruled out, and in particular recursive representations that would mirror syntax, like (i) domination relations in which a constituent of category B is dominated by and contained within a constituent of category A while also dominating a constituent of category A, as illustrated in (1a), or (ii) sisterhood relations involving nodes of different categories, as in (1b–c). A reformulation of the Strict Layer Hypothesis replaced the all-in-one restriction on prosodic representations by a set of constraints (Selkirk, 1995). These domination constraints included Layeredness (which rules out the configuration in 1a), Exhaustivity and NoLevelSkipping (which ban the configurations in 1b–c), NonRecursivity (which penalizes a node dominating another of the same category, as in 1c–d), and Headedness (which captures the hierarchical organization whereby a constituent must dominate a constituent of the immediately lower level, as in 1e).

(1) Schematic tree configurations (a–e) illustrating the domination and sisterhood relations referred to in the text (diagrams not reproduced).

In this view, recursion is highly marked and only arises due to pressures on phonological representations to fulfill other requirements. Even models that make extensive use of recursive constituents, as in Ito and Mester (2009, 2012), generate these structures by violations of constraints against recursion. Two types of recursion may be distinguished: unbalanced and balanced recursion (Van der Hulst, 2010). Forms of bounded recursion, usually restricted to a limited set of structures, have been acknowledged and usually illustrate the first type, depicted in (1c). This is the case for the prosodic phrasing of affixes and clitics with words in many languages, which involves adjunct-like representations and thus level skipping (e.g., Booij, 1996; Peperkamp, 1997; Vigário, 2003). Such phrasing may be understood as a sort of last-resort configuration arguably due to phonological markedness constraints related to well-formed PW, which require stress. A different case is that of balanced recursion (Ladd, 1996; Frota, 2000), a configuration where a prosodic node dominates nodes of the same category type, illustrated in (1d). This has been recently captured by the prosodic sisterhood constraint EqualSisters (Myrberg, 2013). Unlike unbalanced recursion, balanced recursion has no analogue in syntactic structure.
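
The reformulated constraints can be read as structural checks on prosodic trees. The following sketch flags Layeredness, Exhaustivity (level skipping), and NonRecursivity violations in a candidate structure, under a toy three-level hierarchy and a hypothetical tree encoding; it is an illustration, not a published constraint implementation.

```python
# Toy checks on prosodic trees for some of the constraints of
# Selkirk (1995). Trees are (category, children) pairs; terminals are
# strings. The hierarchy and encodings are hypothetical simplifications.

LEVEL = {"IP": 3, "PhP": 2, "PW": 1}

def violations(node, parent=None):
    """Return a list of (constraint, category, parent) violation records."""
    category, children = node
    found = []
    if parent is not None:
        if LEVEL[category] > LEVEL[parent]:
            found.append(("Layeredness", category, parent))       # cf. (1a)
        if LEVEL[category] == LEVEL[parent]:
            found.append(("NonRecursivity", category, parent))    # cf. (1c-d)
        if LEVEL[category] < LEVEL[parent] - 1:
            found.append(("Exhaustivity", category, parent))      # level skipping
    if not isinstance(children, str):
        for child in children:
            found.extend(violations(child, category))
    return found

# A PW adjoined directly to an IP (skipping PhP) and a PhP inside a PhP:
candidate = ("IP", [("PW", "however"),
                    ("PhP", [("PhP", [("PW", "the"), ("PW", "farmer")]),
                             ("PW", "left")])])
for v in violations(candidate):
    print(v)
# -> ('Exhaustivity', 'PW', 'IP') and ('NonRecursivity', 'PhP', 'PhP')
```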

Match Theory is at the other extreme of the spectrum by proposing not only that phonology is tree-based, but that phonological representations are recursive by default like syntactic representations, given the inherent relation between prosodic categories and syntactic categories (Selkirk, 2011; Selkirk & Lee, 2015a). Departures from syntactic-like recursion need to be explained by phonological constraints on domination and sisterhood relations between constituents of the prosodic hierarchy.

A consequence of the lack of agreement on the formal properties of possible prosodic structures is the difficulty in defining phonological constituents. Theories of constituency that posit generalized recursion tend to equate different levels of prosodic phrasing with different constituents that function as prosodic domains for particular phonological processes in a given language (Ito & Mester, 2009, 2012; Selkirk, 2011; Elfner, 2012; Elordieta, 2015). For example, Ito and Mester (2012) suggest that in Japanese the minimal PhP is the domain of accentual culminativity, as there can be at most one accent per minimal PhP, unlike in the other levels of the same constituent, that is, non-minimal PhP or maximal PhP. In contrast, theories of constituency that make a restricted use of recursion tend to differentiate levels of constituency from phrasing levels, as only the former are domains for categorical phonological processes (Downing, 1999; Frota, 2000, 2012; Vigário, 2010; Downing & Kadenge, 2015). If two phonological processes refer to two different domains, then two prosodic category types are involved (for example, PW and a higher-level constituent), and not recursion-based constituents defined as the higher and lower levels of the same prosodic type (e.g., minimal and maximal PW).

4. Evidence

The primary source of evidence for prosodic structure consists of phonological phenomena whose distribution is a function of the organization of speech into prosodic constituents. These phenomena are described solely on the basis of phonological information, as they involve sound patterns that are sensitive to properties of phonological representation but not to morphosyntactic information (unlike in the case of most of the processes mentioned in section 2.1).

4.1 Types of Prosodic Structure-Sensitive Phenomena

There is a wide range of phenomena that have been shown to appeal to prosodic constituent structure (e.g., Nespor & Vogel, 1986; Inkelas & Zec, 1990; Hayes & Lahiri, 1991; Frota, 2000, 2012; Horne & Oostendorp, 2005; Frota & Prieto, 2007; Árnason, 2009; Vigário, 2010; Borowsky et al., 2012; Selkirk & Lee, 2015b). They are characterized as sensitive to size/length/weight and eurhythmic factors as well as speech rate, and thus reflect the phonological basis of prosodic constituency. Several phenomena may converge to refer to a particular prosodic constituent in a given language, and different constituents are distinguished as the domain of different (sets of) phenomena.

Prosodic structure-sensitive phenomena include sandhi processes (assimilations, lenitions, fortitions, deletions, insertions, and so on) involving segments and tones. For example, a sandhi rule of Bengali phrasal phonology is /r/ assimilation, where /r/ can optionally assimilate to a following coronal consonant (Hayes & Lahiri, 1991). This process is sensitive to PhP domains, as assimilation is only obtained within the PhP. In European Portuguese, vowel sandhi processes such as vowel deletion or semivocalization apply within the IP (Frota, 2000). An illustration of tonal rules sensitive to phonological phrasing is provided by many Bantu languages, like Chicheŵa (Kanerva, 1990; Truckenbrodt, 1995). For example, a rule of tone retraction shifts the tone of the final syllable to the second mora of the penultimate syllable when the final syllable ends a PhP. The phonology and phonetic realization of segments and tones have also been shown to be sensitive to prosodic constituent-initial strengthening at the PW and phrasal levels (Keating et al., 2003; Vigário, 2003; Pan, 2007). Similarly, prosodic constituent-final lengthening or shortening has been described in many languages (Nespor & Vogel, 1986; Turk & Shattuck-Hufnagel, 2007).

Rhythmic and intonation phenomena are also sensitive to prosodic structure. Stress clash resolution strategies, for example, apply within a given prosodic domain and are constrained by prosodic conditions on the organization of PW and phrase-level prominences (Nespor & Vogel, 1989; Frota, 2000; Prieto, 2005). The distribution of intonation events and the realization of their tonal targets also reflect prosodic structure. The relation between pitch accents and prosodic constituency may vary across languages, so that a pitch accent is required on each PW, as in Cairene Arabic (Hellmuth, 2007); on each Prosodic Word Group, as in Brazilian Portuguese (Vigário & Fernandes-Svartman, 2010); on each PhP, as in Bengali (Hayes & Lahiri, 1991) and Northern European Portuguese (Vigário & Frota, 2003); or on each IP, as in Standard European Portuguese (Frota, 2000, 2014). Languages with a lower prosodic domain for accentuation thus show a denser distribution of pitch accents, whereas languages with a higher prosodic domain for accentuation show a sparse distribution of pitch accents. Pitch scaling, pitch reset, and final lowering are phenomena that also refer to prosodic constituency (Ladd, 1996; Truckenbrodt, 2007b). This is shown, for example, by pitch scaling in German, where prosodic constituents define reference lines for tone height and partial reset is found at the left edge of IP (Truckenbrodt, 2002; Truckenbrodt & Féry, 2015).

Stress assignment has also been described as evidence for prosodic structure. For example, in Yidin the presence of stress shows that two consecutive suffixes (like [daga-ɲu] in [gumári-dagá-ɲu] ‘to have become red’) behave as a PW (Nespor & Vogel, 1986). In Chimwiini, vowel length only surfaces in words located at the right edge of a PhP, as this is the position of phrasal stress, and thus phrasal stress identifies the right edge of this prosodic constituent (Selkirk, 2011). More generally, and depending on the language, the location of prosodic prominence is related to the right or left edge of specific prosodic constituents (like the PW, PhP, or IP; Nespor & Vogel, 1986; Selkirk, 2011).

This wide range of phenomena provides potential cues to prosodic structure. Such cues show great variation across languages, given that they may be present or absent in a given language, and vary with respect to the types of prosodic categories they signal. Zerbian’s (2007) study of cues to prosodic phrasing across Bantu languages illustrates how different cues can signal the same level of constituency in different languages (e.g., the IP is cued by blocking of high tone at the right edge in Northern Sotho, and by deletion of high tones within the IP domain in Kinyambo), and how the same phenomena may cue boundaries of different constituent types (e.g., penultimate lengthening signals the PhP in Chicheŵa, but the IP in Northern Sotho). Final lengthening at the IP level, although widespread, is apparently absent in some languages, like Chimwiini, Estonian, and Finnish. In Finnish, the preservation of the language-specific quantity system seems to constrain final lengthening (Nakai et al., 2009). Prosodic structure-sensitive phenomena thus appear to be mostly driven by language-particular phonologies.

4.2 Properties of Prosodic Structure-Sensitive Phenomena

Prosodic structure-sensitive phenomena have been identified as belonging to one of three kinds of prosodic rules or constraints (Selkirk, 1980; Nespor & Vogel, 1986): domain span, domain limit or domain edge, and domain juncture. The structural properties of these three kinds are described in Table 2, following the notation in Selkirk and Lee (2015a), where Ψ and χ stand for the phonological properties involved in the phenomenon and “(. . .)π” refers to the relevant prosodic structure conditions. Ψ and χ denote the different types of phonological phenomena that appeal to prosodic constituent structure, and include segmental features, tonal features, intonational features, representations of tonal associations, and representations of rhythmic structures (see section 4.1). π is a variable over the set of prosodic category types in prosodic structure (see section 3.2).

Table 2. Structural Properties of Prosodic Rules/Constraints

Rule Type → Properties

  • Domain span → (. . . Ψ . . .)π

  • Domain limit/edge → . . . χ (Ψ . . .)π or (. . . Ψ)π χ . . .

  • Domain juncture → (. . . (. . . χ)π2 (Ψ . . .)π2 . . .)π1

/r/ assimilation in Bengali (Hayes & Lahiri, 1991) and some of the vowel sandhi processes in European Portuguese (Frota, 2000) are examples of domain span rules, sensitive to the PhP and the IP domains, respectively. In European Portuguese, semivocalization of stressless high vowels (i/u) followed by another vowel applies both within and across words and PhPs, but not across IPs: the final vowel of boato surfaces as the glide [w] in (o boato alastrou)IP ‘the rumor spread,’ but not in (alastrou o boato)IP (infelizmente)IP ‘the rumor spread, unfortunately,’ where an IP boundary separates boato from the following vowel-initial word.
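
Domain span behavior of this sort can be schematized as a rule that scans word junctures inside a constituent but never across its boundary. The sketch below is a toy rendering of the European Portuguese pattern (orthographic forms, a tiny hypothetical lexicon, and invented names stand in for the actual segmental analysis).

```python
# Toy domain span rule: an unstressed word-final high vowel glides before
# a following vowel-initial word within the IP, but not across an IP
# boundary (cf. Frota, 2000). Words are kept orthographic, only a tiny
# hypothetical lexicon is listed, and '[w]' marks the glided output.

GLIDING = {"boato": "boat[w]"}                 # toy lexicon of target words
VOWELS = {"a", "e", "i", "o", "u"}

def semivocalize(ips):
    """Apply the rule at word junctures inside each IP (a list of words)."""
    output = []
    for ip in ips:
        new_ip = list(ip)
        for i in range(len(ip) - 1):           # junctures within the IP only
            if ip[i] in GLIDING and ip[i + 1][0] in VOWELS:
                new_ip[i] = GLIDING[ip[i]]
        output.append(new_ip)
    return output

# Within one IP, gliding applies: (o boato alastrou)IP
print(semivocalize([["o", "boato", "alastrou"]]))
# -> [['o', 'boat[w]', 'alastrou']]

# Across an IP boundary it does not: (alastrou o boato)IP (infelizmente)IP
print(semivocalize([["alastrou", "o", "boato"], ["infelizmente"]]))
# -> [['alastrou', 'o', 'boato'], ['infelizmente']]
```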

Tone retraction in Chicheŵa, which occurs at the right edge of PhP (Kanerva, 1990), and partial pitch reset in German, found at the left edge of IP (Truckenbrodt & Féry, 2015), illustrate domain edge rules. Prosodic constituent-initial strengthening and prosodic constituent-final lengthening or shortening phenomena (Hayes, 1989; Beckman & Edwards, 1990; Turk, 2012), which may be sensitive to different prosodic category types, are also instances of domain edge rules. In French, for example, different degrees of initial strengthening have been found for different prosodic constituents (PW, Accentual Phrase, IP; Keating et al., 2003).

Stress clash resolution strategies, at least in some accounts, require reference to more than one prosodic category type, as in the case of stress shift or deletion/weakening of the first stress in the clash between PW within the same PhP, or in the case of stress-strengthening when the adjacent stresses belong to different PhP within the same IP (Nespor & Vogel, 1989; Prieto, 2005). These illustrate domain juncture rules. Other examples of domain juncture rules, involving different prosodic category types from the syllable-foot levels to phrasal levels, are provided in Nespor and Vogel (1986) and Kula and Bickmore (2015).
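
Domain juncture sensitivity can likewise be sketched as a rule that inspects adjacent prosodic words and applies only when both fall inside the same higher constituent. The toy function below (hypothetical encodings, with stress marked as 1 on a syllable) retracts the first of two clashing stresses within a phonological phrase, leaving clashes across phrase boundaries untouched; it is a schematic illustration, not a full account of the cited strategies.

```python
# Toy domain juncture rule: stress clash between adjacent prosodic words is
# resolved (by retracting the first stress) only when both words are inside
# the same phonological phrase (cf. Nespor & Vogel, 1989). A prosodic word
# is a list of syllables, with 1 marking a stressed syllable; encodings and
# names are hypothetical.

def resolve_clashes(php):
    """Within one PhP, retract the first stress of each clash if possible."""
    words = [list(w) for w in php]
    for i in range(len(words) - 1):
        left, right = words[i], words[i + 1]
        clash = left[-1] == 1 and right[0] == 1     # adjacent stressed syllables
        if clash and len(left) > 1:
            left[-1], left[-2] = 0, 1               # shift the stress leftward
    return words

# A 'thirtEEN MEN'-type clash inside one PhP: the first stress retracts.
print(resolve_clashes([[0, 1], [1]]))               # -> [[1, 0], [1]]

# If the same two words belong to different PhPs, the rule sees no juncture.
print([resolve_clashes([[0, 1]]), resolve_clashes([[1]])])
# -> [[[0, 1]], [[1]]]
```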

Differing from earlier work, a more restrictive theory of sensitivity to prosodic structure has been proposed in Selkirk and Lee (2015a). In this approach, phonological phenomena may only refer to a single prosodic constituent. This theory, by excluding reference to multiple prosodic categories, excludes the possibility of domain juncture rules. It remains an open research question whether all prosodic structure-sensitive phenomena can be expressed with the structural descriptions in Table 2, and whether these may be minimally reduced to domain span and domain edge constraints.

A further research issue is whether a unified theory of the hierarchical organization of speech may provide a common understanding of the various manifestations of prosodic structure, that is, of all attested prosodic structure-sensitive phenomena, including prominence and intonation phenomena, segmental and tonal sandhi, speech timing, domain-initial strengthening, and so on. Proposals that differentiate accentual phenomena from other prosodic-sensitive phenomena (Gussenhoven & Rietveld, 1992; Himmelmann & Ladd, 2008) or rhythmic and speech timing phenomena from others (van der Hulst, 2009) are not infrequent. Work on the prosodic hierarchy has traditionally been rule-based (Selkirk, 1984, 1986; Nespor & Vogel, 1986; Hayes, 1989; Truckenbrodt, 1999), prominence-based (Hayes, 1984; Nespor, 1990; Beckman & Edwards, 1990, 1994), or intonation-based (Beckman & Pierrehumbert, 1986; Jun, 2005). There are still not many languages studied where several types of prosodic structure-sensitive phenomena are examined in relation to the possible set of prosodic category types that characterize the language. In studies where this is done, it is often the case that a single hierarchy of constituents accounts for the phenomena observed (Hayes & Lahiri, 1991; Jun, 1996; Frota, 2000, 2014; Árnason, 2009). An integrated theory of prosodic structure remains a goal to be pursued.

4.3 Other Evidence

Evidence for prosodic structure also stems from work in language processing and language development. Investigations of the processing of prosodic structure have shown that adult listeners are sensitive to different prosodic category types (e.g., Li & Yang, 2008, on Mandarin Chinese), and that prosodic structure plays a role in word segmentation, lexical processing, and the computation of syntactic structure (Cutler, Dahan, & Donselaar, 1997; Christophe et al., 2004; Millotte, Wales, & Christophe, 2007). Boundaries of prosodic phrases, in particular, are instrumental to lexical access and to establishing perceptual groupings into constituent structure. Besides local cues to prosodic boundaries, other cues that signal hierarchical prosodic structures, such as intonation and speech timing, have been shown to be used in artificial grammar learning and to outrank segmental transitional probabilities (Langus et al., 2012). On the production side, prosodic boundaries have been shown to be related to speech planning and recovery in language processing (Watson & Gibson, 2004). Prosodic constituency, for example the number of PW within a phrase, also plays a role in language production planning (Wheeldon & Lahiri, 1997; Lahiri & Wheeldon, 2011). Infant listeners and young children are also sensitive to prosodic structure, including both higher and lower prosodic phrases (Jusczyk, Hohne, & Mandel, 1995; Gout, Christophe, & Morgan, 2004; Homae et al., 2007; Shukla, White, & Aslin, 2011). They are able to use prosodic boundaries to constrain lexical access and syntactic processing. They are also especially sensitive to units at the edges of prosodic phrases. Building on this information, models of language learning have shown that several aspects of prosodic structure may play a key role in language development (Gutman et al., 2015).

The findings from language processing and language development, as well as the continued appeal to behavioral and non-behavioral experimental methods and paradigms for the analysis of prosodic representations, promise to significantly contribute to the understanding of prosodic structure.

5. Interactions Between Syntax and Phonology

There are different models of the interaction between syntax and phonology. The classical model is an input-output model where syntax feeds phonology. Most approaches to the syntax–phonology interface have assumed this unidirectional model, which predicts an asymmetrical relationship: specific types of syntactic information are available to phonology, whereas syntax is phonology-free (Nespor & Vogel, 1986; Selkirk, 1986; Selkirk & Lee, 2015a). They thus conform to Pullum and Zwicky’s (1988) Principle of Phonology-Free Syntax. Under this view, prosodic structure formation is post-syntactic. Other models allow for different kinds of interaction. In models that remain consistent with the unidirectional approach, the interaction is restricted to a set of specific syntactic phenomena (stylistic or discourse-related) that may be influenced by a limited set of phonological phenomena (heaviness and rhythm). This influence amounts to phonology filtering possible syntactic outputs (Guasti & Nespor, 1999). In contrast with the unidirectional approach, models of mutual interaction see the interface as unlimited in the direction and types of syntax–phonology connections, with syntax impacting on phonology and phonology impacting on syntax. These models are parallelist, with syntactic and phonological representations coexisting and being evaluated at the same time, and phonological constraints possibly outranking syntactic constraints (Zec & Inkelas, 1990; Inkelas & Zec, 1995; Samek-Lodovici, 2005; Bennett et al., 2016). The parallelist view is consistent with, though not compelled by, the minimalist theory of syntax with Spell-out by phase (Selkirk & Lee, 2015a). Crucially, mutual interactions are proposed to occur only at the interface, between syntactic and prosodic representations, not between syntax and fully fledged phonological representations. Therefore, syntax is not prosody-free, but a weaker form of phonology-free syntax is assumed.

Challenges to the classical input-output model come from data from several languages showing that phonological conditions may constrain word order. Different types of syntactic constructions have been described to be affected, such as topicalization, heavy NP shift/reordering of complements, scrambling, parenthetical placement, coordinate structures, and focus distribution (Zec & Inkelas, 1990; Schütze, 1994; Zubizarreta, 1998; Guasti & Nespor, 1999; Frota & Vigário, 2002; Samek-Lodovici, 2005; Skopeteas, Féry, & Asatiani, 2009; Lohmann, 2014; Agbayani, Golston, & Ishii, 2015). The phonological factors that constrain word order are prosodic conditions defined in terms of size, length, and weight. In most cases, the phonological constraints are edge-related, as they affect constituents placed at the right or left edges. For example, weight effects on topicalization in Serbo-Croatian constrain the topicalized phrase, placed at the left edge, whereas in European Portuguese weight effects affect the constituent that ends up at the right edge and not the topicalized phrase at the left edge (Zec & Inkelas, 1990; Frota & Vigário, 2002). Cases of prosodic movement of PW or PhP to the edges of prosodic constituents to meet prominence requirements, such as those described in Classical Greek, Latin, and Ukrainian (Agbayani & Golston, 2010, 2016; Teliga, Agbayani, & Golston, 2016), seem to follow similar prosodic restrictions. It is an open research question whether the size/weight effects on word order may be modeled as prosodic requirements on prosodic edges, along the lines of phonological markedness size constraints on edges (Elordieta et al., 2005; Prieto, 2005) and constraints enforcing edge-dependent asymmetries (Selkirk, 2011; see section 3.1).

Other data that have challenged the serialist classical model include clitic distribution, as in second-position clitics or other types of clitic placement driven by prominence-related requirements (Halpern, 1995; Barbosa, 1996; Anderson, 2005; Elfner, 2012; Bennett et al., 2016). Yet another source of challenge comes from phenomena that involve segmental identity together with reference to prosodic structure and to aspects of syntactic representation. In Ancient Greek, constructions with two homophonous clitics within a PW are disfavored (Golston, 1995). Many languages show a phenomenon of deletion under identity, whereby a PW identical to another PW is deleted (Booij, 1985; Wiese, 1992; Peperkamp, 1997; Vigário & Frota, 2002). In European Portuguese, the phenomenon applies in coordinate structures, targeting the first element of the coordination, and the remnant left after deletion must itself be a PW. For the case of clitic homophony, an alternative analysis in terms of lexical allomorphy has been proposed (Guasti & Nespor, 1999); however, such an analysis has not been extended to deletion under identity.
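
Deletion under identity can be sketched, very informally, as an operation over sequences of prosodic words in a coordination. The Python fragment below is a toy illustration under heavily simplified assumptions: identity is checked only on the final PW of each conjunct, PW-hood of the remnant is reduced to non-emptiness, and the sample items are invented; it is not offered as an analysis of the European Portuguese facts.

    # Informal sketch of deletion under identity in a coordinate structure:
    # a prosodic word (PW) in the first conjunct that is identical to a PW in
    # the second conjunct is deleted, provided the remnant is itself a PW.
    # The identity check and the PW-hood test are heavily simplified.

    def is_pw(remnant):
        # Placeholder well-formedness test (assumption): any non-empty remnant
        # counts as a PW; a real analysis would check stress, size, etc.
        return len(remnant) > 0

    def delete_under_identity(conjunct1, conjunct2, connective="and"):
        """Each conjunct is a list of PWs (e.g., the members of a compound)."""
        if conjunct1 and conjunct2 and conjunct1[-1] == conjunct2[-1]:
            remnant = conjunct1[:-1]
            if is_pw(remnant):
                conjunct1 = remnant
        return " ".join(["-".join(conjunct1), connective, "-".join(conjunct2)])

    # Invented compounds sharing their second member:
    print(delete_under_identity(["pre", "war"], ["post", "war"]))
    # -> pre and post-war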

The nature of the interactions between syntax and phonology, and the consequences of those interactions for the architecture of the grammar, are among the many open questions for future research on the syntax–phonology interface.

6. Critical Analysis of Scholarship

In less than five decades of research on the syntax–phonology interface, an increasing body of work has examined the effects of syntactic representation on phonological (prosodic) representation, the properties of phonological representations, the properties of phonological phenomena sensitive to prosodic structure, and the possible influences of phonology on syntax. However, as the theoretical diversity described above shows, the study of the syntax–phonology interface still raises many fundamental questions. This is unsurprising: the field is too young to have produced a systematic comparison of the different proposals on the basis of empirical evidence that considers both the full range of prosodic representations and the various types of prosodic structure-sensitive phenomena, within and across languages. Such a comparison would strengthen the evidence base of theories of prosodic phonology. The use of objective methods, in particular experimental methods for the analysis of prosodic representations in production, perception, and comprehension (Prieto, 2012), as well as corpus-based research, is a necessary step toward replicable research findings. In addition, theories of the syntax–phonology interface need to engage with findings from language processing, language learning, and language acquisition and development. For all these reasons, the syntax–phonology interface remains a challenging research field in the years to come.

Further Reading

  • Elordieta, G. (2008). An overview of theories of the syntax-phonology interface. Anuario del Seminario de Filología Vasca Julio de Urquijo, 42(1), 209–286.
  • Grijzenhout, J., & Kabak, B. (Eds.). (2009). Phonological domains: Universals and deviations. Berlin: Mouton de Gruyter.
  • Inkelas, S., & Zec, D. (Eds.). (1990). The phonology-syntax connection. Chicago: University of Chicago Press.
  • Kaisse, E. (1985). Connected speech: The interaction of syntax and phonology. San Diego: Academic Press.
  • Kager, R., & Zonneveld, W. (Eds.). (1999). Phrasal phonology. Nijmegen: Nijmegen University Press.
  • Nespor, M., & Vogel, I. (1986). Prosodic phonology. Dordrecht: Foris.
  • Pullum, G. K., & Zwicky, A. M. (1988). The syntax-phonology interface. In F. J. Newmeyer (Ed.), Linguistics: The Cambridge survey, Vol. 1: Linguistic theory: Foundations (pp. 255–280). Cambridge, UK: Cambridge University Press.
  • Selkirk, E. (1984). Phonology and syntax: The relation between sound and structure. Cambridge, MA: MIT Press.
  • Selkirk, E. (2011). The syntax-phonology interface. In J. Goldsmith, J. Riggle, & A. C. L. Yu (Eds.), The handbook of phonological theory (2nd ed., pp. 435–484). Malden, MA: Wiley-Blackwell.
  • Selkirk, E., & Lee, S. J. (Eds.). (2015b). Constituency in sentence phonology [Special issue]. Phonology, 32(1).
  • Truckenbrodt, H. (2007a). The syntax-phonology interface. In P. de Lacy (Ed.), The Cambridge handbook of phonology (pp. 435–456). Cambridge, UK: Cambridge University Press.

References

  • Agbayani, B., & Golston, C. (2010). Phonological movement in Classical Greek. Language, 86(1), 133–167.
  • Agbayani, B., & Golston, C. (2016). Phonological movement in Latin. Phonology, 33(1), 1–42.
  • Agbayani, B., Golston, C., & Ishii, T. (2015). Syntactic and prosodic scrambling in Japanese. Natural Language and Linguistic Theory, 33(1), 47–77.
  • Anderson, S. (2005). Aspects of the theory of clitics. Oxford: Oxford University Press.
  • Árnason, K. (2009). Phonological domains in Modern Icelandic. In J. Grijzenhout & B. Kabak (Eds.), Phonological domains: Universals and deviations (pp. 283–313). Berlin: Mouton de Gruyter.
  • Barbosa, P. (1996). Clitic placement in European Portuguese and the position of subjects. In A. Halpern & A. Zwicky (Eds.), Approaching second: Second position clitics and related phenomena (pp. 1–40). Stanford: CSLI Publications.
  • Beckman, M. E. (1996). The parsing of prosody. Language and Cognitive Processes, 11, 17–67.
  • Beckman, M., & Edwards, J. (1990). Lengthenings and shortenings and the nature of prosodic constituency. In J. Kingston & M. Beckman (Eds.), Papers in laboratory phonology I (pp. 152–178). Cambridge, UK: Cambridge University Press.
  • Beckman, M., & Edwards, J. (1994). Articulatory evidence for differentiating stress categories. In P. Keating (Ed.), Papers in laboratory phonology III (pp. 7–33). Cambridge, UK: Cambridge University Press.
  • Beckman, M., & Pierrehumbert, J. (1986). Intonational structure in Japanese and English. Phonology Yearbook, 3, 255–310.
  • Bennett, R., Elfner, E., & McCloskey, J. (2016). Lightest to the right: An apparently anomalous displacement in Irish. Linguistic Inquiry, 47(2), 169–234.
  • Bickmore, L. (1990). Branching nodes and prosodic categories. In S. Inkelas & D. Zec (Eds.), The phonology-syntax connection (pp. 1–17). Chicago: University of Chicago Press.
  • Booij, G. (1985). Coordination reduction in complex words: A case for prosodic phonology. In H. van der Hulst & N. Smith (Eds.), Advances in nonlinear phonology (pp. 143–160). Dordrecht: Foris.
  • Booij, G. (1996). Cliticization as prosodic integration: The case of Dutch. The Linguistic Review, 13, 219–242.
  • Borowsky, T., Kawahara, S., Shinya, T., & Sugahara, M. (Eds.). (2012). Prosody matters: Essays in honor of Elisabeth Selkirk. Sheffield: Equinox.
  • Chen, M. (1987). The syntax of Xiamen tone sandhi. Phonology Yearbook, 4, 109–149.
  • Chen, M. (1990). What must phonology know about syntax? In S. Inkelas & D. Zec (Eds.), The phonology-syntax connection (pp. 19–46). Chicago: University of Chicago Press.
  • Cheng, L., & Downing, L. (2012). Prosodic domains do not match spell-out domains. McGill Working Papers in Linguistics, 22(1), 1–14.
  • Cho, Y.-M. Y. (1990). Syntax and phrasing in Korean. In S. Inkelas & D. Zec (Eds.), The phonology-syntax connection (pp. 47–62). Chicago: University of Chicago Press.
  • Chomsky, N. (2001). Derivation by phase. In M. Kenstowicz (Ed.), Ken Hale: A life in language (pp. 1–52). Cambridge, MA: MIT Press.
  • Chomsky, N., & Halle, M. (1968). The sound pattern of English. New York: Harper and Row.
  • Christophe, A., Millotte, S., Bernal, S., & Lidz, J. (2008). Bootstrapping lexical and syntactic acquisition. Language and Speech, 51(1&2), 61–75.
  • Christophe, A., Peperkamp, S., Pallier, C., Block, E., & Mehler, J. (2004). Phonological phrase boundaries constrain lexical access: I. Adult data. Journal of Memory and Language, 51, 523–547.
  • Condoravdi, C. (1990). Sandhi rules of Greek and prosodic theory. In S. Inkelas & D. Zec (Eds.), The phonology-syntax connection (pp. 63–84). Chicago: University of Chicago Press.
  • Cooper, W., & Paccia-Cooper, J. (1980). Syntax and speech. Cambridge, MA: Harvard University Press.
  • Cooper, W., & Sorensen, J. (1977). Fundamental frequency contours at syntactic boundaries. Journal of the Acoustical Society of America, 62, 683–692.
  • Cutler, A., Dahan, D., & Donselaar, W. (1997). Prosody in the comprehension of spoken language: A literature review. Language and Speech, 40, 141–201.
  • D’Imperio, M., Elordieta, G., Frota, S., Prieto, P., & Vigário, M. (2005). Intonational phrasing in Romance: The role of prosodic and syntactic structure. In S. Frota, M. Vigário, & M. J. Freitas (Eds.), Prosodies: With special reference to Iberian languages (pp. 59–97). Berlin: Mouton de Gruyter.
  • Delais-Roussarie, E. (1996). Phonological phrasing and accentuation in French. In M. Nespor & N. Smith (Eds.), Dam phonology: HIL phonology paper II (pp. 1–38). The Hague: Holland Academic Graphics.
  • Delais-Roussarie, E., Post, B., Avanzi, M., Buthke, C., Di Cristo, A., Feldhausen, I., . . . Yoo, H.-Y. (2015). Intonational phonology of French: Developing a ToBI system for French. In S. Frota & P. Prieto (Eds.), Intonation in Romance (pp. 63–100). Oxford: Oxford University Press.
  • Dobashy, Y. (2003). Phonological phrasing and syntactic derivation (Unpublished doctoral dissertation). Cornell University, Ithaca.
  • Dobashy, Y. (2016). A prosodic domain = A spell-out domain? In H. Tokizaki (Ed.), Phonological externalization (Vol. 1, pp. 11–22). Sapporo: Sapporo University.
  • Downing, L. J. (1999). Prosodic stem≠prosodic word in Bantu. In T. A. Hall & U. Kleinhenz (Eds.), Studies on the phonological word (pp. 73–98). Amsterdam: John Benjamins.
  • Downing, L. J., & Kadenge, M. (2015). Prosodic stems in Zezuru Shona. Southern African Linguistics and Applied Language Studies, 33(3), 291–305.
  • Dresher, B. E., & van der Hulst, H. (1998). Head-dependent asymmetries in phonology: Complexity and visibility. Phonology, 15, 317–352.
  • Elfner, E. (2012). Syntax-prosody interactions in Irish (Unpublished doctoral dissertation). University of Massachusetts at Amherst.
  • Elordieta, G. (2015). Recursive phonological phrasing in Basque. Phonology, 32(1), 49–78.
  • Elordieta, G., Frota, S., & Vigário, M. (2005). Subjects, objects and intonational phrasing in Spanish and Portuguese. Studia Linguistica, 59, 110–143.
  • Frota, S. (2000). Prosody and focus in European Portuguese: Phonological phrasing and intonation. New York: Garland.
  • Frota, S. (2012). Prosodic structure, constituents and their implementation. In A. Cohn, C. Fougeron, & M. Huffman (Eds.), The Oxford handbook of laboratory phonology (pp. 255–265). Oxford: Oxford University Press.
  • Frota, S. (2014). The intonational phonology of European Portuguese. In S.-A. Jun (Ed.), Prosodic typology II (pp. 6–42). Oxford: Oxford University Press.
  • Frota, S., & Prieto, P. (Eds.). (2007). Prosodic phrasing and tunes [Special issue]. The Linguistic Review, 24(2–3).
  • Frota, S., & Vigário, M. (2002). Efeitos de peso no Português Europeu. In M. H. Mateus & C. N. Correia (Eds.), Saberes no tempo: Homenagem a Maria Henriqueta Costa Campos (pp. 315–333). Lisboa: Colibri.
  • Ghini, M. (1993). Φ‎-formation in Italian: A new proposal. Toronto Working Papers in Linguistics, 12(2), 41–79.
  • Golston, C. (1995). Syntax outranks phonology: Evidence from Ancient Greek. Phonology, 12(3), 343–368.
  • Gout, A., Christophe, A., & Morgan, J. L. (2004). Phonological phrase boundaries constrain lexical access: II. Infant data. Journal of Memory and Language, 51, 547–567.
  • Guasti, M. T., & Nespor, M. (1999). Is syntax phonology-free? In R. Kager & W. Zonneveld (Eds.), Phrasal phonology (pp. 73–98). Nijmegen: Nijmegen University Press.
  • Guasti, M. T., Nespor, M., Christophe, A., & van Oyen, B. (2000). Pre-lexical setting of the head complement parameter through prosody. In J. Weissenborn & B. Hoehle (Eds.), Approaches to bootstrapping (pp. 231–248). Amsterdam: John Benjamins.
  • Gussenhoven, C. (2015). Suprasegmentals. In J. D. Wright (Ed.), International encyclopedia of the social & behavioral sciences (2nd ed., Vol. 23, pp. 714–721). Oxford: Elsevier.
  • Gussenhoven, C., & Rietveld, T. (1992). Intonation contours, prosodic structure and preboundary lengthening. Journal of Phonetics, 20, 283–303.
  • Gutman, A., Dautriche, I., Crabbé, B., & Christophe, A. (2015). Bootstrapping the prosodic bootstrapper: Probabilistic labeling of prosodic phrases. Language Acquisition, 22, 285–309.
  • Hall, T. A. (1999). Phonotactics and the prosodic structure of German function words. In T. A. Hall & U. Kleinhenz (Eds.), Studies on the phonological word (pp. 99–131). Amsterdam: John Benjamins.
  • Halpern, A. (1995). On the placement and morphology of clitics. Stanford: CSLI Publications.
  • Hamlaoui, F., & Szendroi, K. (2015). A flexible approach to the mapping of intonational phrases. Phonology, 32(1), 79–110.
  • Hayes, B. (1984). The phonology of rhythm in English. Linguistic Inquiry, 15, 33–74.
  • Hayes, B. (1989). The prosodic hierarchy in meter. In P. Kiparsky & G. Youmans (Eds.), Rhythm and meter: Phonetics and phonology 1 (pp. 201–260). New York: Academic Press.
  • Hayes, B. (1990). Precompiled phrasal phonology. In S. Inkelas & D. Zec (Eds.), The phonology-syntax connection (pp. 85–108). Chicago: University of Chicago Press.
  • Hayes, B., & Lahiri, A. (1991). Bengali intonational phonology. Natural Language and Linguistic Theory, 9, 47–96.
  • Hellmuth, S. (2007). The relationship between prosodic structure and pitch accent distribution: Evidence from Egyptian Arabic. The Linguistic Review, 24(2–3), 291–316.
  • Himmelmann, N. P., & Ladd, D. R. (2008). Prosodic description: An introduction for fieldworkers. Language Documentation and Conservation, 2(2), 244–274.
  • Homae, F., Watanabe, H., Nakano, T., & Taga, G. (2007). Prosodic processing in the developing brain. Neuroscience Research, 59, 29–39.
  • Horne, M., & van Oostendorp, M. (Eds.). (2005). Boundaries in intonational phonology [Special issue]. Studia Linguistica, 59 (2–3).
  • Idsardi, W. (1992). The computation of prosody (Unpublished doctoral dissertation). MIT.
  • Inkelas, S., & Zec, D. (1995). Syntax-phonology Interface. In J. Goldsmith (Ed.), The handbook of phonological theory (pp. 535–549). Cambridge, MA: Blackwell.
  • Ishihara, S. (2003). Intonation and interface conditions (Unpublished doctoral dissertation). MIT.
  • Ishihara, S. (2007). Major phrase, focus intonation, multiple spell-out (MaP, FI, MSO). The Linguistic Review, 24, 137–167.
  • Ito, J., & Mester, A. (2003). Weak Layering and word binarity. In T. Honma, M. Okasaki, T. Tabata, & S. Tanaka (Eds.), A new century of phonology and phonological theory: A festschrift for Professor Shosuke Haraguchi on the occasion of his 60th birthday (pp. 26–65). Tokyo: Kaitakusha.
  • Ito, J., & Mester, A. (2009). The extended prosodic word. In J. Grijzenhout & B. Kabak (Eds.), Phonological domains: Universals and deviations (pp. 135–194). Berlin: Mouton de Gruyter.
  • Ito, J., & Mester, A. (2012). Recursive prosodic phrasing in Japanese. In T. Borowsky, S. Kawahara, T. Shinya, & M. Sugahara (Eds.), Prosody matters: Essays in honor of Elisabeth Selkirk (pp. 280–303). London: Equinox.
  • Ito, J., & Mester, A. (2013). Prosodic subcategories in Japanese. Lingua, 124, 20–40.
  • Jun, S.-A. (1996). The phonetics and phonology of Korean prosody: Intonational phonology and prosodic structure. New York: Garland.
  • Jun, S.-A. (1998). The accentual phrase in the Korean prosodic hierarchy. Phonology, 15(2), 189–226.
  • Jun, S.-A. (2003). The effect of phrase length and speech rate on prosodic phrasing. In M. J. Solé, D. Recasens, & J. Romero (Eds.), Proceedings of the 15th International Congress of Phonetic Sciences (pp. 483–486). Barcelona: UAB.
  • Jun, S.-A. (Ed.). (2005). Prosodic typology: The phonology of intonation and phrasing. Oxford: Oxford University Press.
  • Jusczyk, P. W., Hohne, E. A., & Mandel, D. R. (1995). Picking up regularities in the sound structure of the native language. In W. Strange (Ed.), Speech perception and linguistic experience: Issues in cross-language speech research (pp. 91–119). Baltimore: York Press.
  • Kabak, B., & Revithiadou, A. (2009). An interface approach to prosodic word recursion. In J. Grijzenhout & B. Kabak (Eds.), Phonological domains: Universals and deviations (pp. 15–46). Berlin: Mouton de Gruyter.
  • Kahnemuyipour, A. (2004). Syntactic categories and Persian stress (Unpublished doctoral dissertation). University of Toronto.
  • Kaisse, E. (1977). Hiatus in Modern Greek (Unpublished doctoral dissertation). Harvard University, Cambridge, MA.
  • Kaisse, E. (1990). Towards a typology of postlexical rules. In S. Inkelas & D. Zec (Eds.), The phonology-syntax connection (pp. 127–143). Chicago: University of Chicago Press.
  • Kanerva, J. M. (1990). Focusing on phonological phrases in Chichewa. In S. Inkelas & D. Zec (Eds.), The phonology-syntax connection (pp. 145–161). Chicago: University of Chicago Press.
  • Keating, P., Cho, T., Fougeron, C., & Hsu, C. (2003). Domain-initial articulatory strengthening in four languages. In J. Local, R. Ogden, & R. Temple (Eds.), Laboratory phonology 6 (pp. 145–163). Cambridge, UK: Cambridge University Press.
  • Kratzer, A., & Selkirk, E. (2007). Phase theory and prosodic spell-out. Linguistic Review, 24(2–3), 93–135.
  • Kubozono, H. (1993). The organization of Japanese prosody. Tokyo: Kurosio.
  • Kula, N. C., & Bickmore, L. (2015). Phrasal phonology in Copperbelt Bemba. Phonology, 32(1), 147–176.
  • Ladd, D. R. (1996). Intonational phonology. Cambridge, UK: Cambridge University Press.
  • Lahiri, A., & Wheeldon, L. (2011). Phonological trochaic grouping in language planning and language change. In S. Frota, G. Elordieta, & P. Prieto (Eds.), Prosodic categories: Production, perception and comprehension (pp. 17–38). Dordrecht: Springer.
  • Langus, A., Marchetto, E., Bion, R., & Nespor, M. (2012). Can prosody be used to discover hierarchical structure in continuous speech? Journal of Memory and Language, 66, 285–306.
  • Lehiste, I. (1973). Phonetic disambiguation of syntactic ambiguity. Glossa, 7, 107–122.
  • Li, W., & Yang, Y. (2008). Perception of prosodic hierarchical boundaries in Mandarin Chinese sentences. Neuroscience, 158(4), 1416–1425.
  • Lohmann, A. (2014). English coordinate constructions. Cambridge, UK: Cambridge University Press.
  • Millotte, S., René, A., Wales, R., & Christophe, A. (2008). Phonological phrase boundaries constrain the on-line syntactic analysis of spoken sentences. Journal of Experimental Psychology: Learning, Memory & Cognition, 34, 874–885.
  • Millotte, S., Wales, R., & Christophe, A. (2007). Phrasal prosody disambiguates syntax. Language and Cognitive Processes, 22(6), 898–909.
  • Morgan, J. L., & Demuth, K. (1996). Signal to syntax: An overview. In J. L. Morgan & K. Demuth (Eds.), Signal to syntax: Bootstrapping from speech to grammar in early acquisition (pp. 1–22). Mahwah, NJ: Lawrence Erlbaum Associates.
  • Myrberg, S. (2013). Sisterhood in prosodic branching. Phonology, 30(1), 73–124.
  • Napoli, D. J., & Nespor, M. (1979). The syntax of word-initial consonant gemination in Italian. Language, 55, 812–841.
  • Nakai, S., Kunnari, S., Turk, A., Suomi, K., & Ylitalo, R. (2009). Utterance-final lengthening and quantity in Northern Finnish. Journal of Phonetics, 39, 29–45.
  • Neeleman, A., & van de Koot, J. (2006). On syntactic and phonological representations. Lingua, 116, 1524–1552.
  • Nespor, M. (1990). On the separation of prosodic and rhythmic phonology. In S. Inkelas & D. Zec (Eds.), The phonology-syntax connection (pp. 243–258). Chicago: University of Chicago Press.
  • Nespor, M., & Vogel, I. (1982). Prosodic domains of external sandhi rules. In H. van der Hulst & N. Smith (Eds.), The structure of phonological representations (Vol. I, pp. 225–255). Dordrecht: Foris.
  • Nespor, M., & Vogel, I. (1989). On clashes and lapses. Phonology, 6, 69–116.
  • Odden, D. (1987). Kimatuumbi phrasal phonology. Phonology Yearbook, 4, 13–26.
  • Odden, D. (1990). Syntax, lexical rules and postlexical rules in Kimatuumbi. In S. Inkelas & D. Zec (Eds.), The phonology-syntax connection (pp. 259–277). Chicago: University of Chicago Press.
  • Pak, M. (2005). Explaining branchingness effects in phrasal phonology. Proceedings of the West Coast Conference in Formal Linguistics, 24, 308–316.
  • Pak, M. (2008). The postsyntactic derivation and its phonological reflexes (Unpublished doctoral dissertation). University of Pennsylvania.
  • Pan, H.-h. (2007). Initial strengthening of lexical tones in Taiwanese Min. In C. Gussenhoven & T. Riad (Eds.), Tones and tunes, Vol. 1: Typological studies in word and sentence prosody (pp. 271–292). Berlin: Mouton de Gruyter.
  • Peperkamp, S. (1997). Prosodic words. HIL Dissertations 34. The Hague: Holland Academic Graphics.
  • Pierrehumbert, J., & Beckman, M. (1988). Japanese tone structure. Cambridge, MA: MIT Press.
  • Potts, C. (2005). The logic of conventional implicatures. Oxford: Oxford University Press.
  • Prieto, P. (2005). Syntactic and eurhythmic constraints on phrasing decisions. Studia Linguistica, 59, 194–222.
  • Prieto, P. (2012). Experimental methods and paradigms for prosodic analysis. In A. C. Cohn, C. Fougeron, & M. K. Huffman (Eds.), Handbook in laboratory phonology (pp. 528–538). Oxford: Oxford University Press.
  • Rizzi, L. (1997). The fine structure of the left periphery. In L. Haegeman (Ed.), Elements of grammar: Handbook in generative syntax (pp. 75–116). Dordrecht: Kluwer.
  • Rotenberg, J. (1978). The syntax of phonology (Unpublished doctoral dissertation). MIT.
  • Samek-Lodovici, V. (2005). Prosody–syntax interaction in the expression of focus. Natural Language & Linguistic Theory, 23(3), 687–755.
  • Samuels, B. (2009). The structure of phonological theory (Unpublished doctoral dissertation). Harvard University, Cambridge, MA.
  • Samuels, B. (2010). Phonological derivation by phase: Evidence from Basque. Proceedings of PLC 33 (PWPL 16.1), 166–175.
  • Sândalo, F., & Truckenbrodt, H. (2002). Some notes on phonological phrasing in Brazilian Portuguese. Delta, 18, 1–30.
  • Schütze, C. (1994). Serbo-Croatian second position clitic placement. MIT Working Papers in Linguistics, 21, 373–473.
  • Seidl, A. (2001). Minimal indirect reference: A theory of the syntax-phonology interface. London: Routledge.
  • Selkirk, E. (1974). French liaison and the X̄ notation. Linguistic Inquiry, 5, 573–590.
  • Selkirk, E. (1980). Prosodic domains in phonology: Sanskrit revisited. In M. Aronoff & R. T. Oehrle (Eds.), Language sound structure (pp. 107–136). Cambridge, MA: MIT Press.
  • Selkirk, E. (1981). The phrase phonology of English and French. Bloomington: Indiana University Linguistics Club.
  • Selkirk, E. (1986). On derived domains in sentence phonology. Phonology Yearbook, 3, 371–405.
  • Selkirk, E. (1990). On the nature of prosodic constituency: Comments on Beckman and Edward’s paper. In J. Kingston & M. Beckman (Eds.), Papers in laboratory phonology I (pp. 179–200). Cambridge, UK: Cambridge University Press.
  • Selkirk, E. (1995). The prosodic structure of function words. In J. N. Beckman, L. W. Dickey, & S. Urbanczyk (Eds.), University of Massachusetts occasional papers in linguistics, Vol. 18: Papers in optimality theory (pp. 439–469). Amherst: GLSA, University of Massachusetts.
  • Selkirk, E. (2000). The interaction of constraints on prosodic phrasing. In M. Horne (Ed.), Prosody: Theory and experiment (pp. 231–261). Dordrecht: Kluwer Academic.
  • Selkirk, E. (2005). Comments on intonational phrasing. In S. Frota, M. Vigário, & M. J. Freitas (Eds.), Prosodies: With special reference to Iberian languages (pp. 11–58). Berlin: Mouton de Gruyter.
  • Selkirk, E., & Lee, S. J. (2015a). Constituency in sentence phonology: An introduction. Phonology, 32(1), 1–18.
  • Selkirk, E., & Shen, T. (1990). Prosodic domains in Shanghai Chinese. In S. Inkelas & D. Zec (Eds.), The phonology-syntax connection (pp. 313–337). Chicago: University of Chicago Press.
  • Selkirk, E., & Tateishi, K. (1988). Syntax and downstep in Japanese. In C. Georgopoulos & R. Ishihara (Eds.), Essays in honour of S.-Y. Kuroda (pp. 519–543). Dordrecht: Kluwer.
  • Shattuck-Hufnagel, S., & Turk, A. (1996). A prosody tutorial for investigators of auditory sentence processing. Journal of Psycholinguistic Research, 25, 193–247.
  • Shukla, M., White, K. S., & Aslin, R. N. (2011). Prosody guides the rapid mapping of auditory word forms onto visual objects in 6-mo-old infants. Proceedings of the National Academy of Sciences, 108(15), 6038–6043.
  • Skopeteas, S., Féry, C., & Asatiani, R. (2009). Word order and intonation in Georgian. Lingua, 119(1), 102–127.
  • Steedman, M. (1991). Structure and intonation. Language, 67, 262–296.
  • Teliga, V., Agbayani, B., & Golston, C. (2016). Phonological movement in Ukrainian. Proceedings of AMP 2015, 1–10.
  • Tenani, L. (2002). Domínios prosódicos no Português (Unpublished doctoral dissertation). Universidade Estadual de Campinas.
  • Truckenbrodt, H. (1995). Phonological phrases: Their relation to syntax, focus and prominence (Unpublished doctoral dissertation). MIT.
  • Truckenbrodt, H. (1999). On the relation between syntactic phrases and phonological phrases. Linguistic Inquiry, 30, 219–255.
  • Truckenbrodt, H. (2002). Upstep and embedded register levels. Phonology, 19(1), 77–120.
  • Truckenbrodt, H. (2007b). Upstep on edge tones and on nuclear accents. In C. Gussenhoven & T. Riad (Eds.), Tones and tunes, Vol. 2: Experimental studies in word and sentence prosody (pp. 349–386). Berlin: Mouton de Gruyter.
  • Truckenbrodt, H. (2015). Intonation phrases and speech acts. In M. Kluck, D. Ott, & M. de Vries (Eds.), Parenthesis and ellipsis: Cross-linguistic and theoretical perspectives (pp. 301–349). Berlin: Walter de Gruyter.
  • Truckenbrodt, H., & Féry, C. (2015). Hierarchical organisation and tonal scaling. Phonology, 32(1), 19–47.
  • Turk, A. (2012). The temporal implementation of prosodic structure. In A. Cohn, C. Fougeron, & M. Huffman (Eds.), The Oxford handbook of laboratory phonology (pp. 242–253). Oxford: Oxford University Press.
  • Turk, A. E., & Shattuck-Hufnagel, S. (2007). Multiple targets of phrase-final lengthening in American English words. Journal of Phonetics, 35(4), 445–472.
  • Uriagereka, J. (1999). Multiple spell-out. In S. Epstein & N. Hornstein (Eds.), Working minimalism (pp. 251–282). Cambridge, MA: MIT Press.
  • Van der Hulst, H. (2009). Two phonologies. In J. Grijzenhout & B. Kabak (Eds.), Phonological domains: Universals and deviations (pp. 315–352). Berlin: Mouton de Gruyter.
  • Van der Hulst, H. (2010). A note on recursion in phonology. In H. Van der Hulst (Ed.), Recursion and human language (pp. 301–341). Berlin: De Gruyter Mouton.
  • Vigário, M. (2003). The prosodic word in European Portuguese. Berlin: Mouton de Gruyter.
  • Vigário, M. (2010). Prosodic structure between the prosodic word and the phonological phrase: Recursive nodes or an independent domain? The Linguistic Review, 27(4), 485–530.
  • Vigário, M., & Fernandes-Svartman, F. (2010). A atribuição tonal em compostos no Português do Brasil. In A. M. Brito, F. Silva, J. Veloso, & A. Fiéis (Eds.), XXV encontro nacional da Associação Portuguesa de Linguística: Textos seleccionados (pp. 769–786). Porto: Associação Portuguesa de Linguística.
  • Vigário, M., & Frota, S. (2002). Prosodic word deletion in coordinate structures. Journal of Portuguese Linguistics, 1(2), 241–264.
  • Vigário, M., & Frota, S. (2003). The intonation of Standard and Northern European Portuguese. Journal of Portuguese Linguistics, 2(2), 115–137.
  • Vogel, I. (2009). The status of the clitic group. In J. Grijzenhout & B. Kabak (Eds.), Phonological domains: Universals and deviations (pp. 15–46). Berlin: Mouton de Gruyter.
  • Wagner, M., & Watson, D. (2010). Experimental and theoretical advances in prosody: A review. Language and Cognitive Processes, 25(7–9), 905–945.
  • Watson, D., & Gibson, E. (2004). The relationship between intonational phrasing and syntactic structure in language production. Language and Cognitive Processes, 19(6), 713–755.
  • Wheeldon, L., & Lahiri, A. (1997). Prosodic units in speech production. Journal of Memory and Language, 37, 356–381.
  • Wiese, R. (1992). Prosodic phonology and its role in the processing of written language. In G. Görz (Ed.), Konvens 92 (pp. 139–148). Heidelberg: Springer.
  • Zec, D., & Inkelas, S. (1990). Prosodically constrained syntax. In S. Inkelas & D. Zec (Eds.), The phonology-syntax connection (pp. 365–378). Chicago: University of Chicago Press.
  • Zerbian, S. (2007). Phonological phrasing in Northern Sotho (Bantu). The Linguistic Review, 24, 233–262.
  • Zubizarreta, M. L. (1998). Prosody, focus, and word order. Cambridge, MA: MIT Press.