The term rendaku, sometimes translated as sequential voicing, denotes a morphophonemic phenomenon in Japanese. In a prototypical case, an alternating morpheme appears with an initial voiceless obstruent as a word on its own or as the initial element (E1) in a compound but with an initial voiced obstruent as the second element (E2) in a two-element compound. For example, the simplex word /take/ ‘bamboo’ and the compound /take+yabu/ ‘bamboo grove’ (cf. /yabu/ ‘grove’) begin with voiceless /t/, but this morpheme meaning ‘bamboo’ begins with voiced /d/ in /sao+dake/ ‘bamboo (made into a) pole’ (cf. /sao/ ‘pole’). Rendaku was already firmly established in 8th-century Old Japanese (OJ), the earliest variety for which extensive written records exist, and subsequent sound changes have made the alternations phonetically heterogeneous. Many OJ compounds with eligible E2s did not undergo rendaku, and the phenomenon remains pervasively irregular in modern Japanese. There are, however, many factors that promote or inhibit rendaku, and some of these appear to influence native-speaker behavior on experimental tasks. The best known phonological factor is Lyman’s Law, according to which rendaku does not apply to E2s that contain a non-initial voiced obstruent. Many theoretical phonologists endorse the idea that Lyman’s Law is a sub-case of the Obligatory Contour Principle, which rules out identical or similar units if they would be adjacent in some domain. Other well-known factors involve vocabulary stratum (e.g., the resistance to rendaku of recently borrowed E2s) or the morphological/semantic relationship between E2 and E1 (e.g., the resistance to rendaku of coordinate compounds). Some morphemes are idiosyncratically immune to rendaku. Other morphemes alternate but undergo rendaku in some compounds while failing to undergo it in others, even though no known factor is relevant. 
In addition, many individual compounds vary between a form with rendaku and a form without, and this variability is often not reflected in dictionary entries. Despite its irregularity, rendaku is productive in the sense that it often applies to newly created compounds. Many compounds, of course, are stored (with or without rendaku) in a speaker’s lexicon, but the fact that native speakers can apply rendaku not just to existing E2s in novel compounds but even to made-up E2s shows that rendaku as an active process is somehow incorporated into the grammar.
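As a rough illustration (not part of the article), the prototypical voicing alternation and the blocking condition of Lyman’s Law can be sketched as a toy predicate. The romanized transcription and the voiceless-to-voiced mapping below are simplifying assumptions; real rendaku is lexically conditioned in ways no such sketch captures.

```python
# Toy model of rendaku with Lyman's Law blocking.
# Romanization and the voicing map are deliberate simplifications.

VOICING = {"k": "g", "s": "z", "t": "d", "h": "b"}  # voiceless -> voiced onset
VOICED_OBSTRUENTS = set("bdgz")

def lyman_blocks(e2: str) -> bool:
    """True if E2 contains a non-initial voiced obstruent (Lyman's Law)."""
    return any(c in VOICED_OBSTRUENTS for c in e2[1:])

def compound(e1: str, e2: str) -> str:
    """Voice E2's initial obstruent unless it is ineligible or Lyman's Law blocks."""
    if e2[0] in VOICING and not lyman_blocks(e2):
        return e1 + VOICING[e2[0]] + e2[1:]
    return e1 + e2

print(compound("sao", "take"))   # saodake  (rendaku applies)
print(compound("kami", "kaze"))  # kamikaze (blocked: non-initial /z/ in E2)
```

The second example shows the blocking effect: /kaze/ ‘wind’ contains the voiced obstruent /z/, so the compound surfaces without rendaku.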
Article
Pier Marco Bertinetto
Speech rhythm is a popular research topic but remains a poorly understood phenomenon. A critical assessment of the algorithmic tools developed over the last two decades to analyze rhythm in natural languages shows that they can at best yield a topological arrangement of the languages compared, rather than objective and absolute measures. Moreover, all available tools are heavily influenced by every source of variability, in particular speech rate, speech style (most notably, spontaneous vs. read), and even speaker identity. Although this demonstrates their high sensitivity to the details of the input, it raises serious doubts about the relevance of the comparative results obtained in the study of different languages. Future research will have to overcome these weaknesses.
Most importantly, readers should be alerted to the false idol of a common Romance rhythmic footprint. Close inspection of the prosodic characteristics of the main Romance languages indicates that the differences are indeed remarkable and likely to feed diverging rhythmical behaviors. Besides, one should take into account the vast intrafamily variability, up to the tiniest local vernaculars, which often diverge in extraordinary ways from the ‘roof’ language supposed to constitute a sort of common denominator.
Article
Walter Breu
Albanian has been documented in historical texts only since the 16th century. By contrast, it had been in continuous contact with languages of the Latin phylum since the first encounters of Romans and Proto-Albanians in the 2nd century BCE. Given the late documentation of Albanian, the different layers of matter borrowings from Latin and its daughter languages are relevant for the reconstruction of Proto-Albanian phonology and its development through the centuries. Latinisms also play a role in the discussion about the original home of the Albanians.
From the very beginning, Latin influence seems to have been all-embracing with respect to the lexical domain, including word formation and lexical calquing. This is true not only for Latin itself but also for later Romance, especially for Italian historical varieties, less so for now extinct Balkan-Romance vernaculars like Dalmatian, and doubtful for Romanian, whose similarities with Albanian were strongly overestimated in the past. Many Latin-based words in Albanian have the character of indirect Latinisms, as they go back to originally Latin borrowings via Ancient (and Medieval) Greek, and there is also the problem of learned borrowings from Medieval Latin. As for other Romance languages, only French has to be considered a source of fairly recent borrowings, often hardly distinguishable from Italian ones due to analogical integration processes. In spite of 19th-century claims to this effect, Latin (and Romance) grammatical influence on Albanian is (next to) zero.
In the Italo-Albanian varieties that have developed all over southern Italy since the late Middle Ages, based on a succession of immigration waves, Italian influence has been especially strong, not only in the lexical domain but also in parts of the grammar.
Article
Ocke-Schwen Bohn
The study of second language phonetics is concerned with three broad and overlapping research areas: the characteristics of second language speech production and perception, the consequences of perceiving and producing nonnative speech sounds with a foreign accent, and the causes and factors that shape second language phonetics. Second language learners and bilinguals typically produce and perceive the sounds of a nonnative language in ways that are different from native speakers. These deviations from native norms can be attributed largely, but not exclusively, to the phonetic system of the native language. Non-nativelike speech perception and production may have both social consequences (e.g., stereotyping) and linguistic–communicative consequences (e.g., reduced intelligibility). Research on second language phonetics over the past ca. 30 years has resulted in a fairly good understanding of the causes of nonnative speech production and perception, and these insights have to a large extent been driven by tests of the predictions of models of second language speech learning and of cross-language speech perception. It is generally accepted that the characteristics of second language speech are predominantly due to how second language learners map the sounds of the nonnative language onto the native language. This mapping cannot be entirely predicted from theoretical or acoustic comparisons of the sound systems of the languages involved, but has to be determined empirically through tests of perceptual assimilation. The most influential learner factors that shape how a second language is perceived and produced are the age of learning and the amount and quality of exposure to the second language.
A very important and far-reaching finding from research on second language phonetics is that age effects are not due to neurological maturation, which could result in the attrition of phonetic learning ability, but to the way phonetic categories develop as a function of experience with surrounding sound systems.
Article
From a typological perspective, the phoneme inventories of Romance languages are of medium size: For instance, most consonant systems contain between 20 and 23 phonemes. An innovation with respect to Latin is the appearance of palatal and palato-alveolar consonants such as /ɲ ʎ/ (Italian, Spanish, Portuguese), /ʃ ʒ/ (French, Portuguese), and /tʃ dʒ/ (Italian, Romanian); a few varieties (e.g., Romansh and a number of Italian dialects) also show the palatal stops /c ɟ/. Besides palatalization, a number of lenition processes (both sonorization and spirantization) have characterized the diachronic development of plosives in Western Romance languages (cf. the French word chèvre “goat” < lat. CĀPRA(M)). Diachronically, both sonorization and spirantization occurred in postvocalic position, where the latter can still be observed as an allophonic rule in present-day Spanish and Sardinian. Sonorization, on the other hand, occurs synchronically after nasals in many southern Italian dialects.
The most fundamental change in the diachrony of the Romance vowel systems derives from the demise of contrastive Latin vowel quantity. However, some Raeto-Romance and northern Italo-Romance varieties have developed new quantity contrasts. Moreover, standard Italian displays allophonic vowel lengthening in open stressed syllables (e.g., /ˈka.ne/ “dog” → [ˈkaːne]). The stressed vowel systems of most Romance varieties contain either five phonemes (Spanish, Sardinian, Sicilian) or seven phonemes (Portuguese, Catalan, Italian, Romanian). Larger vowel inventories are typical of “northern Romance” and appear in dialects of Northern Italy as well as in Raeto- and Gallo-Romance languages. The most complex vowel system is found in standard French with its 16 vowel qualities, comprising the 3 rounded front vowels /y ø œ/ and the 4 nasal vowel phonemes /ɑ̃ ɔ̃ ɛ̃ œ̃/.
Romance languages differ in their treatment of unstressed vowels. Whereas Spanish displays the same five vowels /i e a o u/ in both stressed and unstressed syllables (except for unstressed /u/ in word-final position), many southern Italian dialects have a considerably smaller inventory of unstressed vowels as opposed to their stressed vowels.
The phonotactics of most Romance languages is strongly determined by their typological character as “syllable languages.” Indeed, the phonological word only plays a minor role as very few phonological rules or phonotactic constraints refer, for example, to the word-initial position (such as Italian consonant doubling or the distribution of rhotics in Ibero-Romance), or to the word-final position (such as obstruent devoicing in Raeto-Romance). Instead, a wide range of assimilation and lenition processes apply across word boundaries in French, Italian, and Spanish.
In line with their fundamental typological nature, Romance languages tend to allow syllable structures of only moderate complexity. Inventories of syllable types are smaller than, for example, those of Germanic languages, and the segmental makeup of syllable constituents mostly follows universal preferences of sonority sequencing. Moreover, many Romance languages display a strong preference for open syllables as reflected in the token frequency of syllable types. Nevertheless, antagonistic forces aiming at profiling the prominence of stressed syllables are visible in several Romance languages as well. Within the Ibero-Romance domain, more complex syllable structures and vowel reduction processes are found in the periphery, that is, in Catalan and Portuguese. Similarly, northern Italian and Raeto-Romance dialects have experienced apocope and/or syncope of unstressed vowels, yielding marked syllable structures in terms of both constituent complexity and sonority sequencing.
Article
Diane Brentari, Jordan Fenlon, and Kearsy Cormier
Sign language phonology is the abstract grammatical component where primitive structural units are combined to create an infinite number of meaningful utterances. Although the notion of phonology is traditionally based on sound systems, phonology also includes the equivalent component of the grammar in sign languages, because it is tied to the grammatical organization, and not to particular content. This definition of phonology helps us see that the term covers all phenomena organized by constituents such as the syllable, the phonological word, and the higher-level prosodic units, as well as the structural primitives such as features, timing units, and autosegmental tiers, and it does not matter if the content is vocal or manual. Therefore, the units of sign language phonology and their phonotactics provide opportunities to observe the interaction between phonology and other components of the grammar in a different communication channel, or modality. This comparison allows us to better understand how the modality of a language influences its phonological system.
Article
Gerard Docherty
Sociophonetics research is located at the interface of sociolinguistics and experimental phonetics. Its primary focus is to shed new light on the social-indexical phonetic properties of speech. Research in this area has revealed a wide range of phonetic parameters that map systematically onto social factors relevant to speakers and listeners, many of which involve particularly fine-grained control of both the spatial and temporal dimensions of speech production. Recent methodological developments in acoustic and articulatory methods have yielded new insights into the nature of sociophonetic variation at the scale of entire speech communities as well as in respect of the detailed speech production patterns of individual speakers. The key theoretical dimension of sociophonetic research is to consider how models of speech production, processing, and acquisition should be informed by rapidly increasing knowledge of the ubiquity of social-indexical phonetic variation carried by the speech signal. In particular, this work is focused on inferring from the performance of speakers and listeners how social-indexical phonetic properties are interwoven into phonological representation alongside those properties associated with the transmission and interpretation of lexical-propositional information.
Article
Isao Tokuda
In the source-filter theory, the mechanism of speech production is described as a two-stage process: (a) The air flow coming from the lungs induces tissue vibrations of the vocal folds (i.e., two small muscular folds located in the larynx) and generates the “source” sound. Turbulent airflows are also created at the glottis or at the vocal tract to generate noisy sound sources. (b) Spectral structures of these source sounds are shaped by the vocal tract “filter.” Through the filtering process, frequency components corresponding to the vocal tract resonances are amplified, while the other frequency components are diminished. The source sound mainly characterizes the vocal pitch (i.e., fundamental frequency), while the filter forms the timbre. The source-filter theory provides a very accurate description of normal speech production and has been applied successfully to speech analysis, synthesis, and processing. Separate control of the source (phonation) and the filter (articulation) is advantageous for acoustic communication, especially for human language, which requires the expression of various phonemes realized by flexible maneuvers of the vocal tract configuration. Based on this idea, articulatory phonetics focuses on the positions of the vocal organs to describe the speech sounds produced.
The source-filter theory elucidates the mechanism of “resonance tuning,” that is, a specialized way of singing. To increase efficiency of the vocalization, soprano singers adjust the vocal tract filter to tune one of the resonances to the vocal pitch. Consequently, the main source sound is strongly amplified to produce a loud voice, which is well perceived in a large concert hall over the orchestra.
It should be noted that the source-filter theory rests on the assumption that the source and the filter are independent of each other. Under certain conditions, however, the source and the filter interact with each other. The source sound is influenced by the vocal tract geometry and by the acoustic feedback from the vocal tract. Such source-filter interaction induces various voice instabilities, for example, sudden pitch jumps, subharmonics, resonance, quenching, and chaos.
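The two-stage process can be illustrated numerically (not part of the article): a periodic impulse train stands in for the glottal source, and a single second-order resonator stands in for one vocal tract resonance. All parameter values below (sampling rate, fundamental frequency, formant frequency and bandwidth) are illustrative assumptions.

```python
# Minimal source-filter synthesis sketch; values are illustrative only.
import numpy as np

fs = 16000   # sampling rate (Hz)
f0 = 120     # fundamental frequency of the glottal source (Hz)
dur = 0.5    # duration (s)

# (a) Source: an impulse train approximating the glottal pulses.
n = int(fs * dur)
source = np.zeros(n)
source[::fs // f0] = 1.0

# (b) Filter: one vocal-tract resonance as a two-pole IIR resonator.
def resonator(x, freq, bw, fs):
    """Amplify frequency components near `freq` (bandwidth `bw` in Hz)."""
    r = np.exp(-np.pi * bw / fs)          # pole radius from bandwidth
    theta = 2 * np.pi * freq / fs         # pole angle from center frequency
    a1, a2 = -2 * r * np.cos(theta), r * r
    y = np.zeros_like(x)
    for i in range(len(x)):
        y[i] = x[i] - a1 * (y[i - 1] if i > 0 else 0.0) \
                    - a2 * (y[i - 2] if i > 1 else 0.0)
    return y

# Shape the source with a resonance near 500 Hz (roughly a first formant).
speech = resonator(source, freq=500, bw=80, fs=fs)
```

In the output spectrum, harmonics of the 120 Hz source near the 500 Hz resonance are amplified relative to those far from it, which is the filtering effect described above.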
Article
Andres M. Kristol
Occitan, a language of high medieval literary culture, historically occupies the southern third of France. Today it is dialectalized and highly endangered, like all the regional languages of France. Its main linguistic regions are Languedocien, Provençal, Limousin, Auvergnat, Vivaro-dauphinois (Alpine Provençal) and, linguistically on the fringes of the domain, Gascon. Despite its dialectalization, its typological unity and the profound difference that separates it from Northern Galloroman (Oïl dialects, Francoprovençal) and Gallo-Italian remain clearly perceptible. Its history is characterized by several ruptures (the Crusade against the Albigensians, the French Revolution) and several attempts at “rebirth” (the Baroque period, the Felibrige movement in the second half of the 19th century, the Occitanist movement of the 20th century). Towards the end of the Middle Ages, the Occitan koinè, a literary and administrative language integrating the main dialectal characteristics of all regions, was lost and replaced by makeshift regional spellings based on the French spelling. The modern Occitanist orthography tries to overcome these divisions by coming as close as possible to the medieval, “classical” written tradition, while respecting the main regional characteristics. Being a bridge language between northern Galloroman (Oïl varieties and Francoprovençal), Italy, and Iberoromania, Occitan is a relatively conservative language in terms of its phonetic evolution from the popular spoken Latin of western Romania, its morphology, and its syntax (absence of subject clitics in the verbal system, conservation of a fully functional simple past tense). Only Gascon, which was already considered a specific language in the Middle Ages, presents particular structures that make it unique among Romance languages (development of a system of enunciative particles).
Article
Kodi Weatherholtz and T. Florian Jaeger
The seeming ease with which we usually understand each other belies the complexity of the processes that underlie speech perception. One of the biggest computational challenges is that different talkers realize the same speech categories (e.g., /p/) in physically different ways. We review the mixture of processes that enable robust speech understanding across talkers despite this lack of invariance. These processes range from automatic pre-speech adjustments of the distribution of energy over acoustic frequencies (normalization) to implicit statistical learning of talker-specific properties (adaptation, perceptual recalibration) to the generalization of these patterns across groups of talkers (e.g., gender differences).
Article
Patrice Speeter Beddor
In their conversational interactions with speakers, listeners aim to understand what a speaker is saying, that is, they aim to arrive at the linguistic message conveyed by the input speech signal, a message interwoven with social and other information. Across the more than 60 years of speech perception research, a foundational issue has been to account for listeners’ ability to achieve stable linguistic percepts corresponding to the speaker’s intended message despite highly variable acoustic signals. Research has especially focused on acoustic variants attributable to the phonetic context in which a given phonological form occurs and on variants attributable to the particular speaker who produced the signal. These context- and speaker-dependent variants reveal the complex—albeit informationally rich—patterns that bombard listeners in their everyday interactions.
How do listeners deal with these variable acoustic patterns? Empirical studies that address this question provide clear evidence that perception is a malleable, dynamic, and active process. Findings show that listeners perceptually factor out, or compensate for, the variation due to context yet also use that same variation in deciding what a speaker has said. Similarly, listeners adjust, or normalize, for the variation introduced by speakers who differ in their anatomical and socio-indexical characteristics, yet listeners also use that socially structured variation to facilitate their linguistic judgments. Investigations of the time course of perception show that these perceptual accommodations occur rapidly, as the acoustic signal unfolds in real time. Thus, listeners closely attend to the phonetic details made available by different contexts and different speakers. The structured, lawful nature of this variation informs perception.
Speech perception changes over time not only in listeners’ moment-by-moment processing, but also across the life span of individuals as they acquire their native language(s), non-native languages, and new dialects and as they encounter other novel speech experiences. These listener-specific experiences contribute to individual differences in perceptual processing. However, even listeners from linguistically homogenous backgrounds differ in their attention to the various acoustic properties that simultaneously convey linguistically and socially meaningful information. The nature and source of listener-specific perceptual strategies serve as an important window on perceptual processing and on how that processing might contribute to sound change.
Theories of speech perception aim to explain how listeners interpret the input acoustic signal as linguistic forms. A theoretical account should specify the principles that underlie accurate, stable, flexible, and dynamic perception as achieved by different listeners in different contexts. Current theories differ in their conception of the nature of the information that listeners recover from the acoustic signal, with one fundamental distinction being whether the recovered information is gestural or auditory. Current approaches also differ in their conception of the nature of phonological representations in relation to speech perception, although there is increasing consensus that these representations are more detailed than the abstract, invariant representations of traditional formal phonology. Ongoing work in this area investigates how both abstract information and detailed acoustic information are stored and retrieved, and how best to integrate these types of information in a single theoretical model.
Article
Thomas W. Stewart
Segment-level alternations that realize morphological properties or that have other morphological significance stand either at an interface or along a continuum between phonology and morphology. The typical source for morphologically correlated sound alternations is the automatic phonology, interacting with discrete morphological operations such as affixation. Traditional morphophonology depends on the association of an alternation with a distinct concatenative marker, but the rise of stem changes that are in themselves morphological markers, be they inflectional or derivational, resides in the fading of phonetic motivation in the conditioning environment, and thus an increase in independence from historical phonological sources. The clearest cases are sole-exponent alternations, such as English man~men or slide~slid, but it is not necessary that the remainder of an earlier conditioning affix be entirely absent, only that synchronic conditioning is fully opaque. Once a sound-structural pattern escapes the unexceptional workings of a language's general phonological patterning, yet reliably serves a signifying function for one or more morphological properties, the morphological component of the grammar bears a primary if not sole responsibility for accounting for the pattern’s distribution.
It is not uncommon for the transition of analysis into morphology from (morpho)phonology to be a fitful one. There is an established tendency for phonological theory to hold sway in matters of sound generally, even at the expense of challenging learnability through the introduction of remote representations, ad hoc triggering devices, or putative rules of phonology of very limited generality. On the morphological side, a bias in favor of separable morpheme-like units and syntax-like concatenative dynamics has relegated relations like stem alternations to the margins, no matter how regular, productive, or distinct from general phonological patterns in the language in question overall. This parallel focus of each component on a “specialization,” as it were, has left precisely the morphologically significant stem alternations, such as Germanic Ablaut and Celtic initial-consonant mutation, poorly served. In both families, these robust sound patterns generally lack reliable synchronic phonological conditioning. Instead, one must crucially refer to grammatical structure and morphological properties in order to account for their distributions. It is no coincidence that such stem alternations look phonological, just as fossils resemble the forms of the organisms that left them. The work of morphology likewise does not depend on alternant segments sharing aspects of sound, but the salience of the system may benefit from perceptible coherence of form. One may observe what sound relations exist between stem alternants, but it is neither necessary nor realistic to oblige a speaker/learner to generate established stem alternations anew from remote underlying representations, as if the alternations were always still arising; to do so constitutes a grafting of the technique of internal reconstruction as a recapitulating simulation within the synchronic grammar.
Article
Stela Manova
Subtraction consists in shortening the shape of the word. It operates on morphological bases such as roots, stems, and words in word-formation and inflection. Cognitively, subtraction is the opposite of affixation, since the latter adds meaning and form (an overt affix) to roots, stems, or words, while the former adds meaning through subtraction of form. As subtraction and affixation work at the same level of grammar (morphology), they sometimes compete for the expression of the same semantics in the same language; for example, the pattern ‘science-scientist’ in German has derivations such as Physik ‘physics’—Physik-er ‘physicist’ and Astronom-ie ‘astronomy’—Astronom ‘astronomer’. Subtraction can delete phonemes and morphemes. In the case of phoneme deletion, it is usually the final phoneme of a morphological base that is deleted, and sometimes that phoneme can coincide with a morpheme.
Some analyses of subtraction(-like shortenings) rely not on morphological units (roots, stems, morphological words, affixes) but on the phonological word, which sometimes results in alternative definitions of subtraction. Additionally, syntax-based theories of morphology that do not recognize a morphological component of grammar and operate only with additive syntactic rules claim that subtraction actually consists in the addition of defective phonological material that causes adjustments in phonology and leads to deletion of form on the surface. Other scholars postulate subtraction only if the deleted material does not coincide with an existing morpheme elsewhere in the language; if it does, they call the change backformation. There is also some controversy regarding what is a proper word-formation process and whether what is derived by subtraction is true word-formation or just marginal or extragrammatical morphology; that is, the question is whether shortenings such as hypocoristics and clippings should be treated on a par with derivations such as, for example, the pattern of science-scientist.
Finally, research in subtraction also faces terminology issues in the sense that in the literature different labels have been used to refer to subtraction(-like) formations: minus feature, minus formation, disfixation, subtractive morph, (subtractive) truncation, backformation, or just shortening.
Article
Erik M. Petzell
Swedish is a V2 language, like all Germanic except English, with a basic VO word order and a suffixed definite article, like all North Germanic. Swedish is the largest of the North Germanic languages, and the official language of both Sweden and Finland, in the latter case alongside the majority language Finnish. Worldwide, there are about 10.5 million first-language (L1) speakers. The number of L2 Swedish speakers is unclear: In Sweden and Finland alone, there are at least 3 million L2 speakers. Genealogically, Swedish is closest to Danish. Together, they formed the eastern branch of North Germanic during the Viking age. Today, this unity of old is often obscured by later developments. Typologically, in the early 21st century, Swedish is closer to Norwegian than to Danish.
In the late 19th and early 20th centuries, there was great dialectal variation across the Swedish-speaking area. Very few of the traditional dialects have survived into the present, however. In the early 21st century, there are only some isolated areas, where spoken standard Swedish has not completely taken over, for example, northwestern Dalecarlia. Spoken standard Swedish is quite close to the written language. This written-like speech was promoted by primary school teachers from the late 19th century onward. In the 21st century, it comes in various regional guises, which differ from each other prosodically and display some allophonic variation, for example, in the realization of /r/.
During the late Middle Ages, Swedish was in close contact with Middle Low German. This had a massive impact on the lexicon, leading to loans in both the open and closed classes and even import of derivational morphology. Structurally, Swedish lost case and verbal agreement morphology, developed mandatory expletive subjects, and changed its word order in subordinate clauses. Swedish shares much of this development with Danish and Norwegian.
In the course of the early modern era, Swedish and Norwegian converged further, developing very similar phonological systems. The more conspicuous of the shared traits include two different rounded high front vowels, front /y/ and front-central /ʉ/, palatalization of initial /k/ and /g/ before front vowels, and a preserved phonemic tonal distinction.
As for morphosyntax, however, Swedish has sometimes gone its own way, distancing itself from both Norwegian and Danish. For instance, Swedish has a distinct non-agreeing active participle (supine), and it makes use of the morphological s-passive in a wider variety of contexts than Danish and Norwegian. Moreover, verbal particles always precede even light objects in Swedish, for example, ta upp den, literally ‘take up it’, while Danish and Norwegian pattern with, for example, English: tag den op/ta den opp, literally ‘take it up’. Furthermore, finite forms of the auxiliary have may be deleted in subordinate clauses in Swedish but never in Danish/Norwegian.
Article
Sónia Frota and Marina Vigário
The syntax–phonology interface refers to the way syntax and phonology are interconnected. Although syntax and phonology constitute different language domains, it seems undisputed that they relate to each other in nontrivial ways. There are different theories about the syntax–phonology interface. They differ in how far each domain is seen as relevant to generalizations in the other domain, and in the types of information from each domain that are available to the other.
Some theories see the interface as unlimited in the direction and types of syntax–phonology connections, with syntax impacting on phonology and phonology impacting on syntax. Other theories constrain mutual interaction to a set of specific syntactic phenomena (i.e., discourse-related) that may be influenced by a limited set of phonological phenomena (namely, heaviness and rhythm). In most theories, there is an asymmetrical relationship: specific types of syntactic information are available to phonology, whereas syntax is phonology-free.
The role that syntax plays in phonology, as well as the types of syntactic information that are relevant to phonology, is also a matter of debate. At one extreme, Direct Reference Theories claim that phonological phenomena, such as external sandhi processes, refer directly to syntactic information. However, approaches arguing for a direct influence of syntax differ on the types of syntactic information needed to account for phonological phenomena, from syntactic heads and structural configurations (like c-command and government) to feature checking relationships and phase units. The precise syntactic information that is relevant to phonology may depend on (the particular version of) the theory of syntax assumed to account for syntax–phonology mapping. At the other extreme, Prosodic Hierarchy Theories propose that syntactic and phonological representations are fundamentally distinct and that the output of the syntax–phonology interface is prosodic structure. Under this view, phonological phenomena refer to the phonological domains defined in prosodic structure. The structure of phonological domains is built from the interaction of a limited set of syntactic information with phonological principles related to constituent size, weight, and eurhythmic effects, among others. The kind of syntactic information used in the computation of prosodic structure distinguishes between different Prosodic Hierarchy Theories: the relation-based approach makes reference to notions like head-complement, modifier-head relations, and syntactic branching, while the end-based approach focuses on edges of syntactic heads and maximal projections. Common to both approaches is the distinction between lexical and functional categories, with the latter being invisible to the syntax–phonology mapping. Besides accounting for external sandhi phenomena, prosodic structure interacts with other phonological representations, such as metrical structure and intonational structure.
As shown by the theoretical diversity, the study of the syntax–phonology interface raises many fundamental questions. A systematic comparison among proposals with reference to empirical evidence is lacking. In addition, findings from language acquisition and development and language processing constitute novel sources of evidence that need to be taken into account. The syntax–phonology interface thus remains a challenging research field in the years to come.
Article
Erich R. Round
The non-Pama-Nyungan, Tangkic languages were spoken until recently in the southern Gulf of Carpentaria, Australia. The most extensively documented are Lardil, Kayardild, and Yukulta. Their phonology is notable for its opaque, word-final deletion rules and extensive word-internal sandhi processes. The morphology contains complex relationships between sets of forms and sets of functions, due in part to major historical refunctionalizations, which have converted case markers into markers of tense and complementization and verbal suffixes into case markers. Syntactic constituency is often marked by inflectional concord, frequently resulting in affix stacking. Yukulta in particular possesses a rich set of inflection-marking possibilities for core arguments, including detransitivized configurations and an inverse system. These relate in interesting ways historically to argument marking in Lardil and Kayardild. Subordinate clauses are marked for tense across most constituents other than the subject, and such tense marking is also found in main clauses in Lardil and Kayardild, which have lost the agreement and tense-marking second-position clitic of Yukulta. Under specific conditions of co-reference between matrix and subordinate arguments, and under certain discourse conditions, clauses may be marked, on all or almost all words, by complementization markers, in addition to inflection for case and tense.
Article
This article introduces two phenomena that are studied within the domain of templatic morphology—clippings and word-and-pattern morphology, where the latter is usually associated with Semitic morphology. In both cases, the words are of invariant shape, sharing a prosodic structure defined in terms of number of syllables. This prosodic template, being the core of the word structure, is often accompanied by one or more of the following properties: syllable structure, vocalic pattern, and an affix. The data in this article, drawn from different languages, display the various ways in which these structural properties are combined to determine the surface structure of the word. The invariant shape of Japanese clippings (e.g., suto ← sutoraiki ‘strike’) consists of a prosodic template alone, while that of English hypocoristics (e.g., Trudy ← Gertrude) consists of a prosodic template plus the suffix -i. The Arabic verb classes, such as class-I (e.g., sakan ‘to live’) and class-II (e.g., misek ‘to hold’), display a prosodic template plus a vocalic pattern, and the Hebrew verb class-III (e.g., hivdil ‘to distinguish’) displays a prosodic template, a vocalic pattern, and a prefix. Given these structural properties, the relation between a base and its derived form is expressed in terms of stem modification, which involves truncation (for the prosodic template) and melodic overwriting (for the vocalic pattern). The discussion in this article suggests that templatic morphology is not limited to a particular stratum of the lexicon (core or periphery) but displays different degrees of restrictiveness.
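The two stem-modification operations named above—truncation to a prosodic template and melodic overwriting of a vocalic pattern—can be sketched as a toy program. This is purely illustrative, not an analysis from the article: the function names are ours, the mora counting is a deliberate oversimplification (each vowel letter counts as one mora, ignoring long vowels, codas, and real syllabification), and the output form siken is a made-up stem used only to show the overwriting mechanism.

```python
# Toy sketch of templatic stem modification (illustrative only).

VOWELS = "aeiou"

def clip(base: str, n_moras: int) -> str:
    """Truncate `base` to a prosodic template of `n_moras` (C)V units.
    Simplification: every vowel letter counts as exactly one mora."""
    out, moras = [], 0
    for ch in base:
        out.append(ch)
        if ch in VOWELS:
            moras += 1
            if moras == n_moras:
                break
    return "".join(out)

def overwrite_melody(stem: str, melody: str) -> str:
    """Replace the stem's vowels, left to right, with the vocalic pattern;
    vowels beyond the melody's length are left unchanged."""
    pattern = iter(melody)
    return "".join(next(pattern, ch) if ch in VOWELS else ch for ch in stem)

# Japanese-style clipping: a two-mora prosodic template alone.
print(clip("sutoraiki", 2))             # -> "suto"

# Vocalic-pattern overwriting: mapping the class-I shape sakan onto an
# i-e melody yields the (hypothetical) form "siken".
print(overwrite_melody("sakan", "ie"))  # -> "siken"
```

The division of labor mirrors the article's description: the template determines how much segmental material survives, while the melody supplies the vowel qualities independently of the base.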
Article
Paul de Lacy
Phonology has both a taxonomic/descriptive and cognitive meaning. In the taxonomic/descriptive context, it refers to speech sound systems. As a cognitive term, it refers to a part of the brain’s ability to produce and perceive speech sounds. This article focuses on research in the cognitive domain.
The brain does not simply record speech sounds and “play them back.” It abstracts over speech sounds, and transforms the abstractions in nontrivial ways. Phonological cognition is about what those abstractions are, and how they are transformed in perception and production.
There are many theories about phonological cognition. Some theories see it as the result of domain-general mechanisms, such as analogy over a Lexicon. Other theories locate it in an encapsulated module that is genetically specified, and has innate propositional content. In production, this module takes as its input phonological material from a Lexicon, and refers to syntactic and morphological structure in producing an output, which involves nontrivial transformation. In some theories, the output is instructions for articulator movement, which result in speech sounds; in other theories, the output goes to the Phonetic module. In perception, a continuous acoustic signal is mapped onto a phonetic representation, which is then mapped onto underlying forms via the Phonological module, which are then matched to lexical entries.
Exactly which empirical phenomena phonological cognition is responsible for depends on the theory. At one extreme, it accounts for all human speech sound patterns and realization. At the other extreme, it is little more than a way of abstracting over speech sounds. In the most popular Generative conception, it explains some sound patterns, with other modules (e.g., the Lexicon and Phonetic module) accounting for others. There are many types of patterns, with names such as “assimilation,” “deletion,” and “neutralization”—a great deal of phonological research focuses on determining which patterns there are, which aspects are universal and which are language-particular, and whether/how phonological cognition is responsible for them.
Phonological computation connects with other cognitive structures. In the Generative T-model, the phonological module’s input includes morphs of Lexical items along with at least some morphological and syntactic structure; the output is sent to either a Phonetic module, or directly to the neuro-motor interface, resulting in articulator movement. However, other theories propose that these modules’ computation proceeds in parallel, and that there is bidirectional communication between them.
The study of phonological cognition is a young science, so many fundamental questions remain to be answered. There are currently many different theories, and theoretical diversity over the past few decades has increased rather than consolidated. In addition, new research methods have been developed and older ones have been refined, providing novel sources of evidence. Consequently, phonological research is both lively and challenging, and is likely to remain that way for some time to come.
Article
Bert Remijsen
When the phonological form of a morpheme—a unit of meaning that cannot be decomposed further into smaller units of meaning—involves a particular melodic pattern as part of its sound shape, this morpheme is specified for tone. In view of this definition, phrase- and utterance-level melodies—also known as intonation—are not to be interpreted as instances of tone. That is, whereas the question “Tomorrow?” may be uttered with a rising melody, this melody is not tone, because it is not part of the lexical specification of the morpheme tomorrow. A language in which morphemes are specified for particular melodies is called a tone language. This does not mean that every morpheme, content word, or syllable in a tone language is specified for tone. Tonal specification can be highly restricted within the lexicon. Examples of such sparsely specified tone languages include Swedish, Japanese, and Ekagi (a language spoken in the Indonesian part of New Guinea); in these languages, only some syllables in some words are specified for tone. There are also tone languages in which every syllable of every word has a specification. Vietnamese and Shilluk (a language spoken in South Sudan) illustrate this configuration. Tone languages also vary greatly in terms of the inventory of phonological tone forms. The smallest possible inventory contrasts one specification with the absence of specification, but there are also tone languages with eight or more distinctive tone categories. The primary physical (acoustic) correlate of the tone categories is fundamental frequency (F0), which is perceived as pitch. However, other phonetic correlates are often involved as well, in particular voice quality. Tone plays a prominent role in the study of phonology because of its structural complexity.
That is, in many languages, the way a tone surfaces is conditioned by factors such as the segmental composition of the morpheme, the tonal specifications of surrounding constituents, morphosyntax, and intonation. On top of this, tone is diachronically unstable. This means that, when a language has tone, we can expect to find considerable variation between dialects, and more of it than in relation to other parts of the sound system.
Article
Alexis Michaud and Bonny Sands
Tonogenesis is the development of distinctive tone from earlier non-tonal contrasts. A well-understood case is Vietnamese (similar in its essentials to that of Chinese and many languages of the Tai-Kadai and Hmong-Mien language families), where the loss of final laryngeal consonants led to the creation of three tones, and the tones later multiplied as voicing oppositions on initial consonants waned. This is by no means the only attested diachronic scenario, however. Besides well-known cases of tonogenesis in East Asia, this survey includes discussions of less well-known cases of tonogenesis from language families including Athabaskan, Chadic, Khoe, and Niger-Congo. There is tonogenetic potential in various series of phonemes: glottalized versus plain consonants, unvoiced versus voiced, aspirated versus unaspirated, geminates versus simple (and, more generally, tense versus lax), and even among vowels, whose intrinsic fundamental frequency can transphonologize to tone. We draw attention to tonogenetic triggers that are not so well known, such as [+ATR] vowels, aspirates, and morphotonological alternations. The ways in which these common phonetic precursors to tone play out in a given language depend on phonological factors, as well as on other dimensions of a language’s structure and on patterns of language contact, resulting in a great diversity of evolutionary paths in tone systems. In some language families (such as Niger-Congo and Khoe), recent tonal developments are increasingly well understood, but working out the origin of the earliest tonal contrasts (which are likely to date back thousands of years earlier than tonogenesis among Sino-Tibetan languages, for instance) remains a mid- to long-term research goal for comparative-historical research.