Article
Acquisition of L1 Phonology in the Romance Languages
Yvan Rose, Laetitia Almeida, and Maria João Freitas
The field of study on the acquisition of phonological productive abilities by first-language learners in the Romance languages has been largely focused on three main languages: French, Portuguese, and Spanish, including various dialects of these languages spoken in Europe as well as in the Americas. In this article, we provide a comparative survey of this literature, with an emphasis on representational phonology. We also include in our discussion observations from the development of Catalan and Italian, and mention areas where these languages, as well as Romanian, another major Romance language, would provide welcome additions to our cross-linguistic comparisons. Together, the various studies we summarize reveal intricate patterns of development, in particular concerning the acquisition of consonants across different positions within the syllable, the word, and in relation to stress, documented in both monolingual and bilingual first-language learners. The patterns observed across the different languages and dialects can generally be traced to formal properties of phone distributions, as entailed by mainstream theories of phonological representation, with variations also predicted by more functional aspects of speech, including phonetic factors and usage frequency. These results call for further empirical studies of phonological development, in particular concerning Romanian, in addition to Catalan and Italian, whose phonological and phonetic properties offer compelling grounds for the formulation and testing of models of phonology and phonological development.
Article
Arthur Abramson
Philip Rubin
Arthur Seymour Abramson (1925–2017) was an American linguist who was prominent in the international experimental phonetics research community. He was best known for his pioneering work, with Leigh Lisker, on voice onset time (VOT), and for his many years spent studying tone and voice quality in languages such as Thai. Born and raised in Jersey City, New Jersey, Abramson served several years in the Army during World War II. Upon his return to civilian life, he attended Columbia University (BA, 1950; PhD, 1960). There he met Franklin Cooper, an adjunct who taught acoustic phonetics while also working for Haskins Laboratories. Abramson started working on a part-time basis at Haskins and remained affiliated with the institution until his death. For his doctoral dissertation (1962), he studied the vowels and tones of the Thai language, which would sit at the heart of his research and travels for the rest of his life. He would expand his investigations to include various languages and dialects, such as Pattani Malay and the Kuai dialect of Suai, a Mon-Khmer language. Abramson began his collaboration with University of Pennsylvania linguist Leigh Lisker at Haskins Laboratories in the 1960s. Using their unique VOT technique, a sensitive measure of the articulatory timing between an occlusion in the vocal tract and the beginning of phonation (characterized by the onset of vibration of the vocal folds), they studied the voicing distinctions of various languages. Their long-standing collaboration continued until Lisker’s death in 2006. Abramson and colleagues often made innovative use of state-of-the-art tools and technologies in their work, including transillumination of the larynx in running speech, X-ray movies of speakers of several languages and dialects, electroglottography, and articulatory speech synthesis.
Abramson’s career was also notable for the academic and scientific service roles that he assumed, including membership on the council of the International Phonetic Association (IPA), and as a coordinator of the effort to revise the International Phonetic Alphabet at the IPA’s 1989 Kiel Convention. He was also editor of the journal Language and Speech, and took on leadership roles at the Linguistic Society of America and the Acoustical Society of America. He was the founding Chair of the Linguistics Department at the University of Connecticut, which became a hotbed for research in experimental phonetics in the 1970s and 1980s because of its many affiliations with Haskins Laboratories. He also served for many years as a board member at Haskins, and Secretary of both the Board and the Haskins Corporation, where he was a friend and mentor to many.
Article
Articulatory Phonology
Marianne Pouplier
One of the most fundamental problems in research on spoken language is to understand how the categorical, systemic knowledge that speakers have in the form of a phonological grammar maps onto the continuous, high-dimensional physical speech act that transmits the linguistic message. The invariant units of phonological analysis have no invariant analogue in the signal—any given phoneme can manifest itself in many possible variants, depending on context, speech rate, utterance position and the like, and the acoustic cues for a given phoneme are spread out over time across multiple linguistic units. Speakers and listeners are highly knowledgeable about the lawfully structured variation in the signal and they skillfully exploit articulatory and acoustic trading relations when speaking and perceiving. For the scientific description of spoken language, understanding this association between abstract, discrete categories and continuous speech dynamics remains a formidable challenge. Articulatory Phonology and the associated Task Dynamic model present one particular proposal on how to step up to this challenge using the mathematics of dynamical systems, with the central insight being that spoken language is fundamentally based on the production and perception of linguistically defined patterns of motion. In Articulatory Phonology, primitive units of phonological representation are called gestures. Gestures are defined based on linear second-order differential equations, giving them inherent spatial and temporal specifications. Gestures control the vocal tract at a macroscopic level, harnessing the many degrees of freedom in the vocal tract into low-dimensional control units. Phonology, in this model, thus directly governs the spatial and temporal orchestration of vocal tract actions.
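To make the dynamical specification concrete, a gesture is standardly modeled in Task Dynamic presentations as a damped mass-spring system; the notation below is illustrative only, with x a tract variable (e.g., lip aperture), x_0 the gesture's target, and m, b, and k the mass, damping, and stiffness parameters:

$$ m\ddot{x} + b\dot{x} + k(x - x_{0}) = 0 $$

Under critical damping (b^2 = 4mk), the tract variable moves smoothly toward its target and settles there without oscillating, which is the behavior usually assumed for speech gestures.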
Article
Bracketing Paradoxes in Morphology
Heather Newell
Bracketing paradoxes—constructions whose morphosyntactic and morpho-phonological structures appear to be irreconcilably at odds (e.g., unhappier)—are unanimously taken to point to truths about the derivational system that we have not yet grasped. Consider that the prefix un- must be structurally separate in some way from happier both for its own reasons (its [n] surprisingly does not assimilate in Place to a following consonant (e.g., u[n]popular)), and for reasons external to the prefix (the suffix -er must be insensitive to the presence of un-, as the comparative cannot attach to bases of three syllables or longer (e.g., *intelligenter)). But un- must simultaneously be present in the derivation before -er is merged, so that unhappier can have the proper semantic reading (‘more unhappy’, and not ‘not happier’). Bracketing paradoxes emerged as a problem for generative accounts of both morphosyntax and morphophonology only in the 1970s. With the rise of restrictions on, and of the technology used to describe and represent, the behavior of affixes (e.g., the Affix-Ordering Generalization, Lexical Phonology and Morphology, the Prosodic Hierarchy), morphosyntacticians and phonologists were confronted with this type of inconsistent derivation in many unrelated languages.
Article
Chinese Syllable Structure
Jisheng Zhang
Chinese is generally considered a monosyllabic language in that one Chinese character corresponds to one syllable and vice versa, and most characters can be used as free morphemes, although there is a tendency for words to be disyllabic. On the one hand, the syllable structure of Chinese is simple, as far as permissible sequences of segments are concerned. On the other hand, complexities arise where the status of the prenuclear glide is concerned and with respect to the phonotactic constraints holding between the segments. The syllabic affiliation of the prenuclear glide in the maximal CGVX Chinese syllable structure has long been a controversial issue.
Traditional Chinese phonology divides the syllable into shengmu (C) and yunmu, the latter consisting of medial (G), nucleus (V), and coda (X), which is either a high vowel (i/u) or a nasal (n/ŋ). This is known as the sheng-yun model, which translates to initial-final (IF for short) in English. The traditional Chinese IF syllable model differs from the onset-rhyme (OR) syllable structure model in several aspects. In the former, the initial consists of only one consonant, excluding the glide, and the final—that is, everything after the initial consonant—is not the poetic rhyming unit, which excludes the prenuclear glide; in the latter, the onset includes the glide, and the rhyme—that is, everything after the onset—is the poetic rhyming unit.
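As a minimal sketch (the function names and the sample syllable are mine, purely for illustration), the following Python fragment shows how the two models carve up a maximal CGVX syllable such as guan differently:

```python
# Toy illustration: initial-final (IF) vs. onset-rhyme (OR) parses of a
# maximal CGVX syllable, e.g., "guan" (C = g, G = u, V = a, X = n).

def parse_if(c, g, v, x):
    """Sheng-yun (IF) model: initial = C alone; final = G + V + X."""
    return {"initial": c, "final": g + v + x}

def parse_or(c, g, v, x):
    """Onset-rhyme (OR) model: onset = C + G; rhyme = V + X, the poetic rhyming unit."""
    return {"onset": c + g, "rhyme": v + x}

print(parse_if("g", "u", "a", "n"))  # {'initial': 'g', 'final': 'uan'}
print(parse_or("g", "u", "a", "n"))  # {'onset': 'gu', 'rhyme': 'an'}
```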
The Chinese traditional IF syllable model is problematic in itself. First, the final is ternary branching, which is not compatible with the binary principle in contemporary linguistics. Second, the nucleus+coda, as the poetic rhyming unit, is not structured as a constituent. Accordingly, the question arises of whether Chinese syllables can be analyzed in the OR model.
Many attempts have been made to analyze the Chinese prenuclear glide in the light of current phonological theories, particularly in the OR model, based on phonetic and phonological data on Chinese. Some such studies have proposed that the prenuclear glide occupies the second position in the onset. Others have proposed that the glide is part of the nucleus. Yet others regard the glide as a secondary articulation of the onset consonant, while still others think of the glide as an independent branch directly linked to the syllable node. Also, some have proposed an IF model with initial for shengmu and final for yunmu, the latter branching binarily into G(lide) and R(hyme), which in turn consists of N(ucleus) and C(oda). What is more, some have put forward a universal X-bar model of the syllable to replace the OR model, based on syntactic X-bar structure. So far, no authoritative finding has conclusively settled the question of Chinese syllable structure.
Moreover, the syllable is the cross-linguistic domain for phonotactics. The number of syllables in Chinese is very much smaller than that in many other languages, mainly because of the complicated phonotactics of the language, which strictly govern the segmental relations within CGVX. In the X-bar syllable structure, the Chinese phonotactic constraints which configure segmental relations in the syllable domain mirror the theta rules which capture the configurational relations between specifier and head, and between head and complement, in syntax. On the whole, analysis of the complexities of the Chinese syllable will shed light on the cross-linguistic representation of syllable structure, making a significant contribution to phonological typology in general.
Article
Computational Phonology
Jane Chandlee and Jeffrey Heinz
Computational phonology studies the nature of the computations necessary and sufficient for characterizing phonological knowledge. As a field it is informed by the theories of computation and phonology.
The computational nature of phonological knowledge is important because at a fundamental level it is about the psychological nature of memory as it pertains to phonological knowledge. Different types of phonological knowledge can be characterized as computational problems, and the solutions to these problems reveal their computational nature. In contrast to syntactic knowledge, there is clear evidence that phonological knowledge is computationally bounded by the so-called regular classes of sets and relations. These classes have multiple mathematical characterizations in terms of logic, automata, and algebra, with significant implications for the nature of memory. In fact, there is evidence that phonological knowledge is bounded by particular subregular classes, with more restrictive logical, automata-theoretic, and algebraic characterizations, and thus by weaker models of memory.
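As a minimal illustration of what a subregular characterization looks like in practice, the sketch below (a toy strictly 2-local grammar with an invented constraint, not drawn from the article) evaluates words against a finite set of forbidden adjacent pairs, so that only one previous symbol ever needs to be remembered:

```python
# Toy strictly 2-local (SL-2) phonotactic grammar: well-formedness reduces to
# checking adjacent pairs of symbols, so a one-symbol memory window suffices.
FORBIDDEN_BIGRAMS = {("n", "b")}   # invented constraint: no [n] immediately before [b]

def well_formed(word):
    padded = ["#"] + list(word) + ["#"]          # mark word boundaries
    for prev, curr in zip(padded, padded[1:]):   # scan left to right
        if (prev, curr) in FORBIDDEN_BIGRAMS:
            return False
    return True

print(well_formed("amba"))  # True
print(well_formed("anba"))  # False
```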
Article
Connectionism in Linguistic Theory
Xiaowei Zhao
Connectionism is an important theoretical framework for the study of human cognition and behavior. Also known as Parallel Distributed Processing (PDP) or Artificial Neural Networks (ANN), connectionism advocates that learning, representation, and processing of information in the mind are parallel, distributed, and interactive in nature. It argues for the emergence of human cognition as the outcome of large networks of interactive processing units operating simultaneously. Inspired by findings from neuroscience and artificial intelligence, connectionism is a powerful computational tool, and it has had a profound impact on many areas of research, including linguistics. Since the beginning of connectionism, many connectionist models have been developed to account for a wide range of important linguistic phenomena observed in monolingual research, such as speech perception, speech production, semantic representation, and early lexical development in children. Recently, the application of connectionism to bilingual research has also gathered momentum. Connectionist models are often precise in the specification of modeling parameters and flexible in the manipulation of relevant variables in the model to address relevant theoretical questions; they can therefore provide significant advantages in testing mechanisms underlying language processes.
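As a rough, self-contained illustration (toy weights, no training, not a model from the literature), the sketch below shows the basic computation a connectionist network performs: each unit takes a weighted sum of its inputs and passes it through a nonlinear activation, with all units in a layer computed in parallel:

```python
import numpy as np

def sigmoid(z):
    """Standard logistic activation function."""
    return 1.0 / (1.0 + np.exp(-z))

# Toy network: 3 input units -> 2 hidden units -> 1 output unit.
W_hidden = np.array([[0.5, -0.3, 0.8],
                     [0.1,  0.7, -0.6]])
W_output = np.array([[1.2, -0.9]])

x = np.array([1.0, 0.0, 1.0])        # one input pattern
hidden = sigmoid(W_hidden @ x)       # all hidden units computed in parallel
output = sigmoid(W_output @ hidden)  # distributed hidden pattern mapped to an output
print(hidden, output)
```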
Article
Morphologization and the Boundary Between Morphology and Phonology in the Romance Languages
Paul O'Neill
This article analyses, from a Romance perspective, the concept of morphologization and seeks to answer the following question: At what point does a historically proven phonological cause-and-effect relationship, whereby phonological feature X causes and determines phonological feature Y, cease to hold, and at what point does the dephonologized Y element come to stand as a marker of some morphological distinction? The question is relevant both to cases in which the original phonological conditioning element is still present and to cases in which it has disappeared. I explain that the answer to this question depends entirely on one’s conception of morphology and phonology. I argue against theories that adhere to the principle of lexical minimization and have a static conception of morphology, which is restricted to the concatenation of idiosyncratic morphemes. These theories are forced by their theoretical underpinnings, which are often ideological and not supported by robust empirical evidence, to explain morphologized phenomena as being synchronically derived by phonology. This approach comes at a huge cost: the model of phonology is endowed with powerful tools to make the analysis fit the theory, which ultimately diminishes the empirical content and plausibility of the phonological hypotheses; such approaches also pose serious problems for language acquisition and learning. I argue for more dynamic and abstractive models of morphology, which do not impose strict restrictions on lexical storage. I ultimately view morphologization as an instance of morphologically conditioned phonology and maintain that there is no strict boundary between phonology and morphology; rather, the two systems overlap and interact.
I analyze data and phonological explanations of metaphony in nouns and verbs in Italo-Romance, plural formation in Spanish and Portuguese, the distribution of velar allomorphy in Italian and Spanish verbs, and the distribution of verbal stress in Surmiran Romansh and Spanish. With reference to the latter, the contribution dedicates significant space to exploring the extent to which the diphthong/monophthong alternation in Spanish and the different types of allomorphy in Surmiran Romansh are a matter of phonologically conditioned allomorphy or of morphologically conditioned phonology.
Article
Morphology and Phonotactics
Maria Gouskova
Phonotactics is the study of restrictions on possible sound sequences in a language. In any language, some phonotactic constraints can be stated without reference to morphology, but many of the more nuanced phonotactic generalizations do make use of morphosyntactic and lexical information. At the most basic level, many languages mark the edges of words in some phonological way. Different phonotactic constraints hold of sounds that belong to the same morpheme as opposed to sounds that are separated by a morpheme boundary. Different phonotactic constraints may apply to morphemes of different types (such as roots versus affixes). There are also correlations between the phonotactic shape of a morpheme and the morphosyntactic and phonological rules it follows, which may in turn correlate with syntactic category, declension class, or etymological origin.
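As a toy illustration of a boundary-sensitive constraint (the constraint and the forms are invented for exposition, not taken from the article), the sketch below bans identical adjacent segments only when they belong to the same morpheme, so a heteromorphemic sequence like the [n]+[n] of un+natural passes while a hypothetical tautomorphemic geminate fails:

```python
def violates_identity_ban(morphemes):
    """morphemes: a word given as a list of morphemes, each a list of phoneme symbols."""
    for m in morphemes:                  # only tautomorphemic sequences are checked
        for a, b in zip(m, m[1:]):
            if a == b:                   # identical adjacent segments within one morpheme
                return True
    return False

print(violates_identity_ban([["ʌ", "n"], ["n", "æ", "tʃ", "ɹ", "ə", "l"]]))  # False: the sequence spans a boundary
print(violates_identity_ban([["b", "ɪ", "n", "n", "i"]]))                    # True: hypothetical morpheme-internal geminate
```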
Approaches to the interaction between phonotactics and morphology address two questions: (1) how to account for rules that are sensitive to morpheme boundaries and structure, and (2) how to determine the status of phonotactic constraints associated with only some morphemes. Theories differ as to how much morphological information phonology is allowed to access. In some theories of phonology, any reference to the specific identities or subclasses of morphemes would exclude a rule from the domain of phonology proper. These rules are either part of the morphology or are not given the status of a rule at all. Other theories allow the phonological grammar to refer to detailed morphological and lexical information. Depending on the theory, phonotactic differences between morphemes may receive direct explanations or be seen as the residue of historical change and not something that constitutes grammatical knowledge in the speaker’s mind.
Article
Morphology in Japonic Languages
Taro Kageyama
Due to the agglutinative character of Japanese and Ryukyuan, their morphology is predominantly concatenative, as seen in garden-variety word formation processes such as compounding, prefixation, suffixation, and inflection, though nonconcatenative morphology like clipping, blending, and reduplication is also available and sometimes interacts with concatenative word formation. The formal simplicity of the principal morphological devices is counterbalanced by their complex interaction with syntax and semantics as well as by the intricate interactions of four lexical strata (native, Sino-Japanese, foreign, and mimetic) with particular morphological processes. A wealth of phenomena is adduced that pertain to central issues in theories of morphology, such as the demarcation between words and phrases; the feasibility of the lexical integrity principle; the controversy over lexicalism and syntacticism; the distinction between morpheme-based and word-based morphology; the effects of the stage-level vs. individual-level distinction on the applicability of morphological rules; the interface of morphology, syntax, semantics, and pragmatics; and the role of conjugation and inflection in predicate agglutination. In particular, the formation of compound and complex verbs/adjectives takes place in both lexical and syntactic structures, and the compound and complex predicates thus formed are further followed in syntax by suffixal predicates representing grammatical categories like causative, passive, negation, and politeness as well as inflections of tense and mood to form a long chain of predicate complexes. In addition, an array of morphological objects—bound root, word, clitic, nonindependent word or fuzoku-go, and (for Japanese) word plus—participate productively in word formation. The close association of morphology and syntax in Japonic languages thus demonstrates that morphological processes are spread over lexical and syntactic structures, whereas words are equipped with the distinct property of morphological integrity, which distinguishes them from syntactic phrases.
Article
Morphology and Tone
Irina Monich
Tone is indispensable for understanding many morphological systems of the world. Tonal phenomena may serve the morphological needs of a language in a variety of ways: segmental affixes may be specified for tone just like roots are; affixes may have purely tonal exponents that associate to segmental material provided by other morphemes; affixes may consist of tonal melodies, or “templates”; and tonal processes may apply in a way that is sensitive to morphosyntactic boundaries, delineating word-internal structure.
Two behaviors set tonal morphemes apart from other kinds of affixes: their mobility and their ability to apply phrasally (i.e., beyond the limits of the word). Both floating tones and tonal templates can apply to words that are either phonologically grouped with the word containing the tonal morpheme or syntactically dependent on it.
Problems generally associated with featural morphology are even more acute in regard to tonal morphology because of the vast diversity of tonal phenomena and the versatility with which the human language faculty puts pitch to use. The ambiguity associated with assigning a proper role to tone in a given morphological system necessitates placing further constraints on our theory of grammar. Perhaps more than any other morphological phenomena, grammatical tone exposes an inadequacy in our understanding both of the relationship between phonological and morphological modules of grammar and of the way that phonology may reference morphological information.
Article
The Motor Theory of Speech Perception
D. H. Whalen
The Motor Theory of Speech Perception is a proposed explanation of the fundamental relationship between the way speech is produced and the way it is perceived. Associated primarily with the work of Liberman and colleagues, it posited the active participation of the motor system in the perception of speech. Early versions of the theory contained elements that later proved untenable, such as the expectation that the neural commands to the muscles (as seen in electromyography) would be more invariant than the acoustics. Support drawn from categorical perception (in which discrimination is quite poor within linguistic categories but excellent across boundaries) was called into question by studies showing means of improving within-category discrimination and finding similar results for nonspeech sounds and for animals perceiving speech. Evidence for motor involvement in perceptual processes nonetheless continued to accrue, and related motor theories have been proposed. Neurological and neuroimaging results have yielded a great deal of evidence consistent with variants of the theory, but they highlight the issue that there is no single “motor system,” and so different components appear in different contexts. Assigning the appropriate amount of effort to the various systems that interact to result in the perception of speech is an ongoing process, but it is clear that some of the systems will reflect the motor control of speech.
Article
The Phonology of Compounds
Irene Vogel
A number of recent developments in phonological theory, beginning with The Sound Pattern of English, are particularly relevant to the phonology of compounds. They address both the phonological phenomena that apply to compound words and the phonological structures that are required as the domains of these phenomena: segmental and nonsegmental phenomena that operate within each member of a compound separately, as well as at the juncture between the members of compounds and throughout compounds as a whole. In all cases, what is crucial for the operation of the phonological phenomena of compounds is phonological structure, in terms of constituents of the Prosodic Hierarchy, as opposed to morphosyntactic structure. Specifically, only two phonological constituents are required: the Phonological Word, which provides the domain for phenomena that apply to the individual members of compounds and at their junctures, and a larger constituent that groups the members of compounds together. The nature of the latter is somewhat controversial, the main issue being whether or not there is a constituent in the Prosodic Hierarchy between the Phonological Word and the Phonological Phrase. When present, this constituent, the Composite Group (revised from the original Clitic Group), includes the members of compounds, as well as “stray” elements such as clitics and “Level 2” affixes. In its absence, compounds, and often the same “stray” elements, are analyzed as a type of Recursive Phonological Word, although crucially, the combinations of such elements do not exhibit the same properties as the basic Phonological Word.
Article
Subtraction in Morphology
Stela Manova
Subtraction consists in shortening the shape of the word. It operates on morphological bases such as roots, stems, and words in word-formation and inflection. Cognitively, subtraction is the opposite of affixation, since the latter adds meaning and form (an overt affix) to roots, stems, or words, while the former adds meaning through subtraction of form. As subtraction and affixation work at the same level of grammar (morphology), they sometimes compete for the expression of the same semantics in the same language; for example, the pattern ‘science—scientist’ in German has derivations such as Physik ‘physics’—Physik-er ‘physicist’ and Astronom-ie ‘astronomy’—Astronom ‘astronomer’. Subtraction can delete phonemes and morphemes. In the case of phoneme deletion, it is usually the final phoneme of a morphological base that is deleted, and sometimes that phoneme can coincide with a morpheme.
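A minimal sketch (the function and the treatment of the deleted material as an orthographic string are mine, for illustration only) of subtraction as deletion of final material, using the Astronomie–Astronom pair cited above:

```python
def subtract(base, material):
    """Return the base minus its final material, if the base ends in that material."""
    return base[: -len(material)] if base.endswith(material) else base

print(subtract("Astronomie", "ie"))  # Astronom ('astronomer', derived by deleting final form)
print(subtract("Physik", "ie"))      # Physik (unchanged; 'physicist' is derived by affixation, Physik-er)
```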
Some analyses of subtraction(-like shortenings) rely not on morphological units (roots, stems, morphological words, affixes) but on the phonological word, which sometimes results in alternative definitions of subtraction. Additionally, syntax-based theories of morphology that do not recognize a morphological component of grammar and operate only with additive syntactic rules claim that subtraction actually consists in the addition of defective phonological material that causes adjustments in phonology and leads to deletion of form on the surface. Other scholars postulate subtraction only if the deleted material does not coincide with an existing morpheme elsewhere in the language; if it does, they call the change backformation. There is also some controversy regarding what counts as a proper word-formation process and whether what is derived by subtraction is true word-formation or just marginal or extragrammatical morphology; that is, the question is whether shortenings such as hypocoristics and clippings should be treated on a par with derivations such as the science-scientist pattern.
Finally, research in subtraction also faces terminology issues in the sense that in the literature different labels have been used to refer to subtraction(-like) formations: minus feature, minus formation, disfixation, subtractive morph, (subtractive) truncation, backformation, or just shortening.
Article
Syntax–Phonology Interface
Sónia Frota and Marina Vigário
The syntax–phonology interface refers to the way syntax and phonology are interconnected. Although syntax and phonology constitute different language domains, it seems undisputed that they relate to each other in nontrivial ways. There are different theories about the syntax–phonology interface. They differ in how far each domain is seen as relevant to generalizations in the other domain, and in the types of information from each domain that are available to the other.
Some theories see the interface as unlimited in the direction and types of syntax–phonology connections, with syntax impacting on phonology and phonology impacting on syntax. Other theories constrain mutual interaction to a set of specific syntactic phenomena (i.e., discourse-related) that may be influenced by a limited set of phonological phenomena (namely, heaviness and rhythm). In most theories, there is an asymmetrical relationship: specific types of syntactic information are available to phonology, whereas syntax is phonology-free.
The role that syntax plays in phonology, as well as the types of syntactic information that are relevant to phonology, is also a matter of debate. At one extreme, Direct Reference Theories claim that phonological phenomena, such as external sandhi processes, refer directly to syntactic information. However, approaches arguing for a direct influence of syntax differ on the types of syntactic information needed to account for phonological phenomena, from syntactic heads and structural configurations (like c-command and government) to feature checking relationships and phase units. The precise syntactic information that is relevant to phonology may depend on (the particular version of) the theory of syntax assumed to account for syntax–phonology mapping. At the other extreme, Prosodic Hierarchy Theories propose that syntactic and phonological representations are fundamentally distinct and that the output of the syntax–phonology interface is prosodic structure. Under this view, phonological phenomena refer to the phonological domains defined in prosodic structure. The structure of phonological domains is built from the interaction of a limited set of syntactic information with phonological principles related to constituent size, weight, and eurhythmic effects, among others. The kind of syntactic information used in the computation of prosodic structure distinguishes between different Prosodic Hierarchy Theories: the relation-based approach makes reference to notions like head-complement, modifier-head relations, and syntactic branching, while the end-based approach focuses on edges of syntactic heads and maximal projections. Common to both approaches is the distinction between lexical and functional categories, with the latter being invisible to the syntax–phonology mapping. Besides accounting for external sandhi phenomena, prosodic structure interacts with other phonological representations, such as metrical structure and intonational structure.
As shown by the theoretical diversity, the study of the syntax–phonology interface raises many fundamental questions. A systematic comparison among proposals with reference to empirical evidence is lacking. In addition, findings from language acquisition and development and language processing constitute novel sources of evidence that need to be taken into account. The syntax–phonology interface thus remains a challenging research field in the years to come.
Article
Topicalization in the Romance Languages
Silvio Cruschina
Topic and topicalization are key notions for understanding processes of syntactic and prosodic readjustment in Romance. More specifically, topicalization refers to the syntactic mechanisms and constructions available in a language to mark an expression as the topic of the sentence. Despite the lack of a uniform definition of topic, often based on the notions of aboutness or givenness, significant advances have been made in Romance linguistics since the 1990s, yielding a better understanding of the topicalization constructions, their properties, and their grammatical correlates. Prosodically, topics are generally described as being contained in independent intonational phrases. The syntactic and pragmatic characteristics of a specific topicalization construction, by contrast, depend both on the form of resumption of the dislocated topic within the clause and on the types of topic (aboutness, given, and contrastive topics). We can thus distinguish between hanging-topic left-dislocation (HTLD) and clitic left-dislocation (ClLD) for sentence-initial topics, and clitic right-dislocation (ClRD) for sentence-final dislocated constituents. These topicalization constructions are available in most Romance languages, although variation may affect the type and the obligatory presence of the resumptive element.
Scholars working on topic and topicalization in the Romance languages have also addressed controversial issues such as the relation between topics and subjects, both grammatical (nominative) subjects and ‘oblique’ subjects such as dative experiencers and locative expressions. Moreover, topicalization has been discussed for medieval Romance, in conjunction with its alleged V2 syntactic status. Some topicalization constructions, such as subject inversion, especially in the non-null-subject Romance languages, and Resumptive Preposing, may indeed be viewed as potential residues of the medieval V2 property in contemporary Romance.