Linguistic change not only affects the lexicon and the phonology of words, it also operates on the grammar of a language. In this context, grammaticalization is concerned with the development of lexical items into markers of grammatical categories or, more generally, with the development of markers used for procedural cueing of abstract relationships out of linguistic items with concrete referential meaning. A well-known example is the English verb go in its function as a future marker, as in She is going to visit her friend. Phenomena like these are very frequent across the world’s languages and across many different domains of grammatical categories. In the last 50 years, research on grammaticalization has produced a plethora of (a) generalizations, (b) models of how grammaticalization works, and (c) methodological refinements.
On (a): Processes of grammaticalization develop gradually, step by step, and the sequence of the individual stages follows certain clines, as generalized from cross-linguistic comparison (unidirectionality). Even though there are counterexamples that go against the directionality of various clines, their number seems to be smaller than was assumed in the late 1990s.
On (b): Models or scenarios of grammaticalization integrate various factors. Depending on the theoretical background, grammaticalization and its results are motivated either by the competing motivations of economy vs. iconicity/explicitness in functional typology or by a change from movement to merger in the minimalist program. Pragmatic inference is of central importance for initiating processes of grammaticalization (and maybe also at later stages), and it activates mechanisms like reanalysis and analogy, whose status is controversial in the literature. Finally, grammaticalization not only works within individual languages/varieties, it also operates across languages. In situations of contact, the existence of a certain grammatical category in one language may induce grammaticalization in another.
On (c): Even though it is hard to measure degrees of grammaticalization in terms of absolute and exact figures, it is possible to determine relative degrees of grammaticalization in terms of the autonomy of linguistic signs. Moreover, more recent research has come up with criteria for distinguishing grammaticalization and lexicalization (defined as the loss of productivity, transparency, and/or compositionality of formerly productive, transparent, and compositional structures).
In spite of these findings, there are still quite a number of questions that need further research. Two questions to be discussed address basic issues concerning the overall properties of grammaticalization. (1) What is the relation between constructions and grammaticalization? In the more traditional view, constructions are seen as the syntactic framework within which linguistic items are grammaticalized. In more recent approaches based on construction grammar, constructions are defined as combinations of form and meaning. Thus, grammaticalization can be seen in the light of constructionalization, i.e., the creation of new combinations of form and meaning. Even though constructionalization covers many aspects of grammaticalization, it does not exhaustively cover the domain. (2) Is grammaticalization cross-linguistically homogeneous, or is there a certain range of variation? There is evidence from East and mainland Southeast Asia that there is cross-linguistic variation to some extent.
Irit Meir and Oksana Tkachman
Iconicity is a relationship of resemblance or similarity between the two aspects of a sign: its form and its meaning. An iconic sign is one whose form resembles its meaning in some way. The opposite of iconicity is arbitrariness. In an arbitrary sign, the association between form and meaning is based solely on convention; there is nothing in the form of the sign that resembles aspects of its meaning. The Hindu-Arabic numerals 1, 2, 3 are arbitrary, because their current form does not correspond to any aspect of their meaning. In contrast, the Roman numerals I, II, III are iconic, because the number of occurrences of the sign I correlates with the quantity that the numerals represent. Because iconicity has to do with the properties of signs in general and not only those of linguistic signs, it plays an important role in the field of semiotics—the study of signs and signaling. However, language is the most pervasive symbolic communicative system used by humans, and the notion of iconicity plays an important role in characterizing the linguistic sign and linguistic systems. Iconicity is also central to the study of literary uses of language, such as prose and poetry.
There are various types of iconicity, as the form of a sign may resemble aspects of its meaning in several ways: it may create a mental image of the concept (imagic iconicity), or its structure and the arrangement of its elements may resemble the structural relationship between components of the concept represented (diagrammatic iconicity). An example of the first type is the word cuckoo, whose sounds resemble the call of the bird, or a sign such as RABBIT in Israeli Sign Language, whose form—the hands representing the rabbit's long ears—resembles a visual property of that animal. An example of diagrammatic iconicity is vēnī, vīdī, vīcī, where the order of clauses in a discourse is understood as reflecting the sequence of events in the world.
Iconicity is found on all linguistic levels: phonology, morphology, syntax, semantics, and discourse. It is found both in spoken languages and in sign languages. However, sign languages, because of the visual-gestural modality through which they are transmitted, are much richer in iconic devices, and therefore offer a rich array of topics and perspectives for investigating iconicity, and the interaction between iconicity and language structure.
Kimi Akita and Mark Dingemanse
Ideophones, also termed mimetics or expressives, are marked words that depict sensory imagery. They are found in many of the world’s languages, and sizable lexical classes of ideophones are particularly well-documented in the languages of Asia, Africa, and the Americas. Ideophones are not limited to onomatopoeia like meow and smack but cover a wide range of sensory domains, such as manner of motion (e.g., plisti plasta ‘splish-splash’ in Basque), texture (e.g., tsaklii ‘rough’ in Ewe), and psychological states (e.g., wakuwaku ‘excited’ in Japanese). Across languages, ideophones stand out as marked words due to special phonotactics, expressive morphology including certain types of reduplication, and relative syntactic independence, in addition to production features like prosodic foregrounding and common co-occurrence with iconic gestures.
Three intertwined issues have been repeatedly debated in the century-long literature on ideophones. (a) Definition: Isolated descriptive traditions and cross-linguistic variation have sometimes obscured a typologically unified view of ideophones, but recent advances show the promise of a prototype definition of ideophones as conventionalized depictions in speech, with room for language-specific nuances. (b) Integration: The variable integration of ideophones across linguistic levels reveals an interaction between expressiveness and grammatical integration, and has important implications for how to conceive of dependencies between linguistic systems. (c) Iconicity: Ideophones form a natural laboratory for the study of iconic form-meaning associations in natural languages, and converging evidence from corpus and experimental studies suggests important developmental, evolutionary, and communicative advantages of ideophones.
The Kiowa-Tanoan family is a small group of Native American languages of the Plains and Pueblo Southwest. It comprises Kiowa, of the eponymous Plains tribe, and the pueblo-based Tanoan languages, Jemez (Towa), Tewa, and Northern and Southern Tiwa. These free-word-order languages display a number of typologically unusual characteristics that have rightly attracted attention within a range of subdisciplines and theories.
One word of Taos (my construction based on Kontak and Kunkel’s work) illustrates. In tóm-múlu-wia ‘I gave him/her a drum,’ the verb wia ‘gave’ obligatorily incorporates its object, múlu ‘drum.’ The agreement prefix tóm encodes not only object number, but also the identities of agent and recipient as first and third singular, respectively, all in a single syllable. Moreover, the object number here is not singular, but “inverse”: singular for some nouns, plural for others (tóm-músi-wia only has the plural object reading ‘I gave him/her cats’).
This article presents a comparative overview of the three areas just illustrated: from morphosemantics, inverse marking and noun class; from morphosyntax, super-rich fusional agreement; and from syntax, incorporation. The second of these also touches on aspects of morphophonology, the family’s three-tone system and its unusually heavy grammatical burden, and on further syntax, obligatory passives. Together, these provide a wide window on the grammatical wealth of this fascinating family.
Young-mee Yu Cho
Due to a number of unusual and interesting properties, Korean phonetics and phonology have been generating productive discussion within modern linguistic theories, starting from structuralism, moving to classical generative grammar, and more recently to post-generative frameworks including Autosegmental Theory, Government Phonology, and Optimality Theory, among others. In addition, it has become clear that important issues in Korean phonology cannot be properly described without reference to the interface between phonetics and phonology on the one hand, and between phonology and morpho-syntax on the other. Some phonological issues from Standard Korean are still under debate and will likely be of value in helping to elucidate universal phonological properties with regard to phonation contrast, vowel and consonant inventories, consonantal markedness, and the motivation for prosodic organization in the lexicon.
As might be expected from the difficulty of traversing it, the Sahara Desert has been a fairly effective barrier to direct contact between its two edges; trans-Saharan language contact is limited to the borrowing of non-core vocabulary, minimal from south to north and mostly mediated by education from north to south. Its own inhabitants, however, are necessarily accustomed to travelling desert spaces, and contact between languages within the Sahara has accordingly often had a much greater impact. Several peripheral Arabic varieties of the Sahara retain morphology as well as vocabulary from the languages spoken by their speakers’ ancestors, in particular Berber in the southwest and Beja in the southeast; the same is true of at least one Saharan Hausa variety. The Berber languages of the northern Sahara have in turn been deeply affected by centuries of bilingualism in Arabic, borrowing core vocabulary and some aspects of morphology and syntax. The Northern Songhay languages of the central Sahara have been even more profoundly affected by a history of multilingualism and language shift involving Tuareg, Songhay, Arabic, and other Berber languages, much of which remains to be unraveled. These languages have borrowed so extensively that they retain barely a few hundred core words of Songhay vocabulary; those loans have not only introduced new morphology but in some cases replaced old morphology entirely. In the southeast, the spread of Arabic westward from the Nile Valley has created a spectrum of varieties with varying degrees of local influence; the Saharan ones remain almost entirely undescribed. Much work remains to be done throughout the region, not only on identifying and analyzing contact effects but even simply on describing the languages its inhabitants speak.
Phonological learnability deals with the formal properties of phonological languages and grammars, which are combined with algorithms that attempt to learn the language-specific aspects of those grammars. The classical learning task can be outlined as follows: Beginning at a predetermined initial state, the learner is exposed to positive evidence of legal strings and structures from the target language, and its goal is to reach a predetermined end state, where the grammar will produce or accept all and only the target language’s strings and structures. In addition, a phonological learner must also acquire a set of language-specific representations for morphemes, words and so on—and in many cases, the grammar and the representations must be acquired at the same time.
Phonological learnability research seeks to determine how the architecture of the grammar, and the workings of an associated learning algorithm, influence success in completing this learning task, i.e., in reaching the end-state grammar. One basic question is about convergence: Is the learning algorithm guaranteed to converge on an end-state grammar, or will it never stabilize? Is there a class of initial states, or a kind of learning data (evidence), which can prevent a learner from converging? Next is the question of success: Assuming the algorithm will reach an end state, will it match the target? In particular, will the learner ever acquire a grammar that deems grammatical a superset of the target language’s legal outputs? How can the learner avoid such superset end-state traps? Are learning biases advantageous or even crucial to success?
In assessing phonological learnability, the analyst also has many differences between potential learning algorithms to consider. At the core of any algorithm is its update rule, meaning its method(s) of changing the current grammar on the basis of evidence. Other key aspects of an algorithm include how it is triggered to learn, how it processes and/or stores the errors that it makes, and how it responds to noise or variability in the learning data. Ultimately, the choice of algorithm is also tied to the type of phonological grammar being learned, i.e., whether the generalizations being learned are couched within rules, features, parameters, constraints, rankings, and/or weightings.
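The algorithmic core of this learning task can be made concrete with a minimal sketch, assuming a Harmonic Grammar-style grammar of weighted constraints and a perceptron-style, error-driven update rule; the constraint names, candidates, and violation counts below are invented toy data for illustration, not drawn from any particular language or published analysis.

```python
# Illustrative sketch only: an error-driven learner for a toy weighted-constraint
# grammar. All constraints, candidates, and violation counts are hypothetical.

CONSTRAINTS = ["NoCoda", "Max", "Dep"]

# One learning datum: an input, its observed (target) output, and competing
# candidates with their violation counts, ordered as in CONSTRAINTS.
DATA = [
    {
        "input": "/pat/",
        "target": "pa",                      # the form attested in the learning data
        "candidates": {
            "pat":  [1, 0, 0],               # faithful form: violates NoCoda
            "pa":   [0, 1, 0],               # deletion: violates Max
            "pata": [0, 0, 1],               # epenthesis: violates Dep
        },
    },
]

def harmony(weights, violations):
    """Harmony = negative weighted sum of violations (higher is better)."""
    return -sum(w * v for w, v in zip(weights, violations))

def learn(data, epochs=50, rate=0.1):
    weights = [1.0] * len(CONSTRAINTS)       # predetermined initial state
    for _ in range(epochs):
        for datum in data:
            cands = datum["candidates"]
            # The current grammar's winner is the candidate with highest harmony.
            predicted = max(cands, key=lambda c: harmony(weights, cands[c]))
            target = datum["target"]
            if predicted != target:
                # Update rule: promote constraints violated by the wrong winner,
                # demote constraints violated by the target form.
                for i in range(len(weights)):
                    weights[i] += rate * (cands[predicted][i] - cands[target][i])
                    weights[i] = max(weights[i], 0.0)   # keep weights non-negative
    return weights

print(learn(DATA))   # in this toy case, Max ends up weighted below NoCoda and Dep
```

On this toy data the learner converges after a single error, but the same skeleton exposes the questions raised above: whether the weights stabilize, whether the end-state grammar overgenerates relative to the target, and how a bias (here, the uniform initial weights and the non-negativity clamp) shapes the outcome.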
Nora C. England
Mayan languages are spoken by over 5 million people in Guatemala, Mexico, Belize, and Honduras. There are around 30 different languages today, ranging in size from fairly large (about a million speakers) to very small (fewer than 30 speakers). All Mayan languages are endangered given that at least some children in some communities are not learning the language, and two languages have disappeared since European contact. Mayas developed the most elaborated and most widely attested writing system in the Americas (starting about 300 BC).
The sounds of Mayan languages consist of a voiceless stop and affricate series with corresponding glottalized stops (either implosive or ejective) and affricates, glottal stop, voiceless fricatives (including, in some of them, an h inherited from Proto-Maya), two to three nasals, three to four approximants, and a five-vowel system with contrasting vowel length (or tense/lax distinctions) in most languages. Several languages have developed contrastive tone.
The major word classes in Mayan languages include nouns, verbs, adjectives, positionals, and affect words. The difference between transitive and intransitive verbs is rigidly maintained in most languages. The two classes usually, but not always, use the same aspect markers. Intransitive verbs indicate only their subjects, while transitive verbs indicate both subjects and objects. Some languages have a set of status suffixes that differs for the two classes. Positionals are a root class whose most characteristic word form is a non-verbal predicate. Affect words indicate impressions of sounds, movements, and activities. Nouns have a number of different subclasses defined on the basis of their characteristics when possessed or the structure of compounds. Adjectives are formed from a small class of roots (under 50), along with many forms derived from verbs and positionals.
Predicate types are transitive, intransitive, and non-verbal. Non-verbal predicates are based on nouns, adjectives, positionals, numbers, demonstratives, and existential and locative particles. They are distinct from verbs in that they do not take the usual verbal aspect markers. Mayan languages are head marking and verb initial; most have flexible VOA order, but some have rigid VAO order. They are morphologically ergative and also have at least some rules that show syntactic ergativity. The most common of these is a constraint on the extraction of subjects of transitive verbs (ergative) for focus and/or interrogation, negation, or relativization. In addition, some languages make a distinction between agentive and non-agentive intransitive verbs. Some can also be shown to use obviation and inverse as important organizing principles. Voice categories include passive, antipassive, and agent focus, as well as an applicative with several different functions.
Matthew K. Gordon
Metrical structure refers to the phonological representations capturing the prominence relationships between syllables, usually manifested phonetically as differences in levels of stress. There is considerable diversity in the range of stress systems found cross-linguistically, although attested patterns represent a small subset of those that are logically possible. Stress systems may be broadly divided into two groups, based on whether or not they are sensitive to the internal structure, or weight, of syllables, with further subdivisions based on the number of stresses per word and the location of those stresses. An ongoing debate in metrical stress theory concerns the role of constituency in characterizing stress patterns. Certain approaches capture stress directly in terms of a metrical grid in which more prominent syllables are associated with a greater number of grid marks than less prominent syllables. Others assume the foot as a constituent, although such theories differ in the inventory of feet they posit. Support for foot-based theories of stress comes from segmental alternations that are explicable with reference to the foot but do not readily emerge in an apodal framework. Computational tools are increasingly being incorporated in the evaluation of phonological theories, including metrical stress theories. Computer-generated factorial typologies provide a rigorous means for determining the fit between the empirical coverage afforded by metrical theories and the typology of attested stress systems. Computational simulations also enable assessment of the learnability of metrical representations within different theories.
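The idea behind a computer-generated factorial typology can be illustrated with a small sketch that enumerates every ranking of a toy constraint set and collects the distinct stress patterns those rankings predict; the constraints, candidates, and violation profiles are invented for illustration and do not represent any particular foot inventory or published constraint set.

```python
# Illustrative sketch only: a brute-force factorial typology over a toy
# constraint set. All constraints, candidates, and violations are hypothetical.

from itertools import permutations

CONSTRAINTS = ["Trochee", "Iamb", "NonFinality"]

# One toy input (a disyllable) with two stress candidates; each violation
# vector follows the order of CONSTRAINTS.
CANDIDATES = {
    "σσ": {
        "ˈσσ": (0, 1, 0),    # initial stress: violates Iamb
        "σˈσ": (1, 0, 1),    # final stress: violates Trochee and NonFinality
    },
}

def winner(ranking, cands):
    """Standard OT evaluation: filter candidates constraint by constraint."""
    remaining = list(cands)
    for c in ranking:
        i = CONSTRAINTS.index(c)
        best = min(cands[cand][i] for cand in remaining)
        remaining = [cand for cand in remaining if cands[cand][i] == best]
        if len(remaining) == 1:
            break
    return remaining[0]

# The factorial typology: the set of distinct input-to-winner mappings
# ("languages") predicted across all rankings of the constraint set.
typology = set()
for ranking in permutations(CONSTRAINTS):
    language = tuple((inp, winner(ranking, cands)) for inp, cands in CANDIDATES.items())
    typology.add(language)

for language in sorted(typology):
    print(language)   # each line is one predicted stress pattern
```

In this toy case the six rankings collapse into two predicted languages (initial vs. final stress); comparing such a predicted set against the attested typology is what allows over- and undergeneration by a metrical theory to be measured.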
Phonotactics is the study of restrictions on possible sound sequences in a language. In any language, some phonotactic constraints can be stated without reference to morphology, but many of the more nuanced phonotactic generalizations do make use of morphosyntactic and lexical information. At the most basic level, many languages mark edges of words in some phonological way. Different phonotactic constraints hold of sounds that belong to the same morpheme as opposed to sounds that are separated by a morpheme boundary. Different phonotactic constraints may apply to morphemes of different types (such as roots versus affixes). There are also correlations between the phonotactic shape of morphemes and the morphosyntactic and phonological rules they follow, correlations that may in turn track syntactic category, declension class, or etymological origin.
Approaches to the interaction between phonotactics and morphology address two questions: (1) how to account for rules that are sensitive to morpheme boundaries and structure, and (2) how to determine the status of phonotactic constraints associated with only some morphemes. Theories differ as to how much morphological information phonology is allowed to access. In some theories of phonology, any reference to the specific identities or subclasses of morphemes would exclude a rule from the domain of phonology proper. These rules are either part of the morphology or are not given the status of a rule at all. Other theories allow the phonological grammar to refer to detailed morphological and lexical information. Depending on the theory, phonotactic differences between morphemes may receive direct explanations or be seen as the residue of historical change and not something that constitutes grammatical knowledge in the speaker’s mind.
It has been an ongoing issue within generative linguistics how to properly analyze morpho-phonological processes. Morpho-phonological processes typically have exceptions, but nonetheless they are often productive. Such productive, but exceptionful, processes are difficult to analyze, since grammatical rules or constraints are normally invoked in the analysis of a productive pattern, whereas exceptions undermine the validity of the rules and constraints. In addition, productivity of a morpho-phonological process may be gradient, possibly reflecting the relative frequency of the relevant pattern in the lexicon. Simple lexical listing of exceptions as suppletive forms would not be sufficient to capture such gradient productivity of a process with exceptions. It is then necessary to posit grammatical rules or constraints even for exceptionful processes as long as they are at least in part productive. Moreover, the productivity can be correctly estimated only when the domain of rule application is correctly identified. Consequently, a morpho-phonological process cannot be properly analyzed unless we possess both the correct description of its application conditions and the appropriate stochastic grammatical mechanisms to capture its productivity.
The same issues arise in the analysis of morpho-phonological processes in Korean, in particular, n-insertion, sai-siot, and vowel harmony. These morpho-phonological processes have many exceptions and variations, which make them look quite irregular and unpredictable. However, they have at least a certain degree of productivity. Moreover, the variable application of each process is still systematic in that various factors (phonological, morphosyntactic, sociolinguistic, and processing-related) contribute to the overall probability of rule application. Crucially, grammatical rules and constraints, which have been proposed within generative linguistics to analyze categorical and exceptionless phenomena, may form an essential part of the analysis of the morpho-phonological processes in Korean.
For an optimal analysis of each of the morpho-phonological processes in Korean, the correct conditions and domains for its application need to be identified first, and its exact productivity can then be measured. Finally, the appropriate stochastic grammatical mechanisms need to be found or developed in order to capture the measured productivity.
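The logic of that last step can be illustrated with a minimal sketch: once the eligible contexts of a variable process (say, n-insertion) have been identified, its productivity can be estimated as the rate of application over those contexts, optionally broken down by conditioning factor. The counts and factor labels below are invented toy data, not actual Korean corpus figures.

```python
# Illustrative sketch only: estimating the productivity of a variable
# morpho-phonological process as its application rate over eligible contexts.
# The tokens and factor labels are hypothetical toy data.

from collections import defaultdict

# Each record: (conditioning factor, did the process apply?) for one token
# that meets the structural description of the rule.
TOKENS = [
    ("native+native compound", True),
    ("native+native compound", True),
    ("native+native compound", False),
    ("Sino-Korean compound",   True),
    ("Sino-Korean compound",   False),
    ("Sino-Korean compound",   False),
    ("loanword compound",      False),
]

def productivity(tokens):
    """Return the overall application rate and per-factor rates."""
    overall = sum(applied for _, applied in tokens) / len(tokens)
    by_factor = defaultdict(lambda: [0, 0])        # factor -> [applied, eligible]
    for factor, applied in tokens:
        by_factor[factor][0] += applied
        by_factor[factor][1] += 1
    rates = {f: a / n for f, (a, n) in by_factor.items()}
    return overall, rates

overall, rates = productivity(TOKENS)
print(f"overall application rate: {overall:.2f}")
for factor, rate in rates.items():
    print(f"  {factor}: {rate:.2f}")
```

Such application rates are exactly the quantities that a stochastic grammatical mechanism (for example, weighted or probabilistic constraints) would then be asked to reproduce; note how mis-identifying the eligible contexts (the denominator) would distort the estimate.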
D. H. Whalen
The Motor Theory of Speech Perception is a proposed explanation of the fundamental relationship between the way speech is produced and the way it is perceived. Associated primarily with the work of Liberman and colleagues, it posited the active participation of the motor system in the perception of speech. Early versions of the theory contained elements that later proved untenable, such as the expectation that the neural commands to the muscles (as seen in electromyography) would be more invariant than the acoustics. Support drawn from categorical perception (in which discrimination is quite poor within linguistic categories but excellent across boundaries) was called into question by studies showing means of improving within-category discrimination and finding similar results for nonspeech sounds and for animals perceiving speech. Evidence for motor involvement in perceptual processes nonetheless continued to accrue, and related motor theories have been proposed. Neurological and neuroimaging results have yielded a great deal of evidence consistent with variants of the theory, but they highlight the issue that there is no single “motor system,” and so different components appear in different contexts. Assigning the appropriate amount of effort to the various systems that interact to result in the perception of speech is an ongoing process, but it is clear that some of the systems will reflect the motor control of speech.
Howard Lasnik and Terje Lohndal
Noam Avram Chomsky is one of the central figures of modern linguistics. He was born in Philadelphia, Pennsylvania, on December 7, 1928. In 1945, Chomsky enrolled in the University of Pennsylvania, where he met Zellig Harris (1909–1992), a leading Structuralist, through their shared political interests. His first encounter with Harris’s work came when he proofread Harris’s book Methods in Structural Linguistics, published in 1951 but completed as early as 1947. Chomsky grew dissatisfied with Structuralism and started to develop his own major idea that syntax and phonology are in part matters of abstract representations. This was soon combined with a psychobiological view of language as a unique part of the mind/brain.
Chomsky spent 1951–1955 as a Junior Fellow of the Harvard Society of Fellows, after which he joined the faculty at MIT under the sponsorship of Morris Halle. He was promoted to full professor of Foreign Languages and Linguistics in 1961, appointed Ferrari Ward Professor of Linguistics in 1966, and Institute Professor in 1976, retiring in 2002. Chomsky is still remarkably active, publishing, teaching, and lecturing across the world.
In 1967, both the University of Chicago and the University of London awarded him honorary degrees, and since then he has been the recipient of scores of honors and awards. In 1988, he was awarded the Kyoto Prize in basic science, created in 1984 in order to recognize work in areas not included among the Nobel Prizes. These honors are all testimony to Chomsky’s influence and impact in linguistics and cognitive science more generally over the past 60 years. His contributions have of course also been heavily criticized, but they nevertheless remain crucial to investigations of language.
Chomsky’s work has always centered on the same basic questions and assumptions, especially that human language is an inherent property of the human mind. The technical part of his research has continuously been revised and updated. In the 1960s phrase structure grammars were developed into what is known as the Standard Theory, which transformed into the Extended Standard Theory and X-bar theory in the 1970s. A major transition occurred at the end of the 1970s, when the Principles and Parameters Theory emerged. This theory provides a new understanding of the human language faculty, focusing on the invariant principles common to all human languages and the points of variation known as parameters. Its recent variant, the Minimalist Program, pushes the approach even further in asking why grammars are structured the way they are.
Within the Ryukyuan branch of the Japonic family of languages, present-day Okinawan retains numerous regional variants which have evolved for over a thousand years in the Ryukyuan Archipelago. Okinawan is one of the six Ryukyuan languages that UNESCO identified as endangered. One of its theoretically fascinating features is that there is substantial evidence for establishing a high central phonemic vowel in Okinawan, although there is currently no overt surface [ï]. Moreover, the word-initial glottal stop [ʔ] in Okinawan is more salient than its counterpart in Japanese when followed by vowels, which makes it possible to recognize all Okinawan words as consonant-initial. Except for a few particles, all Okinawan words are composed of two or more morae. Suffixation or vowel lengthening (on nouns, verbs, and adjectives) provides the means for signifying persons as well as things related to human consumption or production. Every finite verb in Okinawan terminates with a mood element. Okinawan exhibits a complex interplay of mood or negative elements and focusing particles. Evidentiality is also realized as an obligatory verbal suffix.
Old English (OE) is a cover term for a variety of dialects spoken in Britain ca. 5th–11th century. Most of the manuscripts on which the descriptive handbook tradition relies date from the latter part of the period. These late OE manuscripts were produced in Wessex and show a degree of uniformity interrupted by the Norman Conquest of 1066. Middle English (ME) covers roughly 1050–1500. The early part of the period, ca. pre-1350, is marked by great diversity of scribal practices; it is only in late ME that some degree of orthographic regularity can be observed.
The consonantal system of OE differs from the Modern English system. Consonantal length was contrastive, there were no affricates, no voicing contrast for the fricatives [f, θ, s], no phonemic velar nasal [ŋ], and [h-] loss was under way. In the vocalic system, OE shows changes that identify it as a separate branch of Germanic: Proto-Germanic (PrG) ē₁ > OE ǣ/ē, PrG ai > OE ā, PrG au > OE ēa. The non-low short vowels of OE are reconstructed as non-peripheral, differing from the corresponding long vowels both in quality and quantity. The so-called “short” diphthongs usually posited for OE suggest a case for which a strict binary taxonomy is inapplicable to the data. The OE long vowels and diphthongs were unstable, producing a number of important mergers, including /iː - yː/, /eː - øː/, /ɛː - ɛə/. In addition to shifts in height and frontness, the stressed vowels were subject to a series of quantity adjustments that resulted in increased predictability of vowel length. The changes that jointly contribute to this are homorganic cluster lengthening, ME open syllable lengthening, and pre-consonantal and trisyllabic shortening. The final unstressed vowels of ME were gradually lost, resulting in the adoption of word-final <-e> as a diacritic marker for vowel length.
Stress-assignment was based on a combination of morphological and prosodic criteria: root-initial stress was obligatory irrespective of syllable weight, while affixal stress was also sensitive to weight. Verse evidence allows the reconstruction of left-prominent compound stress; there is also some early evidence for the formation of clitic groups. Reconstruction of patterns on higher prosodic levels—phrasal and intonational contours—is hampered by lack of testable evidence.
Old and Middle Japanese are the pre-modern periods of the attested history of the Japanese language. Old Japanese (OJ) is largely the language of the 8th century, with a modest but still significant number of written sources, most of which are poetry. Middle Japanese is divided into two distinct periods, Early Middle Japanese (EMJ, 800–1200) and Late Middle Japanese (LMJ, 1200–1600). EMJ saw most of the significant sound changes that took place in the language, as well as profound influence from Chinese, whereas most grammatical changes took place between the end of EMJ and the end of LMJ. By the end of LMJ, the Japanese language had reached a form that is not significantly different from present-day Japanese.
OJ phonology was simple, both in terms of phoneme inventory and syllable structure, with a total of only 88 different syllables. In EMJ, the language became quantity sensitive, with the introduction of a distinction between long and short syllables. OJ and EMJ had obligatory verb inflection for a number of modal and syntactic categories (including an important distinction between a conclusive and an (ad)nominalizing form), whereas the expression of aspect and tense was optional. Through late EMJ and LMJ this system changed completely to one without nominalizing inflection, but with obligatory inflection for tense.
The morphological pronominal system of OJ was lost in EMJ, which developed a range of lexical and lexically based terms of speaker and hearer reference. OJ had a two-way (speaker–nonspeaker) demonstrative system, which in EMJ was replaced by a three-way (proximal–mesial–distal) system.
OJ had a system of differential object marking, based on specificity, as well as a word order rule that placed accusative marked objects before most subjects; both of these features were lost in EMJ. OJ and EMJ had genitive subject marking in subordinate clauses and in focused, interrogative and exclamative main clauses, but no case marking of subjects in declarative, optative, or imperative main clauses and no nominative marker. Through LMJ genitive subject marking was gradually circumscribed and a nominative case particle was acquired which could mark subjects in all types of clauses.
OJ had a well-developed system of complex predicates, in which two verbs jointly formed the predicate of a single clause, which is the source of the LMJ and NJ (Modern Japanese) verb–verb compound complex predicates. OJ and EMJ also had mono-clausal focus constructions that functionally were similar to clefts in English; these constructions were lost in LMJ.
D. H. Whalen
Phonetics is the branch of linguistics that deals with the physical realization of meaningful distinctions in spoken language. Phoneticians study the anatomy and physics of sound generation, acoustic properties of the sounds of the world’s languages, the features of the signal that listeners use to perceive the message, and the brain mechanisms involved in both production and perception. Therefore, phonetics connects most directly to phonology and psycholinguistics, but it also engages a range of disciplines that are not unique to linguistics, including acoustics, physiology, biomechanics, hearing, evolution, and many others. Early theorists assumed that phonetic implementation of phonological features was universal, but it has become clear that languages differ in their phonetic spaces for phonological elements, with systematic differences in acoustics and articulation. Such language-specific details place phonetics solidly in the domain of linguistics; any complete description of a language must include its specific phonetic realization patterns. The description of what phonetic realizations are possible in human language continues to expand as more languages are described; many of the under-documented languages are endangered, lending urgency to the phonetic study of the world’s languages.
Phonetic analysis can consist of transcription, acoustic analysis, measurement of speech articulators, and perceptual tests, with recent advances in brain imaging adding detail at the level of neural control and processing. Because of its dual nature as a component of a linguistic system and a set of actions in the physical world, phonetics has connections to many other branches of linguistics, including not only phonology but syntax, semantics, sociolinguistics, and clinical linguistics as well. Speech perception has been shown to integrate information from both vision and tactile sensation, indicating an embodied system. Sign language, though primarily visual, has adopted the term “phonetics” to represent the realization component, highlighting the linguistic nature both of phonetics and of sign language. Such diversity offers many avenues for studying phonetics, but it presents challenges to forming a comprehensive account of any language’s phonetic system.
Susan Rvachew and Abdulsalam Alhaidary
Babbling is made up of meaningless speechlike syllables called canonical syllables. Canonical syllables are characterized by the coordination of consonantal and vocalic elements in syllables that have speechlike timing, phonation, and resonance characteristics. Infants begin to babble on average at approximately seven months of age. Babbling continues in parallel with less mature noncanonical vocalizations that make up the majority of utterances through the first year. Babbling also continues in parallel with the emergence of meaningful speech during the second year. Regardless of the language that the infant is learning, most canonical syllables have a CV shape with the consonant being a labial or alveolar stop or nasal and the vowel most likely to be central or low- to mid-front in place (e.g., [bʌ], [da], [mæ]). Approximately 15% of canonical utterances consist of multisyllable strings; in other words, most babbled utterances contain only a single CV syllable. The onset of the canonical babbling stage is crucially dependent upon normal hearing, permitting access to language input and feedback of self-produced speech. Many studies have reported differences in the phonetic and acoustic characteristics of babble produced by infants learning different languages. These differences include the frequency with which certain consonants are produced, the location, size, and shape of the vowel space, and the rhythmic and intonation qualities of multisyllable babbles, in each case reflecting specificities of the input language. However, replications of these findings are rare and further research is required to better understand the learning mechanisms that underlie language specific acquisition of articulatory representations during the prelinguistic stage of vocal development.
Susanne Fuchs and Peter Birkholz
Consonants are a major class of sounds occurring in all human languages. Typologically, consonant inventories are richer than vowel inventories. Consonants have been classified according to four basic features. Airstream mechanism is one of these features and describes the direction of airflow in or out of the oral cavity. The outgoing airflow is further separated according to its origin, that is, air coming from the lungs (pulmonic) or the oral cavity (non-pulmonic). Consonants are also grouped according to their phonological voicing contrast, which can be manifested phonetically by the presence or absence of vocal fold oscillations during the oral closure/constriction phase and by the duration from an oral closure release to the onset of voicing. Place of articulation is the third feature and refers to the location at which a consonantal constriction or closure is produced in the vocal tract. Finally, manner of articulation reflects different timing and coordinated actions of the articulators closely tied to aerodynamic properties.
The function of the voice organ is basically the same in classical singing as in speech. However, loud orchestral accompaniment has necessitated the use of the voice in an economical way. As a consequence, the vowel sounds tend to deviate considerably from those in speech. Male voices cluster formants three, four, and five, so that a marked peak is produced in the spectrum envelope near 3,000 Hz. This helps them to be heard over a loud orchestral accompaniment. They seem to achieve this effect by widening the lower pharynx, which makes the vowels more centralized than in speech. Singers often sing at fundamental frequencies higher than the normal first formant frequency of the vowel in the lyrics. In such cases they raise the first formant frequency so that it becomes somewhat higher than the fundamental frequency. This is achieved by reducing the degree of vocal tract constriction or by widening the lip and jaw openings, constricting the vocal tract at the pharyngeal end and widening it in the mouth. These deviations from speech cause difficulties in vowel identification, particularly at high fundamental frequencies. In fact, vowel identification is almost impossible above 700 Hz (pitch F5).
Another great difference between vocal sound produced in speech and the classical singing tradition concerns female voices, which need to reduce the timbral differences between voice registers. Females normally speak in modal or chest register, and the transition to falsetto tends to happen somewhere above 350 Hz. The great timbral differences between these registers are avoided by establishing control over the register function, that is, over the vocal fold vibration characteristics, so that seamless transitions are achieved.
In many other respects, there are more or less close similarities between speech and singing. Thus, marking phrase structure, emphasizing important events, and emotional coloring are common principles, which may make vocal artists deviate considerably from the score’s nominal description of fundamental frequency and syllable duration.