41–60 of 107 Results for: Phonetics/Phonology

Article

The Ibero-Romance-speaking Jews of medieval Christian Iberia were linguistically distinct from their non-Jewish neighbors primarily as a result of their language’s unique Hebrew-Aramaic component; preservations from older Jewish Greek, Latin, and Arabic; a tradition of translating sacred Hebrew and Aramaic texts into their language using archaisms and Hebrew-Aramaic rather than Hispanic syntax; and their Hebrew-letter writing system. With the expulsions from Iberia in the late 15th century, most of the Sephardim who continued to maintain their Iberian-origin language resettled in the Ottoman Empire, with smaller numbers in North Africa and Italy. Their forced migration, and perhaps a conscious choice, essentially disconnected the Sephardim from the Spanish language as it developed in Iberia and Latin America, causing their language—which they came to call laðino ‘Romance’, ʤuðezmo or ʤuðjó ‘Jewish, Judezmo’, and more recently (ʤudeo)espaɲol ‘Judeo-Spanish’—to appear archaic when compared with modern Spanish. In their new locales the Sephardim developed the Hispanic component of their language along independent lines, resulting in further differentiation from Spanish. Divergence was intensified through borrowing from contact languages of the Ottoman Empire such as Turkish, Greek, and South Slavic. Especially from the late 18th century, factors such as the colonializing interests of France, Italy, and Austro-Hungary in the region led to considerable influence of their languages on Judezmo. In the 20th century, the dismemberment of the Ottoman and Austro-Hungarian empires and their replacement by highly nationalistic states resulted in a massive language shift to the local languages; that shift, compounded by large speech-population losses during World War II and immigration to countries stressing linguistic homogeneity, has in recent years made Judezmo an endangered language.

Article

Daniel Harbour

The Kiowa-Tanoan family is a small group of Native American languages of the Plains and pueblo Southwest. It comprises Kiowa, of the eponymous Plains tribe, and the pueblo-based Tanoan languages, Jemez (Towa), Tewa, and Northern and Southern Tiwa. These free-word-order languages display a number of typologically unusual characteristics that have rightly attracted attention within a range of subdisciplines and theories. One word of Taos (my construction based on Kontak and Kunkel’s work) illustrates. In tóm-múlu-wia ‘I gave him/her a drum,’ the verb wia ‘gave’ obligatorily incorporates its object, múlu ‘drum.’ The agreement prefix tóm encodes not only object number, but identities of agent and recipient as first and third singular, respectively, and this all in a single syllable. Moreover, the object number here is not singular, but “inverse”: singular for some nouns, plural for others (tóm-músi-wia only has the plural object reading ‘I gave him/her cats’). This article presents a comparative overview of the three areas just illustrated: from morphosemantics, inverse marking and noun class; from morphosyntax, super-rich fusional agreement; and from syntax, incorporation. The second of these also touches on aspects of morphophonology, the family’s three-tone system and its unusually heavy grammatical burden, and on further syntax, obligatory passives. Together, these provide a wide window on the grammatical wealth of this fascinating family.

Article

Young-mee Yu Cho

Due to a number of unusual and interesting properties, Korean phonetics and phonology have generated productive discussion within modern linguistic theories, starting from structuralism, moving to classical generative grammar, and more recently to post-generative frameworks such as Autosegmental Theory, Government Phonology, and Optimality Theory. In addition, it has become clear that important issues in Korean phonology cannot be properly described without reference to the interface between phonetics and phonology on the one hand, and between phonology and morpho-syntax on the other. Some phonological issues from Standard Korean are still under debate and will likely help to elucidate universal phonological properties with regard to phonation contrast, vowel and consonant inventories, consonantal markedness, and the motivation for prosodic organization in the lexicon.

Article

As might be expected from the difficulty of traversing it, the Sahara Desert has been a fairly effective barrier to direct contact between its two edges; trans-Saharan language contact is limited to the borrowing of non-core vocabulary, minimal from south to north and mostly mediated by education from north to south. Its own inhabitants, however, are necessarily accustomed to travelling desert spaces, and contact between languages within the Sahara has often accordingly had a much greater impact. Several peripheral Arabic varieties of the Sahara retain morphology as well as vocabulary from the languages spoken by their speakers’ ancestors, in particular Berber in the southwest and Beja in the southeast; the same is true of at least one Saharan Hausa variety. The Berber languages of the northern Sahara have in turn been deeply affected by centuries of bilingualism in Arabic, borrowing core vocabulary and some aspects of morphology and syntax. The Northern Songhay languages of the central Sahara have been even more profoundly affected by a history of multilingualism and language shift involving Tuareg, Songhay, Arabic, and other Berber languages, much of which remains to be unraveled. These languages have borrowed so extensively that they retain barely a few hundred core words of Songhay vocabulary; those loans have not only introduced new morphology but in some cases replaced old morphology entirely. In the southeast, the spread of Arabic westward from the Nile Valley has created a spectrum of varieties with varying degrees of local influence; the Saharan ones remain almost entirely undescribed. Much work remains to be done throughout the region, not only on identifying and analyzing contact effects but even simply on describing the languages its inhabitants speak.

Article

Phonological learnability deals with the formal properties of phonological languages and grammars, which are combined with algorithms that attempt to learn the language-specific aspects of those grammars. The classical learning task can be outlined as follows: Beginning at a predetermined initial state, the learner is exposed to positive evidence of legal strings and structures from the target language, and its goal is to reach a predetermined end state, where the grammar will produce or accept all and only the target language’s strings and structures. In addition, a phonological learner must also acquire a set of language-specific representations for morphemes, words, and so on—and in many cases, the grammar and the representations must be acquired at the same time. Phonological learnability research seeks to determine how the architecture of the grammar, and the workings of an associated learning algorithm, influence success in completing this learning task, i.e., in reaching the end-state grammar. One basic question is about convergence: Is the learning algorithm guaranteed to converge on an end-state grammar, or will it never stabilize? Is there a class of initial states, or a kind of learning data (evidence), which can prevent a learner from converging? Next is the question of success: Assuming the algorithm will reach an end state, will it match the target? In particular, will the learner ever acquire a grammar that deems grammatical a superset of the target language’s legal outputs? How can the learner avoid such superset end-state traps? Are learning biases advantageous or even crucial to success? In assessing phonological learnability, the analyst must also weigh many differences between potential learning algorithms. At the core of any algorithm is its update rule, meaning its method(s) of changing the current grammar on the basis of evidence.
Other key aspects of an algorithm include how it is triggered to learn, how it processes and/or stores the errors that it makes, and how it responds to noise or variability in the learning data. Ultimately, the choice of algorithm is also tied to the type of phonological grammar being learned, i.e., whether the generalizations being learned are couched within rules, features, parameters, constraints, rankings, and/or weightings.
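The error-driven update rule described above can be illustrated with a small sketch. The following toy learner, not drawn from the article itself, uses a Harmonic-Grammar-style weighted-constraint grammar with a perceptron-style update: on each error, weights move so that the observed (target) form comes to beat its rival. The constraint vectors and data are hypothetical.

```python
# Toy error-driven phonological learner: Harmonic Grammar with a
# perceptron-style update rule. Constraint vectors and data are
# hypothetical; the sketch only illustrates the update-rule idea.

def harmony(weights, violations):
    # Weighted sum of constraint violations; a lower penalty is better.
    return sum(w * v for w, v in zip(weights, violations))

def learn(data, n_constraints, rate=0.1, passes=50):
    weights = [0.0] * n_constraints  # a predetermined initial state
    for _ in range(passes):
        for winner, loser in data:
            # Error-driven: update only when the current grammar fails
            # to prefer the observed winner over its rival.
            if harmony(weights, winner) >= harmony(weights, loser):
                for i in range(n_constraints):
                    # Promote constraints the loser violates more, demote
                    # those the winner violates more; weights are kept
                    # nonnegative, as is standard in Harmonic Grammar.
                    weights[i] = max(0.0,
                                     weights[i] + rate * (loser[i] - winner[i]))
    return weights

# One hypothetical datum over two constraints: the target form violates
# only constraint 1, its rival only constraint 0.
data = [((0, 1), (1, 0))]
w = learn(data, n_constraints=2)
assert harmony(w, (0, 1)) < harmony(w, (1, 0))  # the target now wins
```

Different choices at each commented step (when to update, how much, whether weights may go negative) correspond to the algorithmic differences the abstract describes.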

Article

Peter Gilles

This article provides an overview of the structure of the Luxembourgish language, the national language of the Grand Duchy of Luxembourg, which has developed from a Moselle Franconian dialect to an Ausbau language in the course of the 20th century. In the early 21st century, Luxembourgish serves several functions, mainly as a multifunctional spoken variety but also as a written language, which has achieved a medium degree of standardization. Because of its embedding in a complex multilingual situation with German and French, Luxembourgish is characterized by a high degree of language contact. As a Germanic language, Luxembourgish has developed distinct grammatical features of its own. In this article, the main aspects of phonetics and phonology (vowels, consonants, prosody, word stress), morphology (inflection of nouns, adjectives, articles and pronouns, partitive structures, prepositions, verbal system), and syntactic characteristics (complementizer agreement, word order in verbal clusters) are discussed. The lexicon is influenced to a certain degree by loanwords from French. Regarding language variation and change, recent surveys show that Luxembourgish is undergoing major changes affecting phonetics and phonology (reduction of regional pronunciations), the grammatical system (plural of nouns), and, especially, the lexical level (decrease of loans from French, increase of loans from German).

Article

Nora C. England

Mayan languages are spoken by over 5 million people in Guatemala, Mexico, Belize, and Honduras. There are around 30 different languages today, ranging in size from fairly large (about a million speakers) to very small (fewer than 30 speakers). All Mayan languages are endangered, given that at least some children in some communities are not learning the language, and two languages have disappeared since European contact. Mayas developed the most elaborated and most widely attested writing system in the Americas (starting about 300 BC). The sounds of Mayan languages consist of a voiceless stop and affricate series with corresponding glottalized stops (either implosive or ejective) and affricates, glottal stop, voiceless fricatives (including h in some of them, inherited from Proto-Maya), two to three nasals, three to four approximants, and a five-vowel system with contrasting vowel length (or tense/lax distinctions) in most languages. Several languages have developed contrastive tone. The major word classes in Mayan languages include nouns, verbs, adjectives, positionals, and affect words. The difference between transitive verbs and intransitive verbs is rigidly maintained in most languages. The two classes usually, but not always, use the same aspect markers. Intransitive verbs indicate only their subjects, while transitive verbs indicate both subjects and objects. Some languages have a set of status suffixes which is different for the two classes. Positionals are a root class whose most characteristic word form is a non-verbal predicate. Affect words indicate impressions of sounds, movements, and activities. Nouns have a number of different subclasses defined on the basis of characteristics when possessed, or the structure of compounds. Adjectives are formed from a small class of roots (under 50) and many derived forms from verbs and positionals. Predicate types are transitive, intransitive, and non-verbal.
Non-verbal predicates are based on nouns, adjectives, positionals, numbers, demonstratives, and existential and locative particles. They are distinct from verbs in that they do not take the usual verbal aspect markers. Mayan languages are head marking and verb initial; most have VOA flexible order but some have VAO rigid order. They are morphologically ergative and also have at least some rules that show syntactic ergativity. The most common of these is a constraint on the extraction of subjects of transitive verbs (ergative) for focus and/or interrogation, negation, or relativization. In addition, some languages make a distinction between agentive and non-agentive intransitive verbs. Some also can be shown to use obviation and inverse as important organizing principles. Voice categories include passive, antipassive and agent focus, and an applicative with several different functions.

Article

Matthew K. Gordon

Metrical structure refers to the phonological representations capturing the prominence relationships between syllables, usually manifested phonetically as differences in levels of stress. There is considerable diversity in the range of stress systems found cross-linguistically, although attested patterns represent a small subset of those that are logically possible. Stress systems may be broadly divided into two groups, based on whether or not they are sensitive to the internal structure, or weight, of syllables, with further subdivisions based on the number of stresses per word and the location of those stresses. An ongoing debate in metrical stress theory concerns the role of constituency in characterizing stress patterns. Certain approaches capture stress directly in terms of a metrical grid in which more prominent syllables are associated with a greater number of grid marks than less prominent syllables. Others assume the foot as a constituent, where theories differ in the inventory of feet they assume. Support for foot-based theories of stress comes from segmental alternations that are explicable with reference to the foot but do not readily emerge in an apodal framework. Computational tools are increasingly being incorporated in the evaluation of phonological theories, including metrical stress theories. Computer-generated factorial typologies provide a rigorous means for determining the fit between the empirical coverage afforded by metrical theories and the typology of attested stress systems. Computational simulations also enable assessment of the learnability of metrical representations within different theories.
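The idea of a computer-generated factorial typology can be sketched in miniature: enumerate every ranking of a constraint set, compute each ranking's winning candidate, and collect the set of predicted stress patterns. The three alignment-style constraints and the three-syllable candidates below are hypothetical simplifications, not the inventory of any particular theory.

```python
# Miniature factorial typology: which stress patterns does each ranking
# of a toy constraint set predict for a three-syllable word?
from itertools import permutations

# Candidate stress placements (ˈ marks the stressed syllable).
candidates = ["ˈσσσ", "σˈσσ", "σσˈσ"]

# Hypothetical violation counts: ALIGN-L (stress far from the left
# edge), ALIGN-R (stress far from the right edge), NONFINAL (stress on
# the final syllable).
violations = {
    "ˈσσσ": {"ALIGN-L": 0, "ALIGN-R": 2, "NONFINAL": 0},
    "σˈσσ": {"ALIGN-L": 1, "ALIGN-R": 1, "NONFINAL": 0},
    "σσˈσ": {"ALIGN-L": 2, "ALIGN-R": 0, "NONFINAL": 1},
}

def optimal(ranking):
    # Classic OT evaluation: read each candidate's violations off in
    # ranking order; lexicographic comparison picks the optimum.
    return min(candidates,
               key=lambda c: tuple(violations[c][con] for con in ranking))

# The factorial typology is the set of winners over all 3! rankings.
typology = {optimal(r)
            for r in permutations(["ALIGN-L", "ALIGN-R", "NONFINAL"])}
```

Here the 6 rankings yield initial, peninitial, and final stress; comparing such a predicted set against attested systems is exactly the fit assessment the abstract describes, just at realistic scale.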

Article

Maria Gouskova

Phonotactics is the study of restrictions on possible sound sequences in a language. In any language, some phonotactic constraints can be stated without reference to morphology, but many of the more nuanced phonotactic generalizations do make use of morphosyntactic and lexical information. At the most basic level, many languages mark edges of words in some phonological way. Different phonotactic constraints hold of sounds that belong to the same morpheme as opposed to sounds that are separated by a morpheme boundary. Different phonotactic constraints may apply to morphemes of different types (such as roots versus affixes). There are also correlations between the phonotactic shapes of morphemes and the morphosyntactic and phonological rules they follow, which may relate to syntactic category, declension class, or etymological origin. Approaches to the interaction between phonotactics and morphology address two questions: (1) how to account for rules that are sensitive to morpheme boundaries and structure, and (2) how to determine the status of phonotactic constraints associated with only some morphemes. Theories differ as to how much morphological information phonology is allowed to access. In some theories of phonology, any reference to the specific identities or subclasses of morphemes would exclude a rule from the domain of phonology proper. These rules are either part of the morphology or are not given the status of a rule at all. Other theories allow the phonological grammar to refer to detailed morphological and lexical information. Depending on the theory, phonotactic differences between morphemes may receive direct explanations or be seen as the residue of historical change and not something that constitutes grammatical knowledge in the speaker’s mind.

Article

Due to their agglutinative character, Japanese and Ryukyuan morphology is predominantly concatenative, applying to garden-variety word formation processes such as compounding, prefixation, suffixation, and inflection, though nonconcatenative morphology like clipping, blending, and reduplication is also available and sometimes interacts with concatenative word formation. The formal simplicity of the principal morphological devices is counterbalanced by their complex interaction with syntax and semantics as well as by the intricate interactions of four lexical strata (native, Sino-Japanese, foreign, and mimetic) with particular morphological processes. A wealth of phenomena is adduced that pertain to central issues in theories of morphology, such as the demarcation between words and phrases; the feasibility of the lexical integrity principle; the controversy over lexicalism and syntacticism; the distinction between morpheme-based and word-based morphology; the effects of the stage-level vs. individual-level distinction on the applicability of morphological rules; the interface of morphology, syntax, semantics, and pragmatics; and the role of conjugation and inflection in predicate agglutination. In particular, the formation of compound and complex verbs/adjectives takes place in both lexical and syntactic structures, and the compound and complex predicates thus formed are further followed in syntax by suffixal predicates representing grammatical categories like causative, passive, negation, and politeness as well as inflections of tense and mood to form a long chain of predicate complexes. In addition, an array of morphological objects—bound root, word, clitic, nonindependent word or fuzoku-go, and (for Japanese) word plus—participate productively in word formation.
The close association of morphology and syntax in Japonic languages thus demonstrates that morphological processes are spread over lexical and syntactic structures, whereas words are equipped with the distinct property of morphological integrity, which distinguishes them from syntactic phrases.

Article

Irina Monich

Tone is indispensable for understanding many morphological systems of the world. Tonal phenomena may serve the morphological needs of a language in a variety of ways: segmental affixes may be specified for tone just like roots are; affixes may have purely tonal exponents that associate to segmental material provided by other morphemes; affixes may consist of tonal melodies, or “templates”; and tonal processes may apply in a way that is sensitive to morphosyntactic boundaries, delineating word-internal structure. Two behaviors set tonal morphemes apart from other kinds of affixes: their mobility and their ability to apply phrasally (i.e., beyond the limits of the word). Both floating tones and tonal templates can apply to words that are either phonologically grouped with the word containing the tonal morpheme or syntactically dependent on it. Problems generally associated with featural morphology are even more acute in regard to tonal morphology because of the vast diversity of tonal phenomena and the versatility with which the human language faculty puts pitch to use. The ambiguity associated with assigning a proper role to tone in a given morphological system necessitates placing further constraints on our theory of grammar. Perhaps more than any other morphological phenomenon, grammatical tone exposes an inadequacy in our understanding both of the relationship between phonological and morphological modules of grammar and of the way that phonology may reference morphological information.

Article

It has been an ongoing issue within generative linguistics how to properly analyze morpho-phonological processes. Morpho-phonological processes typically have exceptions, but nonetheless they are often productive. Such productive, but exceptionful, processes are difficult to analyze, since grammatical rules or constraints are normally invoked in the analysis of a productive pattern, whereas exceptions undermine the validity of the rules and constraints. In addition, productivity of a morpho-phonological process may be gradient, possibly reflecting the relative frequency of the relevant pattern in the lexicon. Simple lexical listing of exceptions as suppletive forms would not be sufficient to capture such gradient productivity of a process with exceptions. It is then necessary to posit grammatical rules or constraints even for exceptionful processes as long as they are at least in part productive. Moreover, the productivity can be correctly estimated only when the domain of rule application is correctly identified. Consequently, a morpho-phonological process cannot be properly analyzed unless we possess both the correct description of its application conditions and the appropriate stochastic grammatical mechanisms to capture its productivity. The same issues arise in the analysis of morpho-phonological processes in Korean, in particular, n-insertion, sai-siot, and vowel harmony. Those morpho-phonological processes have many exceptions and variations, which make them look quite irregular and unpredictable. However, they have at least a certain degree of productivity. Moreover, the variable application of each process is still systematic in that various factors, phonological, morphosyntactic, sociolinguistic, and processing, contribute to the overall probability of rule application. 
Crucially, grammatical rules and constraints, which have been proposed within generative linguistics to analyze categorical and exceptionless phenomena, may form an essential part of the analysis of the morpho-phonological processes in Korean. For an optimal analysis of each of the morpho-phonological processes in Korean, the correct conditions and domains for its application need to be identified first, and its exact productivity can then be measured. Finally, the appropriate stochastic grammatical mechanisms need to be found or developed in order to capture the measured productivity.
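The notion of measuring a process's gradient productivity as relative frequency in the lexicon can be sketched very simply: count how often the process applies among the forms that meet its application conditions. The rule tags and lexicon below are hypothetical toy data, not drawn from the Korean processes the abstract discusses; the point is only the measurement itself, which presupposes that eligibility (the rule's conditions and domain) has been identified correctly.

```python
# Toy estimate of a morpho-phonological rule's gradient productivity:
# the share of eligible forms that actually undergo the process.
# Lexicon entries and tagging conventions here are hypothetical.

def productivity(lexicon, eligible, applies):
    # Forms meeting the rule's structural description (its domain).
    scope = [w for w in lexicon if eligible(w)]
    if not scope:
        return 0.0
    hits = [w for w in scope if applies(w)]
    return len(hits) / len(scope)

# Hypothetical toy lexicon: "+" marks a compound boundary (the rule's
# condition), and the ":N" tag marks forms where the process applies.
lexicon = ["a+b:N", "c+d:N", "e+f:-", "g+h:N", "ij:-"]
p = productivity(lexicon,
                 eligible=lambda w: "+" in w,
                 applies=lambda w: w.endswith(":N"))
assert abs(p - 0.75) < 1e-9  # 3 of the 4 eligible forms undergo the rule
```

Misidentifying the eligibility condition changes the denominator and hence the measured productivity, which is why the abstract stresses finding the correct application conditions and domains first.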

Article

The Motor Theory of Speech Perception is a proposed explanation of the fundamental relationship between the way speech is produced and the way it is perceived. Associated primarily with the work of Liberman and colleagues, it posited the active participation of the motor system in the perception of speech. Early versions of the theory contained elements that later proved untenable, such as the expectation that the neural commands to the muscles (as seen in electromyography) would be more invariant than the acoustics. Support drawn from categorical perception (in which discrimination is quite poor within linguistic categories but excellent across boundaries) was called into question by studies showing means of improving within-category discrimination and finding similar results for nonspeech sounds and for animals perceiving speech. Evidence for motor involvement in perceptual processes nonetheless continued to accrue, and related motor theories have been proposed. Neurological and neuroimaging results have yielded a great deal of evidence consistent with variants of the theory, but they highlight the issue that there is no single “motor system,” and so different components appear in different contexts. Assigning the appropriate amount of effort to the various systems that interact to result in the perception of speech is an ongoing process, but it is clear that some of the systems will reflect the motor control of speech.

Article

This article discusses several important phonological issues concerning subtractive processes in morphology. First, this article addresses the scope of subtractive processes that linguistic theories should be concerned with. Many subtractive processes fall in the realm of grammatical theories. Subsequently, previous processual and affixal approaches to subtractive morphology and nonconcatenative allomorphy are reviewed. Then, theoretical restrictiveness is taken up. Proponents of the affixal view often claim that it is more restrictive than the processual view, but their argument is not convincing. We do not know enough to discuss theoretical restrictiveness. Finally, earlier analyses of subtractive morphology in parallel and serial Optimality Theory are reviewed. We have not accomplished enough in this respect, so no conclusive choice of parallelism or serialism is possible at present. As a whole, there are too many unsettled matters to conclude about the nature of subtractive processes in morphology.

Article

Howard Lasnik and Terje Lohndal

Noam Avram Chomsky is one of the central figures of modern linguistics. He was born in Philadelphia, Pennsylvania, on December 7, 1928. In 1945, Chomsky enrolled in the University of Pennsylvania, where he met Zellig Harris (1909–1992), a leading Structuralist, through their shared political interests. His first encounter with Harris’s work was when he proofread Harris’s book Methods in Structural Linguistics, published in 1951 but already completed in 1947. Chomsky grew dissatisfied with Structuralism and started to develop his own major idea that syntax and phonology are in part matters of abstract representations. This was soon combined with a psychobiological view of language as a unique part of the mind/brain. Chomsky spent 1951–1955 as a Junior Fellow of the Harvard Society of Fellows, after which he joined the faculty at MIT under the sponsorship of Morris Halle. He was promoted to full professor of Foreign Languages and Linguistics in 1961, appointed Ferrari Ward Professor of Linguistics in 1966, and Institute Professor in 1976, retiring in 2002. Chomsky is still remarkably active, publishing, teaching, and lecturing across the world. In 1967, both the University of Chicago and the University of London awarded him honorary degrees, and since then he has been the recipient of scores of honors and awards. In 1988, he was awarded the Kyoto Prize in basic science, created in 1984 in order to recognize work in areas not included among the Nobel Prizes. These honors are all a testimony to Chomsky’s influence and impact in linguistics and cognitive science more generally over the past 60 years. His contributions have of course also been heavily criticized, but nevertheless remain crucial to investigations of language. Chomsky’s work has always centered on the same basic questions and assumptions, especially that human language is an inherent property of the human mind. The technical part of his research has continuously been revised and updated.
In the 1960s phrase structure grammars were developed into what is known as the Standard Theory, which transformed into the Extended Standard Theory and X-bar theory in the 1970s. A major transition occurred at the end of the 1970s, when the Principles and Parameters Theory emerged. This theory provides a new understanding of the human language faculty, focusing on the invariant principles common to all human languages and the points of variation known as parameters. Its recent variant, the Minimalist Program, pushes the approach even further in asking why grammars are structured the way they are.

Article

Shinsho Miyara

Within the Ryukyuan branch of the Japonic family of languages, present-day Okinawan retains numerous regional variants which have evolved for over a thousand years in the Ryukyuan Archipelago. Okinawan is one of the six Ryukyuan languages that UNESCO identified as endangered. A theoretically fascinating feature is that there is substantial evidence for positing a high central phonemic vowel in Okinawan, although there is currently no overt surface [ï]. Moreover, the word-initial glottal stop [ʔ] in Okinawan is more salient than that in Japanese when followed by vowels, supporting the analysis that all Okinawan words are consonant-initial. Except for a few particles, all Okinawan words are composed of two or more morae. Suffixation or vowel lengthening (on nouns, verbs, and adjectives) provides the means for signifying persons as well as things related to human consumption or production. Every finite verb in Okinawan terminates with a mood element. Okinawan exhibits a complex interplay of mood or negative elements and focusing particles. Evidentiality is also realized as an obligatory verbal suffix.

Article

Old English (OE) is a cover term for a variety of dialects spoken in Britain ca. 5th–11th century. Most of the manuscripts on which the descriptive handbook tradition relies date from the latter part of the period. These late OE manuscripts were produced in Wessex and show a degree of uniformity interrupted by the Norman Conquest of 1066. Middle English (ME) covers roughly 1050–1500. The early part of the period, ca. pre-1350, is marked by great diversity of scribal practices; it is only in late ME that some degree of orthographic regularity can be observed. The consonantal system of OE differs from the Modern English system. Consonantal length was contrastive, there were no affricates, no voicing contrast for the fricatives [f, θ, s], no phonemic velar nasal [ŋ], and [h-] loss was under way. In the vocalic system, OE shows changes that identify it as a separate branch of Germanic: Proto-Germanic (PrG) ē 1 > OE ǣ/ē, PrG ai > OE ā, PrG au > OE ēa. The non-low short vowels of OE are reconstructed as non-peripheral, differing from the corresponding long vowels both in quality and quantity. The so-called “short” diphthongs usually posited for OE suggest a case for which a strict binary taxonomy is inapplicable to the data. The OE long vowels and diphthongs were unstable, producing a number of important mergers including /iː - yː/, /eː - eø/, /ɛː - ɛə/. In addition to shifts in height and frontness, the stressed vowels were subject to a series of quantity adjustments that resulted in increased predictability of vowel length. The changes that jointly contribute to this are homorganic cluster lengthening, ME open syllable lengthening, pre-consonantal and trisyllabic shortening. The final unstressed vowels of ME were gradually lost, resulting in the adoption of <-e># as a diacritic marker for vowel length.
Stress-assignment was based on a combination of morphological and prosodic criteria: root-initial stress was obligatory irrespective of syllable weight, while affixal stress was also sensitive to weight. Verse evidence allows the reconstruction of left-prominent compound stress; there is also some early evidence for the formation of clitic groups. Reconstruction of patterns on higher prosodic levels—phrasal and intonational contours—is hampered by lack of testable evidence.

Article

Bjarke Frellesvig

Old and Middle Japanese are the pre-modern periods of the attested history of the Japanese language. Old Japanese (OJ) is largely the language of the 8th century, with a modest but still significant number of written sources, most of which are poetry. Middle Japanese is divided into two distinct periods, Early Middle Japanese (EMJ, 800–1200) and Late Middle Japanese (LMJ, 1200–1600). EMJ saw most of the significant sound changes that took place in the language, as well as profound influence from Chinese, whereas most grammatical changes took place between the end of EMJ and the end of LMJ. By the end of LMJ, the Japanese language had reached a form that is not significantly different from present-day Japanese. OJ phonology was simple, both in terms of phoneme inventory and syllable structure, with a total of only 88 different syllables. In EMJ, the language became quantity sensitive, with the introduction of a distinction between long and short syllables. OJ and EMJ had obligatory verb inflection for a number of modal and syntactic categories (including an important distinction between a conclusive and an (ad)nominalizing form), whereas the expression of aspect and tense was optional. Through late EMJ and LMJ this system changed completely to one without nominalizing inflection, but obligatory inflection for tense. The morphological pronominal system of OJ was lost in EMJ, which developed a range of lexical and lexically based terms of speaker and hearer reference. OJ had a two-way (speaker–nonspeaker) demonstrative system, which in EMJ was replaced by a three-way (proximal–mesial–distal) system. OJ had a system of differential object marking, based on specificity, as well as a word order rule that placed accusative marked objects before most subjects; both of these features were lost in EMJ.
OJ and EMJ had genitive subject marking in subordinate clauses and in focused, interrogative and exclamative main clauses, but no case marking of subjects in declarative, optative, or imperative main clauses and no nominative marker. Through LMJ genitive subject marking was gradually circumscribed and a nominative case particle was acquired which could mark subjects in all types of clauses. OJ had a well-developed system of complex predicates, in which two verbs jointly formed the predicate of a single clause, which is the source of the LMJ and NJ (Modern Japanese) verb–verb compound complex predicates. OJ and EMJ also had mono-clausal focus constructions that functionally were similar to clefts in English; these constructions were lost in LMJ.

Article

Adrian P. Simpson and Melanie Weirich

Speech carries a wealth of information about the speaker aside from any verbal message, ranging from emotional state (sad, happy, bored, etc.) to illness (e.g., a cold). Central features are a speaker’s gender and their sexual orientation. In part this is an inevitable product of differences in speakers’ anatomical dimensions; for example, on average, males have lower-pitched voices than females due to longer, thicker vocal cords that vibrate more slowly. Arguably, however, much more of this information is learned, acquired as a speaker constructs their gender or identifies with a particular sexual orientation. Differences in speech already begin in young children, before any marked gender-related anatomical differences develop, emphasizing the importance of behavioral patterns. Gender, gender identity, and sexual orientation are encoded in speech in a range of different phonetic parameters relating to both phonation (activity of the vocal folds) and articulation (dimensions and configuration of the supraglottal cavities), as well as the use of pitch patterns and differences in voice quality (the way in which the vocal folds vibrate). Differences in the size and configuration of the supraglottal cavities give rise to differences in the size of the acoustic vowel space as well as subtle differences in the production of individual sounds, such as the sibilant [s]. Furthermore, significant and systematic gender-specific differences have been found in the average duration of utterances and individual sounds, which in turn have been found to have a complex relationship to the perception of tempo.

Article

D. H. Whalen

Phonetics is the branch of linguistics that deals with the physical realization of meaningful distinctions in spoken language. Phoneticians study the anatomy and physics of sound generation, acoustic properties of the sounds of the world’s languages, the features of the signal that listeners use to perceive the message, and the brain mechanisms involved in both production and perception. Phonetics therefore connects most directly to phonology and psycholinguistics, but it also engages a range of disciplines that are not unique to linguistics, including acoustics, physiology, biomechanics, hearing, evolution, and many others. Early theorists assumed that the phonetic implementation of phonological features was universal, but it has become clear that languages differ in their phonetic spaces for phonological elements, with systematic differences in acoustics and articulation. Such language-specific details place phonetics solidly in the domain of linguistics; any complete description of a language must include its specific phonetic realization patterns. The description of what phonetic realizations are possible in human language continues to expand as more languages are described; many under-documented languages are endangered, lending urgency to the phonetic study of the world’s languages. Phonetic analysis can consist of transcription, acoustic analysis, measurement of speech articulators, and perceptual tests, with recent advances in brain imaging adding detail at the level of neural control and processing. Because of its dual nature as a component of a linguistic system and a set of actions in the physical world, phonetics has connections to many other branches of linguistics, including not only phonology but also syntax, semantics, sociolinguistics, and clinical linguistics. Speech perception has been shown to integrate information from both vision and tactile sensation, indicating an embodied system.
Sign language, though primarily visual, has adopted the term “phonetics” to represent the realization component, highlighting the linguistic nature both of phonetics and of sign language. Such diversity offers many avenues for studying phonetics, but it presents challenges to forming a comprehensive account of any language’s phonetic system.