Phonetics
- D. H. Whalen, City University of New York and Haskins Laboratories, Yale University
Summary
Phonetics is the branch of linguistics that deals with the physical realization of meaningful distinctions in spoken language. Phoneticians study the anatomy and physics of sound generation, acoustic properties of the sounds of the world’s languages, the features of the signal that listeners use to perceive the message, and the brain mechanisms involved in both production and perception. Therefore, phonetics connects most directly to phonology and psycholinguistics, but it also engages a range of disciplines that are not unique to linguistics, including acoustics, physiology, biomechanics, hearing, evolution, and many others. Early theorists assumed that phonetic implementation of phonological features was universal, but it has become clear that languages differ in their phonetic spaces for phonological elements, with systematic differences in acoustics and articulation. Such language-specific details place phonetics solidly in the domain of linguistics; any complete description of a language must include its specific phonetic realization patterns. The description of what phonetic realizations are possible in human language continues to expand as more languages are described; many of the under-documented languages are endangered, lending urgency to the phonetic study of the world’s languages.
Phonetic analysis can consist of transcription, acoustic analysis, measurement of speech articulators, and perceptual tests, with recent advances in brain imaging adding detail at the level of neural control and processing. Because of its dual nature as a component of a linguistic system and a set of actions in the physical world, phonetics has connections to many other branches of linguistics, including not only phonology but syntax, semantics, sociolinguistics, and clinical linguistics as well. Speech perception has been shown to integrate information from both vision and tactile sensation, indicating an embodied system. Sign language, though primarily visual, has adopted the term “phonetics” to represent the realization component, highlighting the linguistic nature both of phonetics and of sign language. Such diversity offers many avenues for studying phonetics, but it presents challenges to forming a comprehensive account of any language’s phonetic system.
Subjects
- Phonetics/Phonology
1. History and Development of Phonetics
Much of phonetic structure is available to direct inspection or introspection, allowing a long tradition in phonetics (see also articles in Asher & Henderson, 1981). The first true phoneticians were the Indian grammarians of about the 8th or 7th century BCE. In their works, called Prātiśākhya, they organized the sounds of Sanskrit according to places of articulation, and they also described the physiological gestures required in the articulation of each sound. Every writing system, even those not explicitly phonetic or phonological, includes elements of the phonetic systems of the languages denoted. Early Semitic writing (from Phoenician onward) primarily encoded consonants, while the Greek system added vowels explicitly. The Chinese writing system includes phonetic elements in many, if not most, characters (DeFrancis, 1989), and modern readers access phonology while reading Mandarin (Zhou & Marslen-Wilson, 1999). The Mayan orthography was based largely on syllables (Coe, 1992). All of this required some level of awareness of phonetics.
Attempts to describe phonetics universally are more recent in origin, and they fall into the two domains of transcription and measurement. For transcription, the main development was the creation of the International Phonetic Alphabet (IPA) (e.g., International Phonetic Association, 1989). Initiated in 1886 as a tool for improving language teaching and, relatedly, reading (Macmahon, 2009), the IPA has been modified and extended both in the languages covered and in its theoretical underpinnings (Ladefoged, 1990). The system is intended to provide a symbol for every distinctive sound in the world’s languages. The first versions addressed languages familiar to the European scholars primarily responsible for its development, but new sounds were added as more languages were described. The 79 consonantal and 28 vowel characters can be modified by an array of diacritics, allowing greater or lesser detail in the transcription; there are diacritics for suprasegmentals, both prosodic and tonal, as well. Additions have been made for the description of pathological speech (Duckworth, Allen, Hardcastle, & Ball, 1990). Two languages transcribed with the same symbol often nonetheless show perceptible differences in realization; although additional diacritics can capture such differences, it is usually more useful to ignore them for analysis purposes. Despite some limitations, the IPA continues to be a valuable tool in the analysis of languages, language use, and language disorders throughout the world.
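The IPA’s base-symbol-plus-diacritic design carries over directly into digital text, where most diacritics are Unicode combining characters. As a minimal illustration (not part of any official IPA tooling), Python’s standard unicodedata module can decompose a dental [t̪] into its base letter and diacritic:

```python
# A minimal sketch: IPA's base-plus-diacritic structure mirrored in Unicode.
# A dental [t̪] is the base character 't' followed by a combining diacritic.
import unicodedata

dental_t = "t\u032a"  # 't' + U+032A, the IPA dental diacritic
for ch in dental_t:
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")
# Output:
# U+0074  LATIN SMALL LETTER T
# U+032A  COMBINING BRIDGE BELOW
```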
For measurement, there are two main signals to record, the acoustic and the articulatory. Although articulation is inherently more complex and difficult to capture completely, it was more accessible to early techniques than was the acoustic signal. Various ingenious devices were created by Abbé Rousselot (1897–1908) and E. W. Scripture (1902). Rousselot’s devices for measuring the velum (Figure 1) and the tongue (Figure 2) were, unfortunately, not terribly successful. Pliny Earl Goddard (1905) used more successful devices and was ambitious enough to take his equipment into the field to record dynamic air pressure and static palatographs of such languages as Hupa [ISO 639-3 code hup] and Chipewyan [ISO 639-3 code chp]. Despite these early successes, relatively little physiological work was done until the second half of the 20th century. Technological advances have since made it possible to examine muscle activity, airflow, tongue-palate contact, and the location and movement of the tongue and other articulators via electromagnetic articulometry, ultrasound, and real-time magnetic resonance imaging (see Huffman, 2016). These measurements have advanced theories of speech production and have addressed both phonetic and phonological issues.

Figure 1. Device to measure the height of the velum. From Rousselot (1897–1908).

Figure 2. Device to measure the height of the tongue from external shape of the area under the chin. From Rousselot (1897–1908).
Acoustic recordings became possible with Edison’s phonograph, but the ability to measure and analyze those recordings was much longer in coming. Some aspects of the signal could be rendered reasonably well via flame recordings, in which photographs were taken of flames flickering in response to various frequencies (König, 1873); these records were of limited value, both because the raw waveform reveals little on its own and because such recordings were time-consuming and expensive to make. The ability to see spectral properties in detail arrived with the declassification (after World War II) of the spectrograph (Koenig, Dunn, & Lacy, 1946; Potter, Kopp, & Green, 1947). New methods of analysis are constantly being explored, with greater accuracy and refinement of data categories being the result.
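The spectrographic display that the sound spectrograph introduced is now routine to compute. As a minimal sketch, assuming a hypothetical mono recording vowel.wav, a wide-band spectrogram can be produced with standard signal-processing tools:

```python
# A minimal sketch of spectrographic analysis; "vowel.wav" is a
# hypothetical mono recording, and the settings are illustrative.
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

rate, samples = wavfile.read("vowel.wav")   # sampling rate (Hz), amplitudes
samples = samples.astype(np.float64)

# A short (5 ms) analysis window yields a "wide-band" spectrogram,
# the classic display for reading formant structure.
nperseg = int(0.005 * rate)
freqs, times, sxx = spectrogram(samples, fs=rate,
                                nperseg=nperseg, noverlap=nperseg // 2)

plt.pcolormesh(times, freqs, 10 * np.log10(sxx + 1e-12), shading="auto")
plt.ylim(0, 5000)                           # formants of interest lie below 5 kHz
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.show()
```

The 5 ms window trades frequency resolution for temporal detail; a longer window (around 25 ms) would instead resolve individual harmonics, the traditional “narrow-band” display.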
Sound is the most obvious carrier of language (and is etymologically embedded in “phonetics”), but the recognition that vision also plays a role in understanding speech came relatively late. Not only do those with typical hearing use vision when confronted with noisy speech (Sumby & Pollack, 1954), but they can even be misled by vision when the speech is clearly audible (McGurk & MacDonald, 1976). Although the lips and jaw are the most salient carriers of visual speech information, areas of the face outside the lip region also co-vary with speech segments (Yehia, Kuratate, & Vatikiotis-Bateson, 2002). Audiovisual integration continues as an active area of research in phonetics.
Sign language, a modality largely devoid of sound, has also adopted the term “phonetics” to describe the system of realization of the message (Goldin-Meadow & Brentari, 2017; Goldstein, Whalen, & Best, 2006). Similarities between the reduction of speech articulators and of American Sign Language (ASL) articulators indicate that both systems allow for (indeed, may require) reduction in articulation when content is relatively predictable (Tyrone & Mauk, 2010). There is evidence that unrelated sign languages use the same realization of telicity, that is, whether an action has an inherent (“telic”) endpoint (e.g., “decide”) or not (“atelic,” e.g., “think”) (Strickland et al., 2015). Phonetic constraints, such as maximum contrastiveness of hand shapes, have been explored in an emerging sign language, Al-Sayyid Bedouin Sign Language (Sandler, Aronoff, Meir, & Padden, 2011). As further studies are completed, we can expect to see more insights into the aspects of language realization that are shared across modalities, and to be challenged by those that differ.
2. Phonetics in Relation to Phonology
Just as phonetics describes the realization of words in a language, so phonology describes the patterns of elements that make meaningful distinctions in a language. The relation between the two has been, and continues to be, a topic for theoretical debate (e.g., Gouskova, Zsiga, & Boyer, 2011; Keating, 1988; Romero & Riera, 2015). Positions range from a strict separation in which the phonology completes its operations before the phonetics becomes involved (e.g., Chomsky & Halle, 1968) to a complete dissolution of the distinction (e.g., Flemming, 2001; Ohala, 1990). Many intermediate positions are proposed as well.
Regardless of the degree of separation, the early assumption that phonetic implementation was merely physical and universal (e.g., Chomsky & Halle, 1968; Halliday, 1961), which may invoke the mind-body problem (e.g., Fodor, 1981), has proven to be inadequate. Keating (1985) examined three phonetic effects—intrinsic vowel duration, extrinsic vowel duration, and voicing timing—and found that they were neither universal nor physiologically necessary. Further examples of language- and dialect-specific effects have been found in the fine-grained detail in Voice Onset Time (Cho & Ladefoged, 1999), realization of focus (Peters, Hanssen, & Gussenhoven, 2014), and even the positions of speech articulators before speaking (Gick, Wilson, Koch, & Cook, 2004). Whatever the interface between phonetics and phonology may be, there exist language-specific phonetic patterns, thus ensuring the place of phonetics within linguistics proper.
The overwhelming evidence for “language-specific phonetics” has prompted a reconsideration of the second traditionally assumed distinction between phonology and phonetics: that phonetics is continuous while phonology is discrete. This issue has been raised less often in discussions of the phonetics-phonology interface. One approach that addresses it directly is Articulatory Phonology, which treats phonological elements as compatible in kind with their physical realization: its gestural elements have been claimed to be available for “public” (phonetic) use and yet categorical for making linguistic distinctions (Goldstein & Fowler, 2003). Iskarous (2017) shows how dynamical systems analysis unites discrete phonological contrast and continuous phonetic movement into one non-dualistic description. Gafos and his colleagues provide the formal mechanisms that use such a system to address a range of phonological processes (Gafos, 2002; Gafos & Beňuš, 2006; Gafos, Roeser, Sotiropoulou, Hoole, & Zeroual, 2019). Accounts relating categorical, meaningful distinctions to continuous physical realizations will continue to be developed and, one hopes, the issue ultimately resolved.
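This point can be made concrete with the task-dynamic model associated with Articulatory Phonology (a sketch of the standard formulation, not a complete account): each gesture is modeled as a critically damped point attractor,

$$m\ddot{x} + b\dot{x} + k(x - x_0) = 0, \qquad b = 2\sqrt{mk},$$

where $x$ is a continuous tract variable (e.g., lip aperture) and the target $x_0$ and stiffness $k$ are discrete, contrast-bearing parameters. A single equation thus carries both the categorical specification ($x_0$) and the continuous, context-dependent movement ($x(t)$) toward it.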
3. Phonetics in Relation to Other Aspects of Language
Phonetic research has had far-reaching effects, many of which are outlined in individual articles in this encyclopedia. Here are five issues of particular interest.
Perception: Until the advent of an easily manipulated acoustic signal, it was very difficult to determine which aspects of the speech signal listeners take into account. The Pattern Playback (Cooper, 1953) was an early machine that allowed the synthesis of speech from acoustic parameters. The resulting patterns did not sound completely natural, but they elicited speech percepts that allowed discoveries to be made, ones that have since been replicated in many other studies (cf. Shankweiler & Fowler, 2015). These findings have led to a sizable body of research in linguistics and psychology (see Beddor, 2017). Many studies of brain function also take these results as a starting point.
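Parametric synthesis of the kind the Pattern Playback pioneered is easy to approximate today. The following minimal source-filter sketch passes an impulse-train source through a cascade of formant resonators; the formant values are illustrative (roughly appropriate for a vowel like [ɑ]), and this is not the Pattern Playback’s actual optical mechanism:

```python
# A minimal source-filter synthesis sketch; all parameter values are
# illustrative assumptions, not measurements from any published study.
import numpy as np
from scipy.signal import lfilter
from scipy.io import wavfile

fs = 16000                       # sampling rate (Hz)
f0, dur = 100, 0.5               # fundamental frequency (Hz), duration (s)

# Glottal source: a simple impulse train at f0.
n = int(fs * dur)
source = np.zeros(n)
source[::fs // f0] = 1.0

# Vocal-tract filter: cascade of second-order resonators,
# one per formant (center frequency, bandwidth), in Hz.
signal = source
for freq, bw in [(700, 90), (1200, 100), (2600, 120)]:
    r = np.exp(-np.pi * bw / fs)
    theta = 2 * np.pi * freq / fs
    b = [1 - 2 * r * np.cos(theta) + r ** 2]   # normalizes gain at DC
    a = [1, -2 * r * np.cos(theta), r ** 2]    # complex-conjugate pole pair
    signal = lfilter(b, a, signal)

signal /= np.max(np.abs(signal))               # scale to full range
wavfile.write("synth_vowel.wav", fs, (signal * 32767).astype(np.int16))
```

As with the Pattern Playback, the value of such parametric control is that each acoustic parameter can be manipulated independently to test its perceptual effect.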
Acquisition: Learning to speak is natural for neurologically typical infants, with no formal instruction necessary. Charting just how this process takes place depends on phonetic findings, which allow the acoustic output of early productions to be compared with the target values of the adult language. Whether or not linguistic categories are “innate,” the development of links between what learners hear and what they speak is a matter of ongoing debate that would not be possible without phonetic analysis.
Much if not most of the world’s population is bi- or multilingual, and the phonetic effects of second language learning have received a great deal of attention (Flege, 2003). The phonetic character of a first language (L1) usually has a great influence on the production and perception of a second one (L2). The effects are smaller when the L2 is acquired early in life rather than late, and there is a great deal of individual variability. Degree of L2 accent has been shown to be amenable to improvement via biofeedback (d’Apolito, Sisinni, Grimaldi, & Gili Fivela, 2017; Suemitsu, Dang, Ito, & Tiede, 2015).
Sociolinguistics: Phonetic variation within a language is a strong indicator of community membership (Campbell-Kibler, 2010; Foulkes & Docherty, 2006). From the biblical story of the shibboleth to modern everyday experience, speech indicates origin. Perception of an accent can thus lead to stereotypical judgments based on origin, such as attributing less intelligence to speakers who use “-in” rather than “-ing” (Campbell-Kibler, 2007). Accents can work in two directions at once, as when an African American dialect is simultaneously recognized as disfavored by mainstream society yet valued as a marker of social identity (Wolfram & Schilling, 2015, p. 238). The level of detail that is available to and used by speakers and listeners is massive, requiring large studies with many variables. This makes sociophonetics both exciting and challenging (Hay & Drager, 2007).
Speech therapy: Not every instance of language acquisition is a smooth one, and some individuals face challenges in speaking their language. The tools that are developed in phonetics help with assessment of the differences and, in some cases, provide a means of remediation. One particularly exciting development that depends on articulation rather than acoustics is the use of ultrasound biofeedback (using images of the speaker’s tongue) to improve production (e.g., Bernhardt, Gick, Bacsfalvi, & Ashdown, 2003; Preston et al., 2017).
Speech technology: Speech synthesis was an early goal of phonetic research (e.g., Holmes, Mattingly, & Shearme, 1964), and research continues to the present. Automatic speech recognition made use of phonetic results, though modern systems rely on more global treatments of the acoustic signal (e.g., Furui, Deng, Gales, Ney, & Tokuda, 2012). Man-machine interactions have benefited greatly from phonetic findings, helping to shape the modern world. Further advances may begin, once again, to make less use of machine learning and more of phonetic knowledge.
4. Future Directions
Phonetics as a field of study began with the exceptional discriminative power of the human ear, but recent developments have been increasingly tied to technology. As our ability to record and analyze speech increases, so does our use of larger and larger data sets. Many of those data sets consist of acoustic recordings, which are excellent but incomplete records for phonetic analysis. Greater attention is being paid to variability in the signal, both in terms of covariation (Chodroff & Wilson, 2017; Kawahara, 2017) and intrinsic lack of consistency (Tilsen, 2015; Whalen, Chen, Tiede, & Nam, 2018). Assessing variability depends on the accuracy of individual measurements, and current automatic formant analyses are known to be inaccurate (Shadle, Nam, & Whalen, 2016). Improvements in this domain are needed; in the meantime, studies that rely on existing techniques must be appropriately limited in their interpretation.
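The dominant approach behind automatic formant measurement is linear predictive coding (LPC), which fits an all-pole model to each analysis frame and reads formant frequencies off the pole angles; sensitivity to analysis choices (model order, frame placement, pre-emphasis) is one source of the inaccuracy just noted. A minimal numpy/scipy sketch, with illustrative settings and a hypothetical file name:

```python
# A minimal sketch of LPC-based formant estimation; frame location, order,
# and file name are illustrative assumptions, not a validated pipeline.
import numpy as np
from scipy.io import wavfile
from scipy.linalg import solve_toeplitz

rate, x = wavfile.read("vowel.wav")            # hypothetical mono recording
x = x.astype(np.float64)
frame = x[2000:2000 + int(0.025 * rate)]       # one 25 ms analysis frame
frame = np.append(frame[0], frame[1:] - 0.97 * frame[:-1])  # pre-emphasis
frame *= np.hamming(len(frame))

order = 2 + rate // 1000                       # common rule of thumb for LPC order
# Autocorrelation-method LPC: solve the Toeplitz normal equations.
r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
a = np.concatenate(([1.0], -solve_toeplitz(r[:order], r[1:order + 1])))

# Formant candidates are the angles of pole pairs in the upper half z-plane.
roots = np.roots(a)
roots = roots[np.imag(roots) > 0]
formants = np.sort(np.angle(roots) * rate / (2 * np.pi))
formants = formants[formants > 90]             # discard near-DC poles
print(formants[:3])                            # rough F1, F2, F3 estimates (Hz)
```

Because the estimates shift with the choice of order, window placement, and pre-emphasis, formant values from such pipelines require exactly the cautious interpretation urged above.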
Large data sets are rare for physiological data, though there are some exceptions (Narayanan et al., 2014; Tiede, 2017; Westbury, 1994). Quantification of articulator movement is easier than in the past, but it remains challenging in both collection and analysis. Mathematical tools for image processing and pattern detection are being adapted to the problem, and the future understanding of speech production should be enhanced. Although many techniques are too demanding for some populations, ultrasound has been found to allow investigations of young children (Noiray, Abakarova, Rubertus, Krüger, & Tiede, 2018), speakers in remote areas (Gick, Bird, & Wilson, 2005), and disordered populations (Preston et al., 2017). Thus the amount of data and the range of populations that can be measured can be expected to increase significantly in the coming years.
Understanding of the brain mechanisms that underlie the phonetic effects studied by other means will continue to expand. Improvements in the specificity of brain imaging will allow narrower questions to be addressed. Techniques such as electrocorticography (ECoG) (Hill et al., 2012) and functional near-infrared spectroscopy (Yücel, Selb, Huppert, Franceschini, & Boas, 2017), along with combinations of multiple modalities, will allow more direct assessments of phonetic control in production and phonetic effects in perception. As with other levels of linguistic structure, theories will be both challenged and enhanced by evidence of brain activation in response to language.
Access to more data allows a deeper investigation of the speech process, and technological advances will continue to play a major role. The field does, ultimately, return to human perception and production abilities, as each newly born speaker/hearer begins to acquire speech and the language it makes possible.
Further Reading
- Fant, G. (1960). Acoustic theory of speech production. The Hague, The Netherlands: Mouton.
- Hardcastle, W. J., & Hewlett, N. (Eds.). (1999). Coarticulation models in recent speech production theories. Cambridge, UK: Cambridge University Press.
- Ladefoged, P. (2001). A course in phonetics (4th ed.). Fort Worth, TX: Harcourt College Publishers.
- Ladefoged, P., & Maddieson, I. (1996). The sounds of the world’s languages. Oxford, UK: Blackwell.
- Liberman, A. M. (1996). Speech: A special code. Cambridge, MA: MIT Press.
- Lisker, L., & Abramson, A. S. (1964). A cross-language study of voicing in initial stops: Acoustical measurements. Word, 20, 384–422. doi:10.1080/00437956.1964.11659830
- Ohala, J. J. (1981). The listener as a source of sound change. In M. F. Miller (Ed.), Papers from the parasession on language behavior (pp. 178–203). Chicago, IL: Chicago Linguistic Association.
- Peterson, G. E., & Barney, H. L. (1952). Control methods used in a study of the vowels. Journal of the Acoustical Society of America, 24, 175–184.
- Stevens, K. N. (1998). Acoustic phonetics. Cambridge, MA: MIT Press.
References
- Asher, R. E., & Henderson, J. A. (Eds.). (1981). Towards a history of phonetics. Edinburgh, UK: Edinburgh University Press.
- Beddor, P. S. (2017). Speech perception in phonetics. In M. Aronoff (Ed.), Oxford research encyclopedia of linguistics. Oxford University Press. doi:10.1093/acrefore/9780199384655.013.62
- Bernhardt, B. M., Gick, B., Bacsfalvi, P., & Ashdown, J. (2003). Speech habilitation of hard of hearing adolescents using electropalatography and ultrasound as evaluated by trained listeners. Clinical Linguistics and Phonetics, 17, 199–216.
- Campbell-Kibler, K. (2007). Accent, (ing), and the social logic of listener perceptions. American Speech, 82(1), 32–64. doi:10.1215/00031283-2007-002
- Campbell-Kibler, K. (2010). Sociolinguistics and perception. Language and Linguistics Compass, 4(6), 377–389. doi:10.1111/j.1749-818X.2010.00201.x
- Cho, T., & Ladefoged, P. (1999). Variation and universals in VOT: Evidence from 18 languages. Journal of Phonetics, 27, 207–229.
- Chodroff, E., & Wilson, C. (2017). Structure in talker-specific phonetic realization: Covariation of stop consonant VOT in American English. Journal of Phonetics, 61, 30–47. doi:10.1016/j.wocn.2017.01.001
- Chomsky, N., & Halle, M. (1968). The sound pattern of English. New York, NY: Harper and Row.
- Coe, M. D. (1992). Breaking the Maya code. London, UK: Thames and Hudson.
- Cooper, F. S. (1953). Some instrumental aids to research on speech. In A. A. Hill (Ed.), Fourth Annual Round Table Meeting on Linguistics and Language Teaching (pp. 46–53). Washington, DC: Georgetown University.
- d’Apolito, I. S., Sisinni, B., Grimaldi, M., & Gili Fivela, B. (2017). Perceptual and ultrasound articulatory training effects on English L2 vowels production by Italian learners. International Journal of Social, Behavioral, Educational, Economic, Business and Industrial Engineering, 11(8), 2159–2167.
- DeFrancis, J. (1989). Visible speech: The diverse oneness of writing systems. Honolulu: University of Hawai‘i Press.
- Duckworth, M., Allen, G., Hardcastle, W., & Ball, M. (1990). Extensions to the International Phonetic Alphabet for the transcription of atypical speech. Clinical Linguistics and Phonetics, 4, 273–280. doi:10.3109/02699209008985489
- Flege, J. E. (2003). Assessing constraints on second-language segmental production and perception. In N. O. Schiller & A. S. Meyer (Eds.), Phonetics and phonology in language comprehension and production: Differences and similarities (pp. 319–355). Berlin, Germany: Mouton de Gruyter.
- Flemming, E. (2001). Scalar and categorical phenomena in a unified model of phonetics and phonology. Phonology, 18, 7–44.
- Fodor, J. A. (1981). The mind-body problem. Scientific American, 244, 114–123.
- Foulkes, P., & Docherty, G. (2006). The social life of phonetics and phonology. Journal of Phonetics, 34, 409–438. doi:10.1016/j.wocn.2005.08.002
- Furui, S., Deng, L., Gales, M., Ney, H., & Tokuda, K. (2012). Fundamental technologies in modern speech recognition. IEEE Signal Processing Magazine, 29(6), 16–17.
- Gafos, A. I. (2002). A grammar of gestural coordination. Natural Language and Linguistic Theory, 20, 269–337.
- Gafos, A. I., & Beňuš, Š. (2006). Dynamics of phonological cognition. Cognitive Science, 30, 905–943. doi:10.1207/s15516709cog0000_80
- Gafos, A. I., Roeser, J., Sotiropoulou, S., Hoole, P., & Zeroual, C. (2019). Structure in mind, structure in vocal tract. Natural Language and Linguistic Theory. doi:10.1007/s11049-019-09445-y
- Gick, B., Bird, S., & Wilson, I. (2005). Techniques for field application of lingual ultrasound imaging. Clinical Linguistics and Phonetics, 19, 503–514.
- Gick, B., Wilson, I., Koch, K., & Cook, C. (2004). Language-specific articulatory settings: Evidence from inter-utterance rest position. Phonetica, 61, 220–233.
- Goddard, P. E. (1905). Mechanical aids to the study and recording of language. American Anthropologist, 7, 613–619. doi:10.1525/aa.1905.7.4.02a00050
- Goldin-Meadow, S., & Brentari, D. (2017). Gesture, sign, and language: The coming of age of sign language and gesture studies. Behavioral and Brain Sciences, 40, e46. doi:10.1017/S0140525X15001247
- Goldstein, L. M., & Fowler, C. A. (2003). Articulatory phonology: A phonology for public language use. In N. Schiller & A. Meyer (Eds.), Phonetics and phonology in language comprehension and production: Differences and similarities (pp. 159–207). Berlin, Germany: Mouton de Gruyter.
- Goldstein, L. M., Whalen, D. H., & Best, C. T. (Eds.). (2006). Papers in laboratory phonology 8. Berlin, Germany: Mouton de Gruyter.
- Gouskova, M., Zsiga, E., & Boyer, O. T. (2011). Grounded constraints and the consonants of Setswana. Lingua, 121(15), 2120–2152. doi:10.1016/j.lingua.2011.09.003
- Halliday, M. A. K. (1961). Categories of the theory of grammar. Word, 17, 241–292. doi:10.1080/00437956.1961.11659756
- Hay, J., & Drager, K. (2007). Sociophonetics. Annual Review of Anthropology, 36(1), 89–103. doi:10.1146/annurev.anthro.34.081804.120633
- Hill, N. J., Gupta, D., Brunner, P., Gunduz, A., Adamo, M. A., Ritaccio, A., & Schalk, G. (2012). Recording human electrocorticographic (ECoG) signals for neuroscientific research and real-time functional cortical mapping. Journal of Visualized Experiments (64), 3993. doi:10.3791/3993
- Holmes, J. N., Mattingly, I. G., & Shearme, J. N. (1964). Speech synthesis by rule. Language and Speech, 7(3), 127–143.
- Huffman, M. K. (2016). Articulatory phonetics. In M. Aronoff (Ed.), Oxford research encyclopedia of linguistics. Oxford University Press.
- International Phonetic Association. (1989). Report on the 1989 Kiel Convention. Journal of the International Phonetic Association, 19, 67–80.
- Iskarous, K. (2017). The relation between the continuous and the discrete: A note on the first principles of speech dynamics. Journal of Phonetics, 64, 8–20. doi:10.1016/j.wocn.2017.05.003
- Kawahara, S. (2017). Durational compensation within a CV mora in spontaneous Japanese: Evidence from the Corpus of Spontaneous Japanese. Journal of the Acoustical Society of America, 142, EL143–EL149. doi:10.1121/1.4994674
- Keating, P. A. (1985). Universal phonetics and the organization of grammars. In V. A. Fromkin (Ed.), Phonetic linguistics: Essays in honor of Peter Ladefoged (pp. 115–132). New York, NY: Academic Press.
- Keating, P. A. (1988). The phonology-phonetics interface. In F. Newmeyer (Ed.), Linguistics: The Cambridge survey: Vol. 1. Grammatical theory (pp. 281–302). Cambridge, UK: Cambridge University Press.
- Koenig, W., Dunn, H. K., & Lacy, L. Y. (1946). The sound spectrograph. Journal of the Acoustical Society of America, 18, 19–49.
- König, R. (1873). I. On manometric flames. London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 45(297), 1–18.
- Ladefoged, P. (1990). The Revised International Phonetic Alphabet. Language, 66, 550–552. doi:10.2307/414611
- Macmahon, M. K. C. (2009). The International Phonetic Association: The first 100 years. Journal of the International Phonetic Association, 16, 30–38. doi:10.1017/S002510030000308X
- McGurk, H., & MacDonald, J. (1976). Hearing lips and seeing voices. Nature, 264, 746–748.
- Narayanan, S., Toutios, A., Ramanarayanan, V., Lammert, A., Kim, J., Lee, S., . . . Proctor, M. (2014). Real-time magnetic resonance imaging and electromagnetic articulography database for speech production research (TC). Journal of the Acoustical Society of America, 136, 1307–1311. doi:10.1121/1.4890284
- Noiray, A., Abakarova, D., Rubertus, E., Krüger, S., & Tiede, M. K. (2018). How do children organize their speech in the first years of life? Insight from ultrasound imaging. Journal of Speech, Language, and Hearing Research, 61, 1355–1368.
- Ohala, J. J. (1990). There is no interface between phonology and phonetics: A personal view. Journal of Phonetics, 18, 153–172.
- Peters, J., Hanssen, J., & Gussenhoven, C. (2014). The phonetic realization of focus in West Frisian, Low Saxon, High German, and three varieties of Dutch. Journal of Phonetics, 46, 185–209. doi:10.1016/j.wocn.2014.07.004
- Potter, R. K., Kopp, G. A., & Green, H. G. (1947). Visible speech. New York, NY: Van Nostrand.
- Preston, J. L., McAllister Byun, T., Boyce, S. E., Hamilton, S., Tiede, M. K., Phillips, E., . . . Whalen, D. H. (2017). Ultrasound images of the tongue: A tutorial for assessment and remediation of speech sound errors. Journal of Visualized Experiments, 119, e55123. doi:10.3791/55123
- Romero, J., & Riera, M. (Eds.). (2015). The Phonetics–Phonology Interface: Representations and methodologies. Amsterdam, The Netherlands: John Benjamins.
- Rousselot, P.‐J. (1897–1908). Principes de phonétique expérimentale. Paris, France: H. Welter.
- Sandler, W., Aronoff, M., Meir, I., & Padden, C. (2011). The gradual emergence of phonological form in a new language. Natural Language and Linguistic Theory, 29, 503–543. doi:10.1007/s11049-011-9128-2
- Scripture, E. W. (1902). The elements of experimental phonetics. New York, NY: Charles Scribner’s Sons.
- Shadle, C. H., Nam, H., & Whalen, D. H. (2016). Comparing measurement errors for formants in synthetic and natural vowels. Journal of the Acoustical Society of America, 139, 713–727.
- Shankweiler, D., & Fowler, C. A. (2015). Seeking a reading machine for the blind and discovering the speech code. History of Psychology, 18, 78–99.
- Strickland, B., Geraci, C., Chemla, E., Schlenker, P., Kelepir, M., & Pfau, R. (2015). Event representations constrain the structure of language: Sign language as a window into universally accessible linguistic biases. Proceedings of the National Academy of Sciences, 112(19), 5968–5973. doi:10.1073/pnas.1423080112
- Suemitsu, A., Dang, J., Ito, T., & Tiede, M. K. (2015). A real-time articulatory visual feedback approach with target presentation for second language pronunciation learning. Journal of the Acoustical Society of America, 138, EL382–EL387. doi:10.1121/1.4931827
- Sumby, W. H., & Pollack, I. (1954). Visual contribution to speech intelligibility in noise. Journal of the Acoustical Society of America, 26, 212–215.
- Tiede, M. K. (2017). Haskins_IEEE_Rate_Comparison_DB [Data set].
- Tilsen, S. (2015). Structured nonstationarity in articulatory timing. In The Scottish Consortium for ICPhS 2015 (Ed.), Proceedings of the 18th International Congress of Phonetic Sciences (Paper No. 78, pp. 1–5). Glasgow, UK: University of Glasgow.
- Tyrone, M. E., & Mauk, C. E. (2010). Sign lowering and phonetic reduction in American Sign Language. Journal of Phonetics, 38, 317–328. doi:10.1016/j.wocn.2010.02.003
- Westbury, J. R. (1994). X-ray microbeam speech production database user’s handbook. Madison: Waisman Center, University of Wisconsin.
- Whalen, D. H., Chen, W.‐R., Tiede, M. K., & Nam, H. (2018). Variability of articulator positions and formants across nine English vowels. Journal of Phonetics, 68, 1–14.
- Wolfram, W., & Schilling, N. (2015). American English: Dialects and variation. Malden, MA: John Wiley & Sons.
- Yehia, H. C., Kuratate, T., & Vatikiotis-Bateson, E. S. (2002). Linking facial animation, head motion and speech acoustics. Journal of Phonetics, 30, 555–568. doi:10.1006/jpho.2002.0165
- Yücel, M. A., Selb, J. J., Huppert, T. J., Franceschini, M. A., & Boas, D. A. (2017). Functional Near Infrared Spectroscopy: Enabling routine functional brain imaging. Current Opinion in Biomedical Engineering, 4, 78–86. doi:10.1016/j.cobme.2017.09.011
- Zhou, X., & Marslen-Wilson, W. D. (1999). Phonology, orthography, and semantic activation in reading Chinese. Journal of Memory and Language, 41, 579–606.
Related Articles
- Theoretical Phonology
- Contrastive Specification in Phonology
- Articulatory Phonetics
- Tone
- Speech Perception and Generalization Across Talkers and Accents
- Korean Phonetics and Phonology
- Clinical Linguistics
- Speech Perception in Phonetics
- Coarticulation
- Sign Language Phonology
- Second Language Phonetics
- The Phonetics of Babbling
- Direct Perception of Speech
- Formants
- Phonetics of Singing in Western Classical Style