
Article

Daniel Aalto, Jarmo Malinen, and Martti Vainio

Formant frequencies are the positions of the local maxima of the power spectral envelope of a sound signal. They arise from acoustic resonances of the vocal tract air column, and they provide substantial information about both consonants and vowels. In running speech, formants are crucial in signaling articulatory movements, particularly with respect to place of articulation. Formants are normally defined as accumulations of acoustic energy estimated from the spectral envelope of a signal. However, not all such peaks can be related to resonances in the vocal tract, as they can be caused by the acoustic properties of the environment outside the vocal tract, and sometimes resonances are not seen in the spectrum. Such formants are called spurious and latent, respectively. By analogy, spectral maxima of synthesized speech are called formants, although they arise from a digital filter. Conversely, speech processing algorithms can detect formants in natural or synthetic speech by modeling its power spectral envelope with a digital filter. Such detection is most successful for male speech with a low fundamental frequency, where many harmonic overtones excite each of the vocal tract resonances that lie at higher frequencies. For the same reason, reliable formant detection is inherently difficult for high-pitched female or children's speech, and many algorithms fail to faithfully detect the formants corresponding to the lowest vocal tract resonant frequencies.
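To make the digital-filter approach concrete, here is a minimal sketch of formant estimation by linear predictive coding (LPC), one standard way of modeling the power spectral envelope. The function name, model order, and framing are illustrative choices, not anything specified in the article.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def estimate_formants(frame, fs, order=10):
    """Rough formant candidates (Hz) from one 1-D speech frame via LPC."""
    frame = frame * np.hamming(len(frame))                # taper frame edges
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    # Autocorrelation normal equations R a = r give the predictor coefficients.
    a = solve_toeplitz(r[:order], r[1:order + 1])
    # Roots of A(z) = 1 - sum_k a_k z^(-k); each conjugate pair is one resonance.
    roots = np.roots(np.concatenate(([1.0], -a)))
    roots = roots[np.imag(roots) > 0]                     # one root per pair
    return np.sort(np.angle(roots) * fs / (2 * np.pi))    # pole angles -> Hz
```

Consistent with the caveat above, a sketch like this behaves tolerably on low-pitched male speech but degrades on high-pitched voices, whose widely spaced harmonics undersample the spectral envelope.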

Article

The Motor Theory of Speech Perception is a proposed explanation of the fundamental relationship between the way speech is produced and the way it is perceived. Associated primarily with the work of Liberman and colleagues, it posited the active participation of the motor system in the perception of speech. Early versions of the theory contained elements that later proved untenable, such as the expectation that the neural commands to the muscles (as seen in electromyography) would be more invariant than the acoustics. Support drawn from categorical perception (in which discrimination is quite poor within linguistic categories but excellent across boundaries) was called into question by studies showing means of improving within-category discrimination and finding similar results for nonspeech sounds and for animals perceiving speech. Evidence for motor involvement in perceptual processes nonetheless continued to accrue, and related motor theories have been proposed. Neurological and neuroimaging results have yielded a great deal of evidence consistent with variants of the theory, but they highlight the issue that there is no single “motor system,” and so different components appear in different contexts. Assigning the appropriate amount of effort to the various systems that interact to result in the perception of speech is an ongoing process, but it is clear that some of the systems will reflect the motor control of speech.

Article

Gerard Docherty

Sociophonetics research is located at the interface of sociolinguistics and experimental phonetics. Its primary focus is to shed new light on the social-indexical phonetic properties of speech, revealing a wide range of phonetic parameters that map systematically onto social factors relevant to speakers and listeners, and showing that many of these involve particularly fine-grained control of both the spatial and temporal dimensions of speech production. Recent methodological developments in acoustic and articulatory methods have yielded new insights into the nature of sociophonetic variation at the scale of entire speech communities as well as in the detailed speech production patterns of individual speakers. The key theoretical dimension of sociophonetic research is to consider how models of speech production, processing, and acquisition should be informed by rapidly increasing knowledge of the ubiquity of social-indexical phonetic variation carried by the speech signal. In particular, this work is focused on inferring from the performance of speakers and listeners how social-indexical phonetic properties are interwoven into phonological representation alongside those properties associated with the transmission and interpretation of lexical-propositional information.

Article

Edward Flemming

Dispersion Theory concerns the constraints that govern contrasts, the phonetic differences that can distinguish words in a language. Specifically, it posits that there are distinctiveness constraints that favor contrasts that are more perceptually distinct over less distinct contrasts. The preference for distinct contrasts is hypothesized to follow from a preference to minimize perceptual confusion: in order to recover what a speaker is saying, a listener must identify the words in the utterance. The more confusable words are, the more likely a listener is to make errors. Because contrasts are the minimal permissible differences between words in a language, banning indistinct contrasts reduces the likelihood of misperception. The term ‘dispersion’ refers to the separation of sounds in perceptual space that results from maximizing the perceptual distinctiveness of the contrasts between those sounds, and is adopted from Lindblom’s Theory of Adaptive Dispersion, a theory of phoneme inventories according to which inventories are selected so as to maximize the perceptual differences between phonemes. These proposals follow a long tradition of explaining cross-linguistic tendencies in the phonetic and phonological form of languages in terms of a preference for perceptually distinct contrasts. Flemming proposes that distinctiveness constraints constitute one class of constraints in an Optimality Theoretic model of phonology. In this context, distinctiveness constraints predict several basic phenomena, the first of which is the preference for maximal dispersion in inventories of contrasting sounds that first motivated the development of the Theory of Adaptive Dispersion. But distinctiveness constraints are formulated as constraints on the surface forms of possible words that interact with other phonological constraints, so they evaluate the distinctiveness of contrasts in context. As a result, Dispersion Theory predicts that contrasts can be neutralized or enhanced in particular phonological contexts. This prediction arises because the phonetic realization of sounds depends on their context, so the perceptual differences between contrasting sounds also depend on context. If the realization of a contrast in a particular context would be insufficiently distinct (i.e., it would violate a high-ranked distinctiveness constraint), there are two options: the offending contrast can be neutralized, or it can be modified (‘enhanced’) to make it more distinct. A basic open question regarding Dispersion Theory concerns the proper formulation of distinctiveness constraints and the extent of variation in their rankings across languages, issues that are tied up with questions about the nature of perceptual distinctiveness. Another concerns the size and nature of the comparison set of contrasting word-forms required to evaluate whether a candidate output satisfies distinctiveness constraints.
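As a rough illustration of the selection idea behind the Theory of Adaptive Dispersion, the sketch below scores candidate vowel inventories by their least distinct contrast, here simplified to Euclidean distance in raw (F1, F2) space. Actual proposals use auditory scales and richer distance measures; the formant values are approximate textbook figures, not data from the article.

```python
import itertools
import numpy as np

# Approximate (F1, F2) targets in Hz; illustrative values only.
VOWELS = {"i": (280, 2250), "e": (400, 2000), "a": (700, 1300),
          "o": (450, 800), "u": (310, 750)}

def min_distinctiveness(inventory):
    """Score an inventory by its least distinct contrast."""
    pts = [np.array(VOWELS[v], dtype=float) for v in inventory]
    return min(np.linalg.norm(p - q)
               for p, q in itertools.combinations(pts, 2))

# A dispersed corner-vowel system beats a crowded front-vowel system:
print(min_distinctiveness(["i", "a", "u"]))  # large minimum distance
print(min_distinctiveness(["i", "e", "a"]))  # small minimum distance
```

On this toy measure, /i a u/ outscores /i e a/, mirroring the cross-linguistic preference for dispersed inventories that distinctiveness constraints are meant to capture.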

Article

Speech production is an important aspect of linguistic competence. An attempt to understand linguistic morphology without speech production would be incomplete. A central research question develops from this perspective: what is the role of morphology in speech production? Speech production researchers collect many different types of data, and much of that data has informed how linguists and psycholinguists characterize the role of linguistic morphology in speech production. Models of speech production play an important role in the investigation of linguistic morphology. These models provide a framework that allows researchers to explore the role of morphology in speech production. However, models of speech production generally focus on different aspects of the production process. They are split between phonetic models (which attempt to understand how the brain creates motor commands for uttering and articulating speech) and psycholinguistic models (which attempt to understand the cognitive processes and representations involved in production). Models that merge these two types have the potential to allow researchers to make specific predictions about the effects of morphology on speech production. Many studies have explored models of speech production, but investigation of the role of morphology, and of how morphological properties may be represented in merged production models, remains limited.

Article

The tongue is composed entirely of soft tissue: muscle, fat, and connective tissue. This unusual composition and the tongue’s 3D muscle fiber orientation result in many degrees of freedom. The lack of bones and cartilage means that muscle shortening creates deformations, particularly local deformations, as the tongue moves into and out of speech gestures. The tongue is also surrounded by the hard structures of the oral cavity, which both constrain its motion and support the rapid small deformations that create speech sounds. Anatomical descriptors and categories of tongue muscles do not correlate with tongue function as speech movements use finely controlled co-contractions of antagonist muscles to move the oral structures during speech. Tongue muscle volume indicates that four muscles, the genioglossus, verticalis, transversus, and superior longitudinal, occupy the bulk of the tongue. They also comprise a functional muscle grouping that can shorten the tongue in the x, y, and z directions. Various 3D muscle shortening patterns produce large- or small-scale deformations in all directions of motion. The interdigitation of the tongue’s muscles is advantageous in allowing co-contraction of antagonist muscles and providing nimble deformational changes to move the tongue toward and away from any position.

Article

Ocke-Schwen Bohn

The study of second language phonetics is concerned with three broad and overlapping research areas: the characteristics of second language speech production and perception, the consequences of perceiving and producing nonnative speech sounds with a foreign accent, and the causes and factors that shape second language phonetics. Second language learners and bilinguals typically produce and perceive the sounds of a nonnative language in ways that are different from native speakers. These deviations from native norms can be attributed largely, but not exclusively, to the phonetic system of the native language. Non-nativelike speech perception and production may have both social consequences (e.g., stereotyping) and linguistic–communicative consequences (e.g., reduced intelligibility). Research on second language phonetics over the past ca. 30 years has resulted in a fairly good understanding of causes of nonnative speech production and perception, and these insights have to a large extent been driven by tests of the predictions of models of second language speech learning and of cross-language speech perception. It is generally accepted that the characteristics of second language speech are predominantly due to how second language learners map the sounds of the nonnative to the native language. This mapping cannot be entirely predicted from theoretical or acoustic comparisons of the sound systems of the languages involved, but has to be determined empirically through tests of perceptual assimilation. The most influential learner factors which shape how a second language is perceived and produced are the age of learning and the amount and quality of exposure to the second language. A very important and far-reaching finding from research on second language phonetics is that age effects are not due to neurological maturation which could result in the attrition of phonetic learning ability, but to the way phonetic categories develop as a function of experience with surrounding sound systems.

Article

Paul de Lacy

Phonology has both a taxonomic/descriptive and cognitive meaning. In the taxonomic/descriptive context, it refers to speech sound systems. As a cognitive term, it refers to a part of the brain’s ability to produce and perceive speech sounds. This article focuses on research in the cognitive domain. The brain does not simply record speech sounds and “play them back.” It abstracts over speech sounds, and transforms the abstractions in nontrivial ways. Phonological cognition is about what those abstractions are, and how they are transformed in perception and production. There are many theories about phonological cognition. Some theories see it as the result of domain-general mechanisms, such as analogy over a Lexicon. Other theories locate it in an encapsulated module that is genetically specified, and has innate propositional content. In production, this module takes as its input phonological material from a Lexicon, and refers to syntactic and morphological structure in producing an output, which involves nontrivial transformation. In some theories, the output is instructions for articulator movement, which result in speech sounds; in other theories, the output goes to the Phonetic module. In perception, a continuous acoustic signal is mapped onto a phonetic representation, which is then mapped onto underlying forms via the Phonological module, which are then matched to lexical entries. Exactly which empirical phenomena phonological cognition is responsible for depends on the theory. At one extreme, it accounts for all human speech sound patterns and realization. At the other extreme, it is little more than a way of abstracting over speech sounds. In the most popular Generative conception, it explains some sound patterns, with other modules (e.g., the Lexicon and Phonetic module) accounting for others. There are many types of patterns, with names such as “assimilation,” “deletion,” and “neutralization”—a great deal of phonological research focuses on determining which patterns there are, which aspects are universal and which are language-particular, and whether/how phonological cognition is responsible for them. Phonological computation connects with other cognitive structures. In the Generative T-model, the phonological module’s input includes morphs of Lexical items along with at least some morphological and syntactic structure; the output is sent to either a Phonetic module, or directly to the neuro-motor interface, resulting in articulator movement. However, other theories propose that these modules’ computation proceeds in parallel, and that there is bidirectional communication between them. The study of phonological cognition is a young science, so many fundamental questions remain to be answered. There are currently many different theories, and theoretical diversity over the past few decades has increased rather than consolidated. In addition, new research methods have been developed and older ones have been refined, providing novel sources of evidence. Consequently, phonological research is both lively and challenging, and is likely to remain that way for some time to come.

Article

Marie K. Huffman

Articulatory phonetics is concerned with the physical mechanisms involved in producing spoken language. A fundamental goal of articulatory phonetics is to relate linguistic representations to articulator movements in real time and the consequent acoustic output that makes speech a medium for information transfer. Understanding the overall process requires an appreciation of the aerodynamic conditions necessary for sound production and the way that the various parts of the chest, neck, and head are used to produce speech. One descriptive goal of articulatory phonetics is the efficient and consistent description of the key articulatory properties that distinguish sounds used contrastively in language. There is fairly strong consensus in the field about the inventory of terms needed to achieve this goal. Despite this common, segmental, perspective, speech production is essentially dynamic in nature. Much remains to be learned about how the articulators are coordinated for production of individual sounds and how they are coordinated to produce sounds in sequence. Cutting across all of these issues is the broader question of which aspects of speech production are due to properties of the physical mechanism and which are the result of the nature of linguistic representations. A diversity of approaches is used to try to tease apart the physical and the linguistic contributions to the articulatory fabric of speech sounds in the world’s languages. A variety of instrumental techniques are currently available, and improvement in safe methods of tracking articulators in real time promises to soon bring major advances in our understanding of how speech is produced.

Article

This chapter deals with the long-standing and ongoing debate over the very concept of ‘word’. It considers different definitions that have been advanced according to different theoretical positions. Thereafter, it examines various phenomena that are strictly bound to the ‘word’: compounds and multi-word expressions, word formation rules, word classes (or parts of speech), splinters, univerbation, and, finally, word blends.

Article

Research on visual and audiovisual speech information has profoundly influenced the fields of psycholinguistics, perception psychology, and cognitive neuroscience. Visual speech findings have provided some of the most important human demonstrations of our new conception of the perceptual brain as being supremely multimodal. This “multisensory revolution” has seen a tremendous growth in research on how the senses integrate, cross-facilitate, and share their experience with one another. The ubiquity and apparent automaticity of multisensory speech has led many theorists to propose that the speech brain is agnostic with regard to sense modality: it might not know or care from which modality speech information comes. Instead, the speech function may act to extract supramodal informational patterns that are common in form across energy streams. Alternatively, other theorists have argued that any common information existent across the modalities is minimal and rudimentary, so that multisensory perception largely depends on the observer’s associative experience between the streams. From this perspective, the auditory stream is typically considered primary for the speech brain, with visual speech simply appended to its processing. If the utility of multisensory speech is a consequence of supramodal informational coherence, then cross-sensory “integration” may be primarily a consequence of the informational input itself. If true, then one would expect to see evidence for integration occurring early in the perceptual process, as well as in a largely complete and automatic/impenetrable manner. Alternatively, if multisensory speech perception is based on associative experience between the modal streams, then no constraints on how completely or automatically the senses integrate are dictated. There is behavioral and neurophysiological research supporting both perspectives. Much of this research is based on testing the well-known McGurk effect, in which audiovisual speech information is thought to integrate to the extent that visual information can affect what listeners report hearing. However, there is now good reason to believe that the McGurk effect is not a valid test of multisensory integration. For example, there are clear cases in which responses indicate that the effect fails, while other measures suggest that integration is actually occurring. By mistakenly conflating the McGurk effect with speech integration itself, interpretations of the completeness and automaticity of multisensory integration may be incorrect. Future research should use more sensitive behavioral and neurophysiological measures of cross-modal influence to examine these issues.

Article

Matthew B. Winn and Peggy B. Nelson

Cochlear implants (CIs) are the most successful sensory implant in history, restoring the sensation of sound to thousands of persons who have severe to profound hearing loss. Implants do not recreate acoustic sound as most of us know it, but instead convey a rough representation of the temporal envelope of signals. This sparse signal, derived from the envelopes of narrowband frequency filters, is sufficient for enabling speech understanding in quiet environments for those who lose hearing as adults and is enough for most children to develop spoken language skills. The variability between users is huge, however, and is only partially understood. CIs provide acoustic information that is sufficient for the recognition of some aspects of spoken language, especially information that can be conveyed by temporal patterns, such as syllable timing, consonant voicing, and manner of articulation. They are insufficient for conveying pitch cues and separating speech from noise. There is a great need to improve our understanding of functional outcomes of CI success beyond measuring percent correct for word and sentence recognition. Moreover, greater understanding of the variability experienced by children, especially children and families from various social and cultural backgrounds, is of paramount importance. Future developments will no doubt expand the use of this remarkable device.
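The envelope-based signal described here is often demonstrated with a noise-vocoder simulation of CI processing. The sketch below, with an invented channel layout and filter order, splits a signal into bands, extracts each band's temporal envelope, and reimposes it on bandlimited noise; none of the specifics come from the article.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def noise_vocode(x, fs, edges=(100, 400, 1000, 2400, 6000)):
    """Crude CI-style simulation: per-band envelopes reimposed on noise.

    Assumes fs > 12 kHz so the top band edge stays below Nyquist.
    """
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        band = filtfilt(b, a, x)                           # analysis band
        env = np.abs(hilbert(band))                        # temporal envelope
        carrier = filtfilt(b, a, np.random.randn(len(x)))  # bandlimited noise
        out += env * carrier         # keep the envelope, discard fine structure
    return out / (np.max(np.abs(out)) + 1e-12)             # normalize
```

Listening to such output makes the abstract's point audible: syllable timing, voicing, and manner cues survive, while pitch and the cues that separate speech from noise largely do not.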

Article

Jack Sidnell

Conversation analysis is an approach to the study of social interaction and talk-in-interaction that, although rooted in the sociological study of everyday life, has exerted significant influence across the humanities and social sciences, including linguistics. Drawing on recordings (both audio and video) of naturalistic interaction (unscripted, non-elicited, etc.), conversation analysts attempt to describe the stable practices and underlying normative organizations of interaction by moving back and forth between the close study of singular instances and the analysis of patterns exhibited across collections of cases. Four important domains of research within conversation analysis are turn-taking, repair, action formation and ascription, and action sequencing.

Article

Louise Cummings

Clinical linguistics is the branch of linguistics that applies linguistic concepts and theories to the study of language disorders. As the name suggests, clinical linguistics is a dual-facing discipline. Although the conceptual roots of this field are in linguistics, its domain of application is the vast array of clinical disorders that may compromise the use and understanding of language. Both dimensions of clinical linguistics can be addressed through an examination of specific linguistic deficits in individuals with neurodevelopmental disorders, craniofacial anomalies, adult-onset neurological impairments, psychiatric disorders, and neurodegenerative disorders. Clinical linguists are interested in the full range of linguistic deficits in these conditions, including phonetic deficits of children with cleft lip and palate, morphosyntactic errors in children with specific language impairment, and pragmatic language impairments in adults with schizophrenia. Like many applied disciplines in linguistics, clinical linguistics sits at the intersection of a number of areas. Its relationships to the study of communication disorders and to speech-language pathology (speech and language therapy in the United Kingdom) are two particularly important points of intersection. Speech-language pathology is the area of clinical practice that assesses and treats children and adults with communication disorders. All language disorders restrict an individual’s ability to communicate freely with others in a range of contexts and settings, so language disorders are first and foremost communication disorders. To understand language disorders, it is useful to think of them in terms of points of breakdown on a communication cycle that tracks the progress of a linguistic utterance from its conception in the mind of a speaker to its comprehension by a hearer. This cycle permits the introduction of a number of important distinctions in language pathology, such as the distinction between a receptive and an expressive language disorder, and between a developmental and an acquired language disorder. The cycle is also a useful model with which to conceptualize a range of communication disorders other than language disorders. These other disorders, which include hearing, voice, and fluency disorders, are also relevant to clinical linguistics. Clinical linguistics draws on the conceptual resources of the full range of linguistic disciplines to describe and explain language disorders. These disciplines include phonetics, phonology, morphology, syntax, semantics, pragmatics, and discourse. Each of these linguistic disciplines contributes concepts and theories that can shed light on the nature of language disorder. A wide range of tools and approaches are used by clinical linguists and speech-language pathologists to assess, diagnose, and treat language disorders. They include the use of standardized and norm-referenced tests, communication checklists and profiles (some administered by clinicians, others by parents, teachers, and caregivers), and qualitative methods such as conversation analysis and discourse analysis. Finally, clinical linguists can contribute to debates about the nosology of language disorders. In order to do so, however, they must have an understanding of the place of language disorders in internationally recognized classification systems such as the 2013 Diagnostic and Statistical Manual of Mental Disorders (DSM-5) of the American Psychiatric Association.

Article

In the indigenous sociolinguistic systems of West Africa, an important way of expressing—and creating—social hierarchy in interaction is through intermediaries: third parties, through whom messages are relayed. The forms of mediation vary by region, by the scale of the social hierarchy, and by the ways hierarchy is locally understood. In larger-scale systems where hierarchy is elaborate, the interacting parties include a high-status person, a mediator who ranks lower, and a third person or group—perhaps another dignitary, but potentially anyone. In smaller-scale, more egalitarian societies, the (putative) interactants could include an authoritative spirit represented by a mask, the mask’s bearer, a “translator,” and an audience. In all these systems, mediated interactions may also involve distinctive registers or vocalizations. Meanwhile, the interactional structure and its characteristic ways of speaking offer tropes and resources for expressing politeness in everyday talk. In the traditions connected with precolonial kingdoms and empires, professional praise orators deliver eulogistic performances for their higher-status patrons. This role is understood as transmission—transmitting a message from the past, or from a group, or from another dignitary—more than as creating a composition from whole cloth. The transmitter amplifies and embellishes the message; he or she does not originate it. In addition to their formal public performances, these orators serve as interpreters and intermediaries between their patrons and their patrons’ visitors. Speech to the patron is relayed through the interpreter, even if the original speaker and the patron are in the same room. Social hierarchy is thus expressed as interactional distance. In the Sahel, these social hierarchies involve a division of labor, including communicative labor, in a complex system of ranked castes and orders. The praise orators, as professional experts in the arts of language and communication, are a separate, low-ranking category (known by the French term griot). Some features of griot performance style, and the contrasting—sometimes even disfluent—verbal conduct of high-ranking aristocrats, carry over into speech registers used by persons of any social category in situations evoking hierarchy (petitioning, for example). In indigenous state systems further south, professional orators are not a separate caste, and chiefs are also supposed to have verbal skills, although they still use intermediaries. Special honorific registers, such as the esoteric Akan “palace speech,” are used in the chief’s court. Some politeness forms in everyday Akan usage today echo these practices. An example of a small-scale society is the Bedik (Senegal-Guinea border), among whom masked dancers serve as the visible and auditory representation of spirit beings. The mask spirits, whose speech and conduct contrast with their bearers’ ordinary behavior, require “translators” to relay their messages to addressees. This too is mediated communication, involving a multi-party interactional structure as well as distinctive vocalizations. Linguistic repertoires in the Sahel have long included Arabic, and Islamic learning is another source of high status, coexisting with other traditional sources and sharing some interactional patterns. The European conquest brought European languages to the top of West African linguistic hierarchies, which have remained largely in place since independence.

Article

Yvan Rose, Laetitia Almeida, and Maria João Freitas

The study of the acquisition of productive phonological abilities by first-language learners of the Romance languages has largely focused on three main languages: French, Portuguese, and Spanish, including various dialects of these languages spoken in Europe as well as in the Americas. In this article, we provide a comparative survey of this literature, with an emphasis on representational phonology. We also include in our discussion observations from the development of Catalan and Italian, and mention areas where these languages, as well as Romanian, another major Romance language, would provide welcome additions to our cross-linguistic comparisons. Together, the various studies we summarize reveal intricate patterns of development, in particular concerning the acquisition of consonants across different positions within the syllable, the word, and in relation to stress, as documented in both monolingual and bilingual first-language learners. The patterns observed across the different languages and dialects can generally be traced to formal properties of phone distributions, as entailed by mainstream theories of phonological representation, with variations also predicted by more functional aspects of speech, including phonetic factors and usage frequency. These results call for further empirical studies of phonological development, in particular concerning Romanian, in addition to Catalan and Italian, whose phonological and phonetic properties offer compelling grounds for the formulation and testing of models of phonology and phonological development.

Article

D. H. Whalen

Phonetics is the branch of linguistics that deals with the physical realization of meaningful distinctions in spoken language. Phoneticians study the anatomy and physics of sound generation, acoustic properties of the sounds of the world’s languages, the features of the signal that listeners use to perceive the message, and the brain mechanisms involved in both production and perception. Therefore, phonetics connects most directly to phonology and psycholinguistics, but it also engages a range of disciplines that are not unique to linguistics, including acoustics, physiology, biomechanics, hearing, evolution, and many others. Early theorists assumed that phonetic implementation of phonological features was universal, but it has become clear that languages differ in their phonetic spaces for phonological elements, with systematic differences in acoustics and articulation. Such language-specific details place phonetics solidly in the domain of linguistics; any complete description of a language must include its specific phonetic realization patterns. The description of what phonetic realizations are possible in human language continues to expand as more languages are described; many of the under-documented languages are endangered, lending urgency to the phonetic study of the world’s languages. Phonetic analysis can consist of transcription, acoustic analysis, measurement of speech articulators, and perceptual tests, with recent advances in brain imaging adding detail at the level of neural control and processing. Because of its dual nature as a component of a linguistic system and a set of actions in the physical world, phonetics has connections to many other branches of linguistics, including not only phonology but syntax, semantics, sociolinguistics, and clinical linguistics as well. Speech perception has been shown to integrate information from both vision and tactile sensation, indicating an embodied system. Sign language, though primarily visual, has adopted the term “phonetics” to represent the realization component, highlighting the linguistic nature both of phonetics and of sign language. Such diversity offers many avenues for studying phonetics, but it presents challenges to forming a comprehensive account of any language’s phonetic system.

Article

Kodi Weatherholtz and T. Florian Jaeger

The seeming ease with which we usually understand each other belies the complexity of the processes that underlie speech perception. One of the biggest computational challenges is that different talkers realize the same speech categories (e.g., /p/) in physically different ways. We review the mixture of processes that enable robust speech understanding across talkers despite this lack of invariance. These processes range from automatic pre-speech adjustments of the distribution of energy over acoustic frequencies (normalization) to implicit statistical learning of talker-specific properties (adaptation, perceptual recalibration) to the generalization of these patterns across groups of talkers (e.g., gender differences).
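A classic concrete instance of the normalization the authors mention is Lobanov's (1971) within-talker z-scoring of formant values, which removes much of the between-talker variation due to vocal tract size. The sketch below is a minimal illustration; the data layout and values are invented for the example.

```python
import numpy as np

def lobanov(formants):
    """Z-score a talker's (F1, F2) values per formant (Lobanov 1971)."""
    f = np.asarray(formants, dtype=float)
    return (f - f.mean(axis=0)) / f.std(axis=0)

# Hypothetical (F1, F2) measurements in Hz for one talker's vowel tokens:
talker_a = [(300, 2300), (700, 1400), (320, 800)]
print(lobanov(talker_a))  # same relative vowel layout, talker-neutral units
```

Applied to two talkers separately, the procedure maps physically different realizations of the same category onto comparable coordinates, which is the essence of overcoming the lack of invariance described above.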

Article

Phonetic transcription represents the phonetic properties of an actual or potential utterance in a written form. Firstly, it is necessary to have an understanding of what the phonetic properties of speech are. It is the role of phonetic theory to provide that understanding by constructing a set of categories that can account for the phonetic structure of speech at both the segmental and suprasegmental levels; how far it does so is a measure of its adequacy as a theory. Secondly, a set of symbols is needed that stand for these categories. Also required is a set of conventions that tell the reader what the symbols stand for. A phonetic transcription, then, can be said to represent a piece of speech in terms of the categories denoted by the symbols. Machine-readable phonetic and prosodic notation systems can be implemented in electronic speech corpora, where multiple linguistic information tiers, such as text and phonetic transcriptions, are mapped to the speech signal. Such corpora are essential resources for automated speech recognition and speech synthesis.
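The mapping of annotation tiers to the signal can be pictured with a minimal, Praat-TextGrid-style data structure: labeled intervals with start and end times on named tiers. All names and sample values below are invented for illustration, not a standard drawn from the article.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    start: float   # seconds into the recording
    end: float
    label: str     # e.g., an IPA symbol or an orthographic word

# Two time-aligned tiers over the same stretch of signal:
tiers = {
    "word":  [Interval(0.00, 0.43, "speech")],
    "phone": [Interval(0.00, 0.09, "s"), Interval(0.09, 0.17, "p"),
              Interval(0.17, 0.31, "i"), Interval(0.31, 0.43, "tʃ")],
}

# Which phone is sounding at t = 0.20 s?
phone_at = next(i.label for i in tiers["phone"] if i.start <= 0.20 < i.end)
print(phone_at)  # -> i
```

Aligned structures of this kind are what let speech recognition and synthesis systems pair symbolic phonetic categories with stretches of the acoustic signal.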

Article

The Ancient Greeks came into contact with possibilities and problems related to ‘language’ in several respects. The earliest epics contained implicit etymological explanations, and both the pre-Socratic philosophers and the sophists were intrigued by the link between the form of words and the meaning they carried. The adaptation of the Phoenician alphabet was an additional stimulus to start reflecting on language. ‘Letters’ became the smallest unit of inquiry in Greek language thought. Of the other units, the word was seen as the most significant level. Elaborating on the philosophical foundations laid by Plato, Aristotle, and early Stoic thinkers, Alexandrian scholars started shaping a philologically oriented tradition of grammar, which was largely oriented to the study of the eight parts of speech and directed at young students of Greek literature. Within the frame of grammar, less attention was paid to the level of the sentence, which explains why syntactic issues were not intensively explored. At its inception, Greek lexicography, too, was an ancillary tool for understanding Greek literary texts, directed at an audience of native speakers of Greek. Hence, lexicographical projects were limited to including difficult or special words. Only once Romans began to delve into the study of Greek did the composition of general lexicons become more urgent.