Article
Kodi Weatherholtz and T. Florian Jaeger
The seeming ease with which we usually understand each other belies the complexity of the processes that underlie speech perception. One of the biggest computational challenges is that different talkers realize the same speech categories (e.g., /p/) in physically different ways. We review the mixture of processes that enable robust speech understanding across talkers despite this lack of invariance. These processes range from automatic pre-speech adjustments of the distribution of energy over acoustic frequencies (normalization) to implicit statistical learning of talker-specific properties (adaptation, perceptual recalibration) to the generalization of these patterns across groups of talkers (e.g., gender differences).
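One widely used instance of talker normalization, offered here as an illustrative sketch rather than as the specific procedure reviewed in the article, is Lobanov (z-score) normalization of vowel formants, which removes talker-specific differences in formant mean and range; the formant values below are invented.

```python
from statistics import mean, stdev

def lobanov_normalize(formants):
    """Z-score a talker's formant values (in Hz), removing
    talker-specific differences in mean and range."""
    mu, sigma = mean(formants), stdev(formants)
    return [(f - mu) / sigma for f in formants]

# Two hypothetical talkers producing the "same" three vowels:
talker_a_f1 = [300.0, 500.0, 700.0]   # lower overall F1 range
talker_b_f1 = [390.0, 650.0, 910.0]   # higher overall F1 range

# After normalization, the two talkers' vowel spaces line up:
print(lobanov_normalize(talker_a_f1))  # → [-1.0, 0.0, 1.0]
print(lobanov_normalize(talker_b_f1))  # → [-1.0, 0.0, 1.0]
```

The point of the sketch is only that physically different signals can map onto identical talker-relative values, which is one way to think about the "lack of invariance" problem.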
Article
Patrice Speeter Beddor
In their conversational interactions with speakers, listeners aim to understand what a speaker is saying; that is, they aim to arrive at the linguistic message conveyed by the input speech signal, a message interwoven with social and other information. Across the more than 60 years of speech perception research, a foundational issue has been to account for listeners’ ability to achieve stable linguistic percepts corresponding to the speaker’s intended message despite highly variable acoustic signals. Research has especially focused on acoustic variants attributable to the phonetic context in which a given phonological form occurs and on variants attributable to the particular speaker who produced the signal. These context- and speaker-dependent variants reveal the complex—albeit informationally rich—patterns that bombard listeners in their everyday interactions.
How do listeners deal with these variable acoustic patterns? Empirical studies that address this question provide clear evidence that perception is a malleable, dynamic, and active process. Findings show that listeners perceptually factor out, or compensate for, the variation due to context yet also use that same variation in deciding what a speaker has said. Similarly, listeners adjust, or normalize, for the variation introduced by speakers who differ in their anatomical and socio-indexical characteristics, yet listeners also use that socially structured variation to facilitate their linguistic judgments. Investigations of the time course of perception show that these perceptual accommodations occur rapidly, as the acoustic signal unfolds in real time. Thus, listeners closely attend to the phonetic details made available by different contexts and different speakers. The structured, lawful nature of this variation informs perception.
Speech perception changes over time not only in listeners’ moment-by-moment processing, but also across the life span of individuals as they acquire their native language(s), non-native languages, and new dialects and as they encounter other novel speech experiences. These listener-specific experiences contribute to individual differences in perceptual processing. However, even listeners from linguistically homogeneous backgrounds differ in their attention to the various acoustic properties that simultaneously convey linguistically and socially meaningful information. The nature and source of listener-specific perceptual strategies serve as an important window on perceptual processing and on how that processing might contribute to sound change.
Theories of speech perception aim to explain how listeners interpret the input acoustic signal as linguistic forms. A theoretical account should specify the principles that underlie accurate, stable, flexible, and dynamic perception as achieved by different listeners in different contexts. Current theories differ in their conception of the nature of the information that listeners recover from the acoustic signal, with one fundamental distinction being whether the recovered information is gestural or auditory. Current approaches also differ in their conception of the nature of phonological representations in relation to speech perception, although there is increasing consensus that these representations are more detailed than the abstract, invariant representations of traditional formal phonology. Ongoing work in this area investigates how both abstract information and detailed acoustic information are stored and retrieved, and how best to integrate these types of information in a single theoretical model.
Article
Thomas W. Stewart
Segment-level alternations that realize morphological properties or that have other morphological significance stand either at an interface or along a continuum between phonology and morphology. The typical source for morphologically correlated sound alternations is the automatic phonology, interacting with discrete morphological operations such as affixation. Traditional morphophonology depends on the association of an alternation with a distinct concatenative marker, but the rise of stem changes that are in themselves morphological markers, be they inflectional or derivational, resides in the fading of phonetic motivation in the conditioning environment, and thus an increase in independence from historical phonological sources. The clearest cases are sole-exponent alternations, such as English man~men or slide~slid, but it is not necessary that the remainder of an earlier conditioning affix be entirely absent, only that synchronic conditioning is fully opaque. Once a sound-structural pattern escapes the unexceptional workings of a language's general phonological patterning, yet reliably serves a signifying function for one or more morphological properties, the morphological component of the grammar bears a primary if not sole responsibility for accounting for the pattern’s distribution.
It is not uncommon for the transition of analysis from (morpho)phonology into morphology to be a fitful one. There is an established tendency for phonological theory to hold sway in matters of sound generally, even at the expense of challenging learnability through the introduction of remote representations, ad hoc triggering devices, or putative rules of phonology of very limited generality. On the morphological side, a bias in favor of separable morpheme-like units and syntax-like concatenative dynamics has relegated relations like stem alternations to the margins, no matter how regular or productive they are, or how distinct from the general phonological patterns of the language in question. This parallel focus of each component on its own "specialization," as it were, has left precisely those morphologically significant stem alternations, such as Germanic Ablaut and Celtic initial-consonant mutation, poorly served. In both families, these robust sound patterns generally lack reliable synchronic phonological conditioning. Instead, one must crucially refer to grammatical structure and morphological properties in order to account for their distributions. It is no coincidence that such stem alternations look phonological, just as fossils resemble the forms of the organisms that left them. The work of morphology likewise does not depend on alternant segments sharing aspects of sound, but the salience of the system may benefit from perceptible coherence of form. One may observe what sound relations exist between stem alternants, but it is neither necessary nor realistic to oblige a speaker/learner to generate established stem alternations anew from remote underlying representations, as if the alternations were always still arising; to do so grafts the technique of internal reconstruction onto the synchronic grammar as a recapitulating simulation.
Article
Stela Manova
Subtraction consists in shortening the phonological shape of a word. It operates on morphological bases such as roots, stems, and words in word-formation and inflection. Cognitively, subtraction is the opposite of affixation: the latter adds meaning and form (an overt affix) to roots, stems, or words, while the former adds meaning through the subtraction of form. As subtraction and affixation work at the same level of grammar (morphology), they sometimes compete for the expression of the same semantics in the same language; for example, the German pattern ‘science—scientist’ has both affixal derivations such as Physik ‘physics’—Physik-er ‘physicist’ and subtractive ones such as Astronom-ie ‘astronomy’—Astronom ‘astronomer’. Subtraction can delete phonemes and morphemes. In the case of phoneme deletion, it is usually the final phoneme of a morphological base that is deleted, and that phoneme may coincide with a morpheme.
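The complementary relation between affixation and subtraction described above can be sketched in a few lines of Python, using the German examples from the text; segments are simplified to orthographic letters, and the two functions are an illustration, not a formal analysis.

```python
def affix(base, suffix):
    """Affixation: add meaning by adding form (an overt affix)."""
    return base + suffix

def subtract(base, n=1):
    """Subtraction: add meaning by deleting form,
    here the final n segments of the base."""
    return base[:-n]

# German 'science' -> 'scientist' (examples from the text):
print(affix("Physik", "er"))       # → Physiker   (Physik -> Physik-er)
print(subtract("Astronomie", 2))   # → Astronom   (Astronom-ie -> Astronom)
```

In the subtractive case, the deleted material (-ie) happens to coincide with a morpheme elsewhere in the language, which is exactly the situation discussed below.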
Some analyses of subtraction(-like shortenings) rely not on morphological units (roots, stems, morphological words, affixes) but on the phonological word, which sometimes results in alternative definitions of subtraction. Additionally, syntax-based theories of morphology that do not recognize a morphological component of grammar and operate only with additive syntactic rules claim that subtraction actually consists in the addition of defective phonological material that causes adjustments in phonology and leads to deletion of form on the surface. Other scholars postulate subtraction only if the deleted material does not coincide with an existing morpheme elsewhere in the language; if it does, they call the change backformation. There is also some controversy regarding what counts as a proper word-formation process and whether what is derived by subtraction is true word-formation or just marginal or extragrammatical morphology; that is, the question is whether shortenings such as hypocoristics and clippings should be treated on a par with derivations such as the science—scientist pattern.
Finally, research on subtraction also faces terminological issues, in the sense that different labels have been used in the literature to refer to subtraction(-like) formations: minus feature, minus formation, disfixation, subtractive morph, (subtractive) truncation, backformation, or simply shortening.
Article
Sónia Frota and Marina Vigário
The syntax–phonology interface refers to the way syntax and phonology are interconnected. Although syntax and phonology constitute different language domains, it seems undisputed that they relate to each other in nontrivial ways. There are different theories about the syntax–phonology interface. They differ in how far each domain is seen as relevant to generalizations in the other domain, and in the types of information from each domain that are available to the other.
Some theories see the interface as unlimited in the direction and types of syntax–phonology connections, with syntax impacting on phonology and phonology impacting on syntax. Other theories constrain mutual interaction to a set of specific syntactic phenomena (i.e., discourse-related) that may be influenced by a limited set of phonological phenomena (namely, heaviness and rhythm). In most theories, there is an asymmetrical relationship: specific types of syntactic information are available to phonology, whereas syntax is phonology-free.
The role that syntax plays in phonology, as well as the types of syntactic information that are relevant to phonology, is also a matter of debate. At one extreme, Direct Reference Theories claim that phonological phenomena, such as external sandhi processes, refer directly to syntactic information. However, approaches arguing for a direct influence of syntax differ on the types of syntactic information needed to account for phonological phenomena, from syntactic heads and structural configurations (like c-command and government) to feature checking relationships and phase units. The precise syntactic information that is relevant to phonology may depend on (the particular version of) the theory of syntax assumed to account for syntax–phonology mapping. At the other extreme, Prosodic Hierarchy Theories propose that syntactic and phonological representations are fundamentally distinct and that the output of the syntax–phonology interface is prosodic structure. Under this view, phonological phenomena refer to the phonological domains defined in prosodic structure. The structure of phonological domains is built from the interaction of a limited set of syntactic information with phonological principles related to constituent size, weight, and eurhythmic effects, among others. The kind of syntactic information used in the computation of prosodic structure distinguishes between different Prosodic Hierarchy Theories: the relation-based approach makes reference to notions like head-complement, modifier-head relations, and syntactic branching, while the end-based approach focuses on edges of syntactic heads and maximal projections. Common to both approaches is the distinction between lexical and functional categories, with the latter being invisible to the syntax–phonology mapping. Besides accounting for external sandhi phenomena, prosodic structure interacts with other phonological representations, such as metrical structure and intonational structure.
As shown by the theoretical diversity, the study of the syntax–phonology interface raises many fundamental questions. A systematic comparison among proposals with reference to empirical evidence is lacking. In addition, findings from language acquisition and development and from language processing constitute novel sources of evidence that need to be taken into account. The syntax–phonology interface will thus remain a challenging research field in the years to come.
Article
Erich R. Round
The non-Pama-Nyungan, Tangkic languages were spoken until recently in the southern Gulf of Carpentaria, Australia. The most extensively documented are Lardil, Kayardild, and Yukulta. Their phonology is notable for its opaque, word-final deletion rules and extensive word-internal sandhi processes. The morphology contains complex relationships between sets of forms and sets of functions, due in part to major historical refunctionalizations, which have converted case markers into markers of tense and complementization and verbal suffixes into case markers. Syntactic constituency is often marked by inflectional concord, frequently resulting in affix stacking. Yukulta in particular possesses a rich set of inflection-marking possibilities for core arguments, including detransitivized configurations and an inverse system. These relate in interesting ways historically to argument marking in Lardil and Kayardild. Subordinate clauses are marked for tense across most constituents other than the subject, and such tense marking is also found in main clauses in Lardil and Kayardild, which have lost the agreement and tense-marking second-position clitic of Yukulta. Under specific conditions of co-reference between matrix and subordinate arguments, and under certain discourse conditions, clauses may be marked, on all or almost all words, by complementization markers, in addition to inflection for case and tense.
Article
This article introduces two phenomena studied within the domain of templatic morphology—clippings and word-and-pattern morphology, the latter usually associated with Semitic morphology. In both cases, the words are of invariant shape, sharing a prosodic structure defined in terms of number of syllables. This prosodic template, being the core of the word structure, is often accompanied by one or more of the following properties: syllable structure, vocalic pattern, and an affix. The data in this article, drawn from different languages, display the various ways in which these structural properties combine to determine the surface structure of the word. The invariant shape of Japanese clippings (e.g., suto ← sutoraiki ‘strike’) consists of a prosodic template alone, while that of English hypocoristics (e.g., Trudy ← Gertrude) consists of a prosodic template plus the suffix -i. The Arabic verb classes, such as class-I (e.g., sakan ‘to live’) and class-II (e.g., misek ‘to hold’), display a prosodic template plus a vocalic pattern, and the Hebrew verb class-III (e.g., hivdil ‘to distinguish’) displays a prosodic template, a vocalic pattern, and a prefix. Given these structural properties, the relation between a base and its derived form is expressed in terms of stem modification, which involves truncation (for the prosodic template) and melodic overwriting (for the vocalic pattern). The discussion in this article suggests that templatic morphology is not limited to a particular lexicon type (core or periphery) but displays different degrees of restrictiveness.
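The two stem-modification operations named above, truncation and melodic overwriting, can be sketched as toy functions; only the sutoraiki → suto clipping is taken from the text, and the vocalic-pattern example at the end is purely schematic, not an attested derivation.

```python
def truncate(base, n):
    """Truncation: trim the base to fit a prosodic template,
    crudely modeled here as keeping the first n segments."""
    return base[:n]

def melodic_overwrite(stem, melody, vowels="aeiou"):
    """Melodic overwriting: replace the stem's vowels with a new
    vocalic pattern while leaving the consonants in place."""
    it = iter(melody)
    return "".join(next(it) if ch in vowels else ch for ch in stem)

# Japanese clipping from the text: sutoraiki -> suto (a fixed short template)
print(truncate("sutoraiki", 4))        # → suto

# Schematic vocalic-pattern change on a CVCVC stem (illustrative only):
print(melodic_overwrite("sakan", "ie"))  # → siken
```

The sketch shows only the division of labor: the template fixes the word's size, and the melody fixes its vowels.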
Article
Paul de Lacy
Phonology has both a taxonomic/descriptive and cognitive meaning. In the taxonomic/descriptive context, it refers to speech sound systems. As a cognitive term, it refers to a part of the brain’s ability to produce and perceive speech sounds. This article focuses on research in the cognitive domain.
The brain does not simply record speech sounds and “play them back.” It abstracts over speech sounds, and transforms the abstractions in nontrivial ways. Phonological cognition is about what those abstractions are, and how they are transformed in perception and production.
There are many theories about phonological cognition. Some theories see it as the result of domain-general mechanisms, such as analogy over a Lexicon. Other theories locate it in an encapsulated module that is genetically specified, and has innate propositional content. In production, this module takes as its input phonological material from a Lexicon, and refers to syntactic and morphological structure in producing an output, which involves nontrivial transformation. In some theories, the output is instructions for articulator movement, which result in speech sounds; in other theories, the output goes to the Phonetic module. In perception, a continuous acoustic signal is mapped onto a phonetic representation, which is then mapped onto underlying forms via the Phonological module, which are then matched to lexical entries.
Exactly which empirical phenomena phonological cognition is responsible for depends on the theory. At one extreme, it accounts for all human speech sound patterns and realization. At the other extreme, it is little more than a way of abstracting over speech sounds. In the most popular Generative conception, it explains some sound patterns, with other modules (e.g., the Lexicon and Phonetic module) accounting for others. There are many types of patterns, with names such as “assimilation,” “deletion,” and “neutralization”—a great deal of phonological research focuses on determining which patterns there are, which aspects are universal and which are language-particular, and whether/how phonological cognition is responsible for them.
Phonological computation connects with other cognitive structures. In the Generative T-model, the phonological module’s input includes morphs of Lexical items along with at least some morphological and syntactic structure; the output is sent to either a Phonetic module, or directly to the neuro-motor interface, resulting in articulator movement. However, other theories propose that these modules’ computation proceeds in parallel, and that there is bidirectional communication between them.
The study of phonological cognition is a young science, so many fundamental questions remain to be answered. There are currently many different theories, and theoretical diversity over the past few decades has increased rather than consolidated. In addition, new research methods have been developed and older ones have been refined, providing novel sources of evidence. Consequently, phonological research is both lively and challenging, and is likely to remain that way for some time to come.
Article
Amalia Arvaniti
Prosody is an umbrella term used to cover a variety of interconnected and interacting phenomena, namely stress, rhythm, phrasing, and intonation. The phonetic expression of prosody relies on a number of parameters, including duration, amplitude, and fundamental frequency (F0). The same parameters are also used to encode lexical contrasts (such as tone), as well as paralinguistic phenomena (such as anger, boredom, and excitement). Further, the exact function and organization of the phonetic parameters used for prosody differ across languages. These considerations make it imperative to distinguish the linguistic phenomena that make up prosody from their phonetic exponents, and similarly to distinguish between the linguistic and paralinguistic uses of the latter. A comprehensive understanding of prosody relies on the idea that speech is prosodically organized into phrasal constituents, the edges of which are phonetically marked in a number of ways, for example, by articulatory strengthening in the beginning and lengthening at the end. Phrases are also internally organized either by stress, that is, around syllables that are more salient relative to others (as in English and Spanish), or by the repetition of a relatively stable tonal pattern over short phrases (as in Korean, Japanese, and French). Both types of organization give rise to rhythm, the perception of speech as consisting of groups of a similar and repetitive pattern. Tonal specification over phrases is also used for intonation purposes, that is, to mark phrasal boundaries and to express information structure and pragmatic meaning. Taken together, the components of prosody help with the organization and planning of speech, while prosodic cues are used by listeners during both language acquisition and speech processing.
Importantly, prosody does not operate independently of segments; rather, it profoundly affects segment realization, making the incorporation of an understanding of prosody into experimental design essential for most phonetic research.
Article
Bert Remijsen
When the phonological form of a morpheme—a unit of meaning that cannot be decomposed further into smaller units of meaning—involves a particular melodic pattern as part of its sound shape, this morpheme is specified for tone. In view of this definition, phrase- and utterance-level melodies—also known as intonation—are not to be interpreted as instances of tone. That is, whereas the question “Tomorrow?” may be uttered with a rising melody, this melody is not tone, because it is not a part of the lexical specification of the morpheme tomorrow. A language in which morphemes are lexically specified for particular melodies is called a tone language. This does not mean that every morpheme, content word, or syllable in a tone language is specified for tone. Tonal specification can be highly restricted within the lexicon. Examples of such sparsely specified tone languages include Swedish, Japanese, and Ekagi (a language spoken in the Indonesian part of New Guinea); in these languages, only some syllables in some words are specified for tone. There are also tone languages where each and every syllable of each and every word has a specification. Vietnamese and Shilluk (a language spoken in South Sudan) illustrate this configuration. Tone languages also vary greatly in terms of the inventory of phonological tone forms. The smallest possible inventory contrasts one specification with the absence of specification. But there are also tone languages with eight or more distinctive tone categories. The physical (acoustic) realization of the tone categories is primarily fundamental frequency (F0), which is perceived as pitch. However, other phonetic correlates are often involved as well, in particular voice quality. Tone plays a prominent role in the study of phonology because of its structural complexity.
That is, in many languages, the way a tone surfaces is conditioned by factors such as the segmental composition of the morpheme, the tonal specifications of surrounding constituents, morphosyntax, and intonation. On top of this, tone is diachronically unstable. This means that, when a language has tone, we can expect to find considerable variation between dialects, and more of it than in relation to other parts of the sound system.
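Conditioning of surface tone by surrounding tonal specifications, as described above, can be illustrated with a simplified, Mandarin-style third-tone sandhi rule. The rule below is a standard textbook example chosen purely for illustration (it is not discussed in the article), implemented as a single left-to-right pass.

```python
def apply_sandhi(tones):
    """Mandarin-style third-tone sandhi, simplified: a tone 3
    becomes tone 2 when immediately followed by another tone 3.
    Tones are represented as integers; one left-to-right pass."""
    out = list(tones)
    for i in range(len(out) - 1):
        if out[i] == 3 and out[i + 1] == 3:
            out[i] = 2
    return out

print(apply_sandhi([3, 3]))     # → [2, 3]   (e.g., underlying 3-3 surfaces as 2-3)
print(apply_sandhi([1, 3]))     # → [1, 3]   (no adjacent tone-3 pair, no change)
```

Even this toy rule shows the key point: the lexically specified tone and the tone that surfaces need not coincide, and the difference is conditioned by context.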
Article
Alexis Michaud and Bonny Sands
Tonogenesis is the development of distinctive tone from earlier non-tonal contrasts. A well-understood case is that of Vietnamese (similar in its essentials to the tonogenesis of Chinese and of many languages of the Tai-Kadai and Hmong-Mien families), where the loss of final laryngeal consonants led to the creation of three tones, and the tones later multiplied as voicing oppositions on initial consonants waned. This is by no means the only attested diachronic scenario, however. Besides well-known cases of tonogenesis in East Asia, this survey includes discussions of less well-known cases from language families including Athabaskan, Chadic, Khoe, and Niger-Congo. There is tonogenetic potential in various series of phonemes: glottalized versus plain consonants, unvoiced versus voiced, aspirated versus unaspirated, geminate versus simple (and, more generally, tense versus lax), and even among vowels, whose intrinsic fundamental frequency can transphonologize to tone. We draw attention to tonogenetic triggers that are less well known, such as [+ATR] vowels, aspirates, and morphotonological alternations. The ways in which these common phonetic precursors to tone play out in a given language depend on phonological factors, as well as on other dimensions of a language’s structure and on patterns of language contact, resulting in a great diversity of evolutionary paths in tone systems. In some language families (such as Niger-Congo and Khoe), recent tonal developments are increasingly well understood, but working out the origin of the earliest tonal contrasts (which are likely to date back thousands of years earlier than tonogenesis among Sino-Tibetan languages, for instance) remains a mid- to long-term goal for comparative-historical research.
Article
Silvio Cruschina
Topic and topicalization are key notions for understanding processes of syntactic and prosodic readjustment in Romance. More specifically, topicalization refers to the syntactic mechanisms and constructions available in a language to mark an expression as the topic of the sentence. Despite the lack of a uniform definition of topic (existing definitions are often based on the notions of aboutness or givenness), significant advances have been made in Romance linguistics since the 1990s, yielding a better understanding of topicalization constructions, their properties, and their grammatical correlates. Prosodically, topics are generally described as being contained in independent intonational phrases. The syntactic and pragmatic characteristics of a specific topicalization construction, by contrast, depend both on the form of resumption of the dislocated topic within the clause and on the type of topic (aboutness, given, and contrastive topics). We can thus distinguish between hanging-topic left-dislocation (HTLD) and clitic left-dislocation (ClLD) for sentence-initial topics, and clitic right-dislocation (ClRD) for sentence-final dislocated constituents. These topicalization constructions are available in most Romance languages, although variation may affect the type and the obligatoriness of the resumptive element.
Scholars working on topic and topicalization in the Romance languages have also addressed controversial issues such as the relation between topics and subjects, both grammatical (nominative) subjects and ‘oblique’ subjects such as dative experiencers and locative expressions. Moreover, topicalization has been discussed for medieval Romance, in conjunction with its alleged V2 syntactic status. Some topicalization constructions, such as subject inversion (especially in the non-null subject Romance languages) and Resumptive Preposing, may indeed be viewed as potential residues of the medieval V2 property in contemporary Romance.
Article
Claire Brierley and Barry Heselwood
Phonetic transcription represents the phonetic properties of an actual or potential utterance in a written form. Firstly, it is necessary to have an understanding of what the phonetic properties of speech are. It is the role of phonetic theory to provide that understanding by constructing a set of categories that can account for the phonetic structure of speech at both the segmental and suprasegmental levels; how far it does so is a measure of its adequacy as a theory. Secondly, a set of symbols is needed that stand for these categories. Also required is a set of conventions that tell the reader what the symbols stand for. A phonetic transcription, then, can be said to represent a piece of speech in terms of the categories denoted by the symbols. Machine-readable phonetic and prosodic notation systems can be implemented in electronic speech corpora, where multiple linguistic information tiers, such as text and phonetic transcriptions, are mapped to the speech signal. Such corpora are essential resources for automated speech recognition and speech synthesis.
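The mapping of transcription tiers to the speech signal in such corpora is commonly implemented as time-aligned interval tiers (as in Praat TextGrid files, for example); below is a minimal sketch with invented labels and timings.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    start: float   # seconds into the signal
    end: float
    label: str     # a transcription symbol, e.g., a SAMPA phone label

def label_at(tier, t):
    """Return the transcription label covering time t, or None."""
    for iv in tier:
        if iv.start <= t < iv.end:
            return iv.label
    return None

# A toy phone tier for the word "cat", time-aligned to the speech signal:
phones = [Interval(0.00, 0.06, "k"),
          Interval(0.06, 0.18, "ae"),
          Interval(0.18, 0.25, "t")]

print(label_at(phones, 0.10))   # → ae
```

Multiple such tiers (orthographic, phonemic, prosodic) over the same time axis give the layered annotation that speech recognition and synthesis systems train on.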
Article
Davide Ricca
The Romance languages, despite their overall similarity, display interesting internal diversity, which can be captured only very partially by looking at the six major standard languages, as typological databases often do. This diversity spans all levels of linguistic analysis, from phonology to morphology and syntax. Rather than presenting a long list of features, with no space to go much beyond their mere mention, the article focuses on four main areas in somewhat more detail, developing, if only minimally, a discussion of their theoretical and methodological import.
Comparison with the worldwide typological background provided by WALS Online shows that the differences within Romance may reach the level of general typological relevance. While this is probably not the case in their rather mainstream segmental phonology, it surely holds for nominal pluralization and the syntax of negation, both areas in which the Romance languages have often distanced themselves quite significantly from their common ancestor, Latin. The morphological marking of nominal plural displays four of the seven values recorded in WALS, adding a further one unattested there, namely subtraction; the negation strategies, although uniformly particle-like, cover all five values found in WALS concerning linear order. Finally, the Romance languages raise several intriguing issues related to head-marking and dependent-marking constructions, again innovating against the substantially dependent-marking uniformity characteristic of Latin.
Article
Harry van der Hulst
The subject of this article is vowel harmony. In its prototypical form, this phenomenon involves agreement between all vowels in a word for some phonological property (such as palatality, labiality, height, or tongue root position). This agreement is evidenced by co-occurrence patterns within morphemes and by alternations in vowels when morphemes are combined into complex words, thus creating allomorphic alternations. Agreement involves one or more harmonic features for which vowels form two harmonic sets, such that each vowel in one set has a harmonic counterpart in the other. The article focuses on vowels that fail to alternate and are thus neutral (either inherently or in a specific context), being either opaque or transparent to the process. It compares approaches that use underspecification of binary features with approaches that use unary features. In vowel harmony, vowels are either triggers or targets, and specific conditions may apply to each. Vowel harmony can be bidirectional or unidirectional and can display either a root-control pattern or a dominant/recessive pattern.
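A root-control harmony pattern with a neutral, transparent vowel can be sketched with a toy alternating suffix. The vowel inventory below is invented and only loosely modeled on backness harmony; treating i as transparent is an illustrative simplification (as in Hungarian-like systems), not a claim about any particular language.

```python
BACK, FRONT = set("aou"), set("e")   # toy inventory; 'i' is neutral

def plural(stem):
    """Attach a harmonizing plural suffix (-lar/-ler): the alternant
    agrees in backness with the last harmonic vowel of the stem;
    the neutral vowel 'i' is transparent (root-control, toy model)."""
    for ch in reversed(stem):
        if ch in BACK:
            return stem + "lar"
        if ch in FRONT:
            return stem + "ler"
    return stem + "ler"   # all-neutral stems default to the front alternant

print(plural("adam"))   # → adamlar  (back stem selects -lar)
print(plural("ev"))     # → evler    (front stem selects -ler)
print(plural("kapi"))   # → kapilar  ('i' is transparent; harmony set by 'a')
```

The third example shows transparency in miniature: the neutral vowel is skipped, and the suffix harmonizes with the nearest harmonic vowel further back in the root.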
Article
Lea Schäfer
The Yiddish language is directly linked to the culture and destiny of the Jewish population of Central and Eastern Europe. It originated as the everyday language of the Jewish population in the German-speaking lands around the Middle Ages and underwent a series of developments until the Shoah, which took a particularly large toll on the Yiddish-speaking Eastern European Jewish population. Today, Yiddish is spoken as a mother tongue almost exclusively in ultra-Orthodox communities, where it is now exposed to entirely new influences and is, thus, far from being a dead language.
After an introductory sketch, information on the geographical distribution and number of speakers, as well as on key historical developments, is briefly summarized. Particularly important are the descriptions of the various sociolinguistic situations and of the state of the sources. This is followed by a description of various (failed) attempts at standardization, as well as of the geographical distribution of the dialects and the surveys conducted on them. The following section describes the status of Yiddish in the early 21st century, which overlaps with the sociolinguistic situation of Orthodox Yiddish. Finally, the linguistic features of modern Eastern Yiddish (dialects, standard, and Orthodox) are presented. In this context, linguistic levels and structures in which Yiddish differs from other (standard) Germanic languages are also discussed. Since Yiddish, as a language derived from Middle High German, is particularly close to German varieties, the differences and similarities between the two languages are particularly emphasized.
Article
Eystein Dahl and Antonio Fábregas
Zero or null morphology refers to morphological units that are devoid of phonological content. Whether such entities should be postulated is one of the most controversial issues in morphological theory, with disagreement about how the concept should be delimited, about what counts as an instance of zero morphology within a particular theory, and about whether such objects should be allowed even as mere analytical instruments.
With respect to the first problem, given that zero morphology is a hypothesis that comes from certain analyses, delimiting what counts as a zero morpheme is not a trivial matter. The concept must be carefully differentiated from others that intuitively also involve situations where there is no overt morphological marking: cumulative morphology, phonological deletion, etc.
Regarding the second issue, what counts as null can also depend on the specific theory in which the proposal is made. In the strict sense, zero morphology involves a complete morphosyntactic representation that is associated with zero phonological content, but there are other notions of zero morphology that differ from the one discussed here, such as the absolute absence of morphological expression, in addition to specific theory-internal interpretations of what counts as null. Thus, it is also important to consider the different ways in which something can be morphologically silent.
Finally, with respect to the third side of the debate, arguments are made for and against zero morphology, notably from the perspectives of falsifiability, acquisition, and psycholinguistics. Of particular impact is the question of which properties a theory should have in order to block the possibility of zero morphology, and conversely, which properties theories that accept zero morphology associate with null morphemes.
An important ingredient in this debate involves two empirical domains: zero derivation and paradigmatic uniformity. Ultimately, the plausibility of zero morphemes depends on whether theories that posit them account for these two empirical patterns better than theories that ban zero morphology.