Gerrit Jan Dimmendaal
Nilo-Saharan, a phylum spread mainly across an area south of the Afro-Asiatic and north of the Niger-Congo phylum, was established as a genetic grouping by Greenberg. In his earlier, continent-wide classification of African languages in articles published between 1949 and 1954, Greenberg had proposed a Macro-Sudanic family (renamed Chari-Nile in subsequent studies), consisting of a Central Sudanic and an Eastern Sudanic branch plus two isolated members, Berta and Kunama. This family formed the core of the Nilo-Saharan phylum as postulated by Greenberg in his The Languages of Africa, where a number of groups were added which had been treated as isolated units in his earlier classificatory work: Songhay, Eastern Saharan (now called Saharan), Maban and Mimi, Nyangian (now called Kuliak or Rub), Temainian (Temeinian), Coman (Koman), and Gumuz.
Presenting an “encyclopaedic survey” of morphological structures for the more than 140 languages belonging to this phylum is impossible in such a brief study, also given the tremendous genetic distance between some of the major subgroups. Instead, typological variation in the morphological structure of these genetically-related languages will be central. In concrete terms this involves synchronic and diachronic observations on their formal properties (section 2), followed by an introduction to the nature of derivation, inflection, and compounding properties in Nilo-Saharan (section 3). This traditional compartmentalization has its limits because it misses out on the interaction with lexical structures and morphosyntactic properties in its extant members, as argued in section 4. As pointed out in section 5, language contact also must have played an important role in the geographical spreading of several of these typological properties.
Lawrence D. Rosenblum
Research on visual and audiovisual speech information has profoundly influenced the fields of psycholinguistics, perception psychology, and cognitive neuroscience. Visual speech findings have provided some of the most important human demonstrations of our new conception of the perceptual brain as being supremely multimodal. This “multisensory revolution” has seen a tremendous growth in research on how the senses integrate, cross-facilitate, and share their experience with one another.
The ubiquity and apparent automaticity of multisensory speech has led many theorists to propose that the speech brain is agnostic with regard to sense modality: it might not know or care from which modality speech information comes. Instead, the speech function may act to extract supramodal informational patterns that are common in form across energy streams. Alternatively, other theorists have argued that any common information existent across the modalities is minimal and rudimentary, so that multisensory perception largely depends on the observer’s associative experience between the streams. From this perspective, the auditory stream is typically considered primary for the speech brain, with visual speech simply appended to its processing. If the utility of multisensory speech is a consequence of a supramodal informational coherence, then cross-sensory “integration” may be primarily a consequence of the informational input itself. If true, then one would expect to see evidence for integration occurring early in the perceptual process, as well as in a largely complete and automatic/impenetrable manner. Alternatively, if multisensory speech perception is based on associative experience between the modal streams, then no constraints on how completely or automatically the senses integrate are dictated. There is behavioral and neurophysiological research supporting both perspectives.
Much of this research is based on testing the well-known McGurk effect, in which audiovisual speech information is thought to integrate to the extent that visual information can affect what listeners report hearing. However, there is now good reason to believe that the McGurk effect is not a valid test of multisensory integration. For example, there are clear cases in which responses indicate that the effect fails, while other measures suggest that integration is actually occurring. By mistakenly conflating the McGurk effect with speech integration itself, interpretations of the completeness and automaticity of multisensory integration may be incorrect. Future research should use more sensitive behavioral and neurophysiological measures of cross-modal influence to examine these issues.
William R. Leben
Autosegments were introduced by John Goldsmith in his 1976 M.I.T. dissertation to represent tone and other suprasegmental phenomena. Goldsmith’s intuition, embodied in the term he created, was that autosegments constituted an independent, conceptually equal tier of phonological representation, with both tiers realized simultaneously like the separate voices in a musical score.
The analysis of suprasegmentals came late to generative phonology, even though it had been tackled in American structuralism with the long components of Harris’s 1944 article, “Simultaneous components in phonology” and despite being a particular focus of Firthian prosodic analysis. The standard version of generative phonology of the era (Chomsky and Halle’s The Sound Pattern of English) made no special provision for phenomena that had been labeled suprasegmental or prosodic by earlier traditions.
An early sign that tones required a separate tier of representation was the phenomenon of tonal stability. In many tone languages, when vowels are lost historically or synchronically, their tones remain. The behavior of contour tones in many languages also falls into place when the contours are broken down into sequences of level tones on an independent level of representation. The autosegmental framework captured this naturally, since a sequence of elements on one tier can be connected to a single element on another. But the single most compelling aspect of the early autosegmental model was a natural account of tone spreading, a very common process that was only awkwardly captured by rules of whatever sort. Goldsmith’s autosegmental solution was the Well-Formedness Condition, requiring, among other things, that every tone on the tonal tier be associated with some segment on the segmental tier, and vice versa. Tones thus spread more or less automatically to segments lacking them. The Well-Formedness Condition, at the very core of the autosegmental framework, was a rare constraint, posited nearly two decades before Optimality Theory.
One-to-many associations and spreading onto adjacent elements are characteristic of tone but not confined to it. Similar behaviors are widespread in long-distance phenomena, including intonation, vowel harmony, and nasal prosodies, as well as more locally with partial or full assimilation across adjacent segments.
The early autosegmental notion of tiers of representation that were distinct but conceptually equal soon gave way to a model with one basic tier connected to tiers for particular kinds of articulation, including tone and intonation, nasality, vowel features, and others. This has led to hierarchical representations of phonological features in current models of feature geometry, replacing the unordered distinctive feature matrices of early generative phonology. Autosegmental representations and processes also provide a means of representing non-concatenative morphology, notably the complex interweaving of roots and patterns in Semitic languages.
Later work modified many of the key properties of the autosegmental model. Optimality Theory has led to a radical rethinking of autosegmental mapping, delinking, and spreading as they were formulated under the earlier derivational paradigm.
Bert Le Bruyn, Henriëtte de Swart, and Joost Zwarts
Bare nominals (also called “bare nouns”) are nominal structures without an overt article or other determiner. The distinction between a bare noun and a noun that is part of a larger nominal structure must be made in context: Milk is a bare nominal in I bought milk, but not in I bought the milk. Bare nouns have a limited distribution: In subject or object position, English allows bare mass nouns and bare plurals, but not bare singular count nouns (*I bought table). Bare singular count nouns only appear in special configurations, such as coordination (I bought table and chairs for £182).
From a semantic perspective, it is noteworthy that bare nouns achieve reference without the support of a determiner. A full noun phrase like the cookies refers to the maximal sum of cookies in the context, because of the definite article the. English bare plurals have two main interpretations: In generic sentences they refer to the kind (Cookies are sweet), in episodic sentences they refer to some exemplars of the kind (Cookies are in the cabinet). Bare nouns typically take narrow scope with respect to other scope-bearing operators like negation.
The typology of bare nouns reveals substantial variation, and bare nouns in languages other than English may have different distributions and meanings. But genericity and narrow scope are recurring features in the cross-linguistic study of bare nominals.
Since the start of the Islamic conquest of the Maghreb in the 7th century CE, Arabic and Berber have been in close contact.
Linguistic influence is found on all levels: phonology, morphology, syntax, and lexicon. In those cases where only innovative patterns are shared between the two language groups, it is often difficult to make out where the innovation started; the great similarities in syllable structure between Maghrebian Arabic and northern Berber, for example, are the result of innovations within both language families, and it is difficult to tell where they began. Morphological influence seems to be mediated exclusively by lexical borrowing. Especially in Berber, this has led to parallel systems in the morphology, where native words always have native morphology, while loans either have nativized morphology or retain Arabic-like patterns. In the lexicon, it is especially Berber that takes over scores of loanwords from Arabic, amounting in one case to over one-third of the basic lexicon as defined by 100-word lists.
Tej K. Bhatia
Bilingualism/multilingualism is a natural phenomenon worldwide. Unwittingly, however, monolingualism has been used as a standard to characterize and define bilingualism/multilingualism in linguistic research. Such a conception led to a “fractional,” “irregular,” and “distorted” view of bilingualism, which is becoming rapidly outmoded in the light of multipronged, rapidly growing interdisciplinary research. This article presents a complex and holistic view of bilinguals and multilinguals on conceptual, theoretical, and pragmatic/applied grounds. In that process, it attempts to explain why bilinguals are not a mere composite of two monolinguals. If bilinguals were merely a composite of two monolinguals, the study of bilingualism would not merit any substantive consideration; all one would have to do is focus on the study of a monolingual person. Interestingly, even two bilinguals are not clones of each other, let alone a bilingual being a set of two monolinguals. This paper examines the multiple worlds of bilinguals in terms of their social life and social interaction. The intricate problem of defining and describing bilinguals is addressed; the process and end result of becoming bilingual are explored alongside bilinguals’ verbal interactions and language organization in the brain. The role of social and political bilingualism is also explored as it interacts with individual bilingualism and global bilingualism (e.g., the issue of language endangerment and language death).
Other central concepts such as individuals’ bilingual language attitudes, language choices, and consequences are addressed, which set bilinguals apart from monolinguals. Language acquisition is as much an innate, biological phenomenon as a social one; these two complementary dimensions receive consideration in this article along with the educational issues of school performance by bilinguals. Is bilingualism a blessing or a curse? The linguistic and cognitive consequences of individual, societal, and political bilingualism are examined.
Cedric Boeckx and Pedro Tiago Martins
All humans can acquire at least one natural language. Biolinguistics is the name given to the interdisciplinary enterprise that aims to unveil the biological bases of this unique capacity.
Blending is a type of word formation in which two or more words are merged into one so that the blended constituents are either clipped, or partially overlap. An example of a typical blend is brunch, in which the beginning of the word breakfast is joined with the ending of the word lunch. In many cases such as motel (motor + hotel) or blizzaster (blizzard + disaster) the constituents of a blend overlap at segments that are phonologically or graphically identical. In some blends, both constituents retain their form as a result of overlap, for example, stoption (stop + option). These examples illustrate only a handful of the variety of forms blends may take; more exotic examples include formations like Thankshallowistmas (Thanksgiving + Halloween + Christmas). The visual and auditory amalgamation in blends is reflected on the semantic level. It is common to form blends meaning a combination or a product of two objects or phenomena, such as an animal breed (e.g., zorse, a breed of zebra and horse), an interlanguage variety (e.g., franglais, which is a French blend of français and anglais meaning a mixture of French and English languages), or another type of mix (e.g., a shress is a type of clothing having features of both a shirt and a dress).
Blending as a word formation process can be regarded as a subtype of compounding because, like compounds, blends are formed of two (or sometimes more) content words and semantically either are hyponyms of one of their constituents, or exhibit some kind of paradigmatic relationships between the constituents. In contrast to compounds, however, the formation of blends is restricted by a number of phonological constraints given that the resulting formation is a single word. In particular, blends tend to be of the same length as the longest of their constituent words, and to preserve the main stress of one of their constituents. Certain regularities are also observed in terms of ordering of the words in a blend (e.g., shorter first, more frequent first), and in the position of the switch point, that is, where one blended word is cut off and switched to another (typically at the syllable boundary or at the onset/rime boundary). The regularities of blend formation can be related to the recognizability of the blended words.
Blocking can be defined as the non-occurrence of some linguistic form whose existence could be expected on general grounds, due to the existence of a rival form. *Oxes, for example, is blocked by oxen, *stealer by thief. Although blocking is closely associated with morphology, in reality the competing “forms” need not be morphemes or words; they can also be syntactic units. In German, for example, the compound Rotwein ‘red wine’ blocks the phrasal unit *roter Wein (in the relevant sense), just as the phrasal unit rote Rübe ‘beetroot; lit. red beet’ blocks the compound *Rotrübe. In these examples, one crucial factor determining blocking is synonymy; speakers apparently have a deep-rooted presumption against synonyms. Whether homonymy can also lead to a similar avoidance strategy is still controversial. But even if homonymy blocking exists, it certainly is much less systematic than synonymy blocking.
In all the examples mentioned above, it is a word stored in the mental lexicon that blocks a rival formation. However, besides such cases of lexical blocking, one can observe blocking among productive patterns. Dutch has three suffixes for deriving agent nouns from verbal bases, -er, -der, and -aar. Of these three suffixes, the first one is the default choice, while -der and -aar are chosen in very specific phonological environments: as Geert Booij describes in The Morphology of Dutch (2002), “the suffix -aar occurs after stems ending in a coronal sonorant consonant preceded by schwa, and -der occurs after stems ending in /r/” (p. 122). Contrary to lexical blocking, the effect of this kind of pattern blocking does not depend on words stored in the mental lexicon and their token frequency but on abstract features (in the case at hand, phonological features).
Blocking was first recognized by the Indian grammarian Pāṇini in the 5th or 4th century BCE.
Bracketing paradoxes—constructions whose morphosyntactic and morpho-phonological structures appear to be irreconcilably at odds (e.g., unhappier)—are unanimously taken to point to truths about the derivational system that we have not yet grasped. Consider that the prefix un- must be structurally separate in some way from happier both for its own reasons (its [n] surprisingly does not assimilate in Place to a following consonant (e.g., u[n]popular)), and for reasons external to the prefix (the suffix -er must be insensitive to the presence of un-, as the comparative cannot attach to bases of three syllables or longer (e.g., *intelligenter)). But, un- must simultaneously be present in the derivation before -er is merged, so that unhappier can have the proper semantic reading (‘more unhappy’, and not ‘not happier’). Bracketing paradoxes emerged as a problem for generative accounts of both morphosyntax and morphophonology only in the 1970s. With the rise of restrictions on and technology used to describe and represent the behavior of affixes (e.g., the Affix-Ordering Generalization, Lexical Phonology and Morphology, the Prosodic Hierarchy), morphosyntacticians and phonologists were confronted with this type of inconsistent derivation in many unrelated languages.
Andrej L. Malchukov
Morphological case is conventionally defined as a system of marking a dependent nominal for the type of relationship it bears to its head. While most linguists would agree with this definition, in practice it is often a matter of controversy whether a certain marker X counts as case in language L, or how many case values language L features. First, the distinction between morphological cases and case particles/adpositions is fuzzy in a cross-linguistic perspective. Second, the distinctions between cases can be obscured by patterns of case syncretism, leading to different analyses of the underlying system. On the functional side, it is important to distinguish between syntactic (structural), semantic, and “pragmatic” cases, yet these distinctions are not clear-cut either, as syntactic cases historically arise from the latter two sources. Moreover, case paradigms of individual languages usually show a conflation between syntactic, semantic, and pragmatic cases (see the phenomenon of “focal ergativity,” where ergative case is used when the A argument is in focus). The composition of case paradigms can be shown to follow a certain typological pattern, which is captured by case hierarchy, as proposed by Greenberg and Blake, among others. Case hierarchy constrains the way case systems evolve (or are reduced) across languages and derives from relative markedness and, ultimately, from frequencies of individual cases. The (one-dimensional) case hierarchy is, however, incapable of capturing all recurrent polysemies of individual case markers; rather, such polysemies can be represented through a more complex two-dimensional hierarchy (semantic map), which can also be given a diachronic interpretation.
Jessica Coon and Clint Parker
The phenomenon of case has been studied widely at both the descriptive and theoretical levels. Typological work on morphological case systems has provided a picture of the variability of case cross-linguistically. In particular, languages may differ with respect to whether or not arguments are marked with overt morphological case, the inventory of cases with which they may be marked, and the alignment of case marking (e.g., nominative-accusative vs. ergative-absolutive). In the theoretical realm, not only has morphological case been argued to play a role in multiple syntactic phenomena, but current generative work also debates the role of abstract case (i.e., Case) in the grammar: abstract case features have been proposed to underlie morphological case, and to license nominals in the derivation.
The phenomenon of case has been argued to play a role in at least three areas of the syntax reviewed here: (a) agreement, (b) A-movement, and (c) A’-movement. Morphological case has been shown to determine a nominal argument’s eligibility to participate in verbal agreement, and recent work has argued that languages vary as to whether movement to subject position is case-sensitive. As for case-sensitive A’-movement, recent literature on ergative extraction restrictions debates whether this phenomenon should be seen as another instance of “case discrimination” or whether the pattern arises from other properties of ergative languages. Finally, other works discussed here have examined agreement and A’-extraction patterns in languages with no visible case morphology. The presence of patterns and typological gaps—both in languages with overt morphological case and in those without it—lends support to the relevance of abstract case in the syntax.
Languages from at least five genetically unrelated families are spoken in the Caucasus, but there are only three endemic linguistic families belonging to the region: Kartvelian, West Caucasian, and Northeast Caucasian. These families are rather heterogeneous in terms of the number of languages and the distribution of the speakers across them. The Caucasus represents a situation where languages with millions of speakers have coexisted with one-village languages for hundreds of years, and where multilingualism has always been the norm. The richness of Caucasian languages on every linguistic stratum is dazzling: here we find some of the largest consonant inventories, inflectional systems where the mere number of word forms strains credibility (one of the Caucasian languages, Archi, is claimed to have over a million and a half word forms), and challenging syntactic structures. The typological interest of the Caucasian languages and the challenges they present to linguistic theory lie in different areas. Thus, for Kartvelian languages, the number of factors at play in the verbal system makes the task of producing a correct verbal form far from trivial. West Caucasian languages represent an instance of polysynthetic polypersonal verb inflection, which is unusual not only for the Caucasus but for Eurasia in general. East Caucasian languages have large systems of non-finite forms which, unusually, retain the ability to realize agreement in gender and number while their non-finite nature is determined by the inability to head an independent clause and to express certain morpho-syntactic categories such as illocutionary force and evidentiality. Finally, all Caucasian languages are ergative to some extent.
Child phonology refers to virtually every phonetic and phonological phenomenon observable in the speech productions of children, including babbles. This includes qualitative and quantitative aspects of babbled utterances as well as all behaviors such as the deletion or modification of the sounds and syllables contained in the adult (target) forms that the child is trying to reproduce in his or her spoken utterances. This research is also increasingly concerned with issues in speech perception, a field of investigation that has traditionally followed its own course; it is only recently that the two fields have started to converge. The recent history of research on child phonology, the theoretical approaches and debates surrounding it, as well as the research methods and resources that have been employed to address these issues empirically, parallel the evolution of phonology, phonetics, and psycholinguistics as general fields of investigation. Child phonology contributes important observations, often organized in terms of developmental time periods, which can extend from the child’s earliest babbles to the stage when he or she masters the sounds, sound combinations, and suprasegmental properties of the ambient (target) language. Central debates within the field of child phonology concern the nature and origins of phonological representations as well as the ways in which they are acquired by children. Since the mid-1900s, the most central approaches to these questions have tended to fall on each side of the general divide between generative vs. functionalist (usage-based) approaches to phonology. Traditionally, generative approaches have embraced a universal stance on phonological primitives and their organization within hierarchical phonological representations, assumed to be innately available as part of the human language faculty. 
In contrast to this, functionalist approaches have utilized flatter (non-hierarchical) representational models and rejected nativist claims about the origin of phonological constructs. Since the beginning of the 1990s, this divide has been blurred significantly, both through the elaboration of constraint-based frameworks that incorporate phonetic evidence, from both speech perception and production, as part of accounts of phonological patterning, and through the formulation of emergentist approaches to phonological representation. Within this context, while controversies remain concerning the nature of phonological representations, debates are fueled by new outlooks on factors that might affect their emergence, including the types of learning mechanisms involved, the nature of the evidence available to the learner (e.g., perceptual, articulatory, and distributional), as well as the extent to which the learner can abstract away from this evidence. In parallel, recent advances in computer-assisted research methods and data availability, especially within the context of the PhonBank project, offer researchers unprecedented support for large-scale investigations of child language corpora. This combination of theoretical and methodological advances provides new and fertile grounds for research on child phonology and related implications for phonological theory.
Children’s acquisition of language is an amazing feat. Children master the syntax, the sentence structure of their language, through exposure and interaction with caregivers and others but, notably, with no formal tuition. How children come to be in command of the syntax of their language has been a topic of vigorous debate since Chomsky argued against Skinner’s claim that language is ‘verbal behavior.’ Chomsky argued that knowledge of language cannot be learned through experience alone but is guided by a genetic component. This language component, known as ‘Universal Grammar,’ is composed of abstract linguistic knowledge and a computational system that is special to language. The computational mechanisms of Universal Grammar give even young children the capacity to form hierarchical syntactic representations for the sentences they hear and produce. The abstract knowledge of language guides children’s hypotheses as they interact with the language input in their environment, ensuring they progress toward the adult grammar. An alternative school of thought denies the existence of a dedicated language component, arguing that knowledge of syntax is learned entirely through interactions with speakers of the language. Such ‘usage-based’ linguistic theories assume that language learning employs the same learning mechanisms that are used by other cognitive systems. Usage-based accounts of language development view children’s earliest productions as rote-learned phrases that lack internal structure. Knowledge of linguistic structure emerges gradually and in a piecemeal fashion, with frequency playing a large role in the order of emergence for different syntactic structures.
Haihua Pan and Yuli Feng
Cross-linguistic data can add new insights to the development of semantic theories or even induce a shift of the research paradigm. The major topics in semantic studies, such as bare noun denotation, quantification, degree semantics, polarity items, donkey anaphora and binding principles, long-distance reflexives, negation, tense and aspect, and eventuality, are all discussed by semanticists working on the Chinese language. The issues of particular interest include, but are not limited to: (i) the denotation of Chinese bare nouns; (ii) categorization and quantificational mapping strategies of Chinese quantifier expressions (i.e., whether the behaviors of Chinese quantifier expressions fit into the dichotomy of A-quantification and D-quantification); (iii) multiple uses of quantifier expressions (e.g., dou) and their implications for the interrelation of semantic concepts like distributivity, scalarity, exclusiveness, exhaustivity, maximality, etc.; (iv) the interaction among universal adverbials and that between universal adverbials and various types of noun phrases, which may pose a challenge to the Principle of Compositionality; (v) the semantics of degree expressions in Chinese; (vi) the non-interrogative uses of wh-phrases in Chinese and their influence on the theories of polarity items, free choice items, and epistemic indefinites; (vii) how the concepts of E-type pronouns and D-type pronouns are manifested in the Chinese language and whether such pronoun interpretations correspond to specific sentence types; (viii) what devices Chinese adopts to locate time (i.e., does tense interpretation correspond to certain syntactic projections, or is it solely determined by semantic information and pragmatic reasoning); (ix) how the interpretation of Chinese aspect markers can be captured by event structures, possible world semantics, and quantification; (x) how the long-distance binding of Chinese ziji ‘self’ and the blocking effect by first and second person
pronouns can be accounted for by the existing theories of beliefs, attitude reports, and logophoricity; (xi) the distribution of various negation markers and their correspondence to the semantic properties of predicates with which they are combined; and (xii) whether Chinese topic-comment structures are constrained by both semantic and pragmatic factors or syntactic factors only.
Clinical linguistics is the branch of linguistics that applies linguistic concepts and theories to the study of language disorders. As the name suggests, clinical linguistics is a dual-facing discipline. Although the conceptual roots of this field are in linguistics, its domain of application is the vast array of clinical disorders that may compromise the use and understanding of language. Both dimensions of clinical linguistics can be addressed through an examination of specific linguistic deficits in individuals with neurodevelopmental disorders, craniofacial anomalies, adult-onset neurological impairments, psychiatric disorders, and neurodegenerative disorders. Clinical linguists are interested in the full range of linguistic deficits in these conditions, including phonetic deficits of children with cleft lip and palate, morphosyntactic errors in children with specific language impairment, and pragmatic language impairments in adults with schizophrenia.
Like many applied disciplines in linguistics, clinical linguistics sits at the intersection of a number of areas. Its relationships to the study of communication disorders and to speech-language pathology (speech and language therapy in the United Kingdom) are two particularly important points of intersection. Speech-language pathology is the area of clinical practice that assesses and treats children and adults with communication disorders. All language disorders restrict an individual’s ability to communicate freely with others in a range of contexts and settings. So language disorders are first and foremost communication disorders. To understand language disorders, it is useful to think of them in terms of points of breakdown on a communication cycle that tracks the progress of a linguistic utterance from its conception in the mind of a speaker to its comprehension by a hearer. This cycle permits the introduction of a number of important distinctions in language pathology, such as the distinction between a receptive and an expressive language disorder, and between a developmental and an acquired language disorder. The cycle is also a useful model with which to conceptualize a range of communication disorders other than language disorders. These other disorders, which include hearing, voice, and fluency disorders, are also relevant to clinical linguistics.
Clinical linguistics draws on the conceptual resources of the full range of linguistic disciplines to describe and explain language disorders. These disciplines include phonetics, phonology, morphology, syntax, semantics, pragmatics, and discourse. Each of these linguistic disciplines contributes concepts and theories that can shed light on the nature of language disorder. A wide range of tools and approaches are used by clinical linguists and speech-language pathologists to assess, diagnose, and treat language disorders. They include the use of standardized and norm-referenced tests, communication checklists and profiles (some administered by clinicians, others by parents, teachers, and caregivers), and qualitative methods such as conversation analysis and discourse analysis. Finally, clinical linguists can contribute to debates about the nosology of language disorders. In order to do so, however, they must have an understanding of the place of language disorders in internationally recognized classification systems such as the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), published by the American Psychiatric Association in 2013.
Clitics can be defined as prosodically defective function words. They can belong to a number of syntactic categories, such as articles, pronouns, prepositions, complementizers, negative adverbs, or auxiliaries. They do not generally belong to open classes, such as verbs, nouns, or adjectives. Their prosodically defective character is most often manifested by the absence of stress, which in turn correlates with vowel reduction in those languages that have it independently; sometimes the clitic can be just a consonant or a consonant cluster, with no vowel. This same prosodically defective character forces them to attach either to the word that follows them (proclisis) or to the word that precedes them (enclisis); in some cases they even appear inside a word (mesoclisis or endoclisis). The word to which a clitic attaches is called the host. In some languages (like some dialects of Italian or Catalan) enclitics can surface as stressed, but the presence of stress can be argued to be the result of assignment of stress to the host-clitic complex, not to the clitic itself. One consequence of clitics being prosodically defective is that they cannot be the sole element of an utterance, for instance as the answer to a question; they must always appear with a host.
A useful distinction is that between simple clitics and special clitics. Simple clitics often have a nonclitic variant and appear in the expected syntactic position for nonclitics of their syntactic category. Much more attention has been paid in the literature to special clitics. Special clitics appear in a designated position within the clause or within the noun phrase (or determiner phrase). In several languages certain clitics must appear in second position, within the clause, as in most South Slavic languages, or within the noun phrase, as in Kwakw'ala. The pronominal clitics of Romance languages or Greek must have the verb as a host and appear in a position different from the full noun phrase. A much debated question is whether the position of special clitics is the result of syntactic movement, or whether other factors, morphological or phonological, intervene as well or are the sole motivation for their position. Clitics can also cluster, with some languages allowing only sequences of two clitics, and other languages allowing longer sequences. Here one relevant question is what determines the order of the clitics, with the main avenues of analysis being approaches based on syntactic movement, approaches based on the types of morphosyntactic features each clitic has, and approaches based on templates. An additional issue concerning clitic clusters is the incompatibility between specific clitics when combined and the changes that this incompatibility can provoke in the form of one or more of the clitics. Combinations of identical or nearly identical clitics are often disallowed, and the constraint known as the Person-Case Constraint (PCC) disallows combinations of a first or second person accusative clitic (a direct object, DO, clitic) with a third person (and sometimes also first or second person) dative clitic (an indirect object, IO, clitic).
In all these cases, either one of the clitics surfaces with the form of another clitic, or one of the clitics does not surface at all; sometimes there is no possible output. Here again both syntactic and morphological approaches have been proposed.
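The core of the (strong) Person-Case Constraint described above can be stated very compactly. The following sketch is an illustration of mine, not drawn from the article, and the Spanish-style examples in the comments are merely indicative:

```python
# A minimal sketch of the strong Person-Case Constraint (PCC):
# in a dative (IO) + accusative (DO) clitic cluster, the accusative
# clitic must be third person.

def pcc_allows(dative_person: int, accusative_person: int) -> bool:
    """Return True if the IO+DO clitic cluster satisfies the strong PCC."""
    return accusative_person == 3

# Spanish-style illustrations (persons only):
# "me lo dio" (1st-person IO + 3rd-person DO) -> allowed
assert pcc_allows(dative_person=1, accusative_person=3)
# "*me te recomendaron" (1st-person IO + 2nd-person DO) -> disallowed
assert not pcc_allows(dative_person=1, accusative_person=2)
```

Weaker versions of the constraint, which also restrict first and second person datives, would require tracking the dative person as well rather than ignoring it.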
The study of coarticulation—namely, the articulatory modification of a given speech sound arising from coproduction or overlap with neighboring sounds in the speech chain—has attracted the close attention of phonetic researchers for at least the last 60 years. Knowledge about coarticulatory patterns in speech should provide information about the planning mechanisms of consecutive consonants and vowels and the execution of coordinative articulatory structures during the production of those segmental units. Coarticulatory effects involve changes in articulatory displacement over time toward the left (anticipatory) or the right (carryover) of the trigger, and their typology and extent depend on the articulator under investigation (lip, velum, tongue, jaw, larynx) and the articulatory characteristics of the individual consonants and vowels, as well as nonsegmental factors such as speech rate, stress, and language. A challenge for studying coarticulation is that different speakers may use different coarticulatory mechanisms when producing a given phonemic sequence, and that they also use coarticulatory information differently for phonemic identification in perception. More knowledge about all these research issues should contribute to a deeper understanding of coarticulation deficits in speakers with speech disorders, how the ability to coarticulate develops from childhood to adulthood, and the extent to which the failure to compensate for coarticulatory effects may give rise to sound change.
There are two main theoretical traditions in semantics. One is based on realism, where meanings are described as relations between language and the world, often in terms of truth conditions. The other is cognitivistic, where meanings are identified with mental structures. This article presents some of the main ideas and theories within the cognitivist approach.
A central tenet of cognitively oriented theories of meaning is that there are close connections between meaning structures and other cognitive processes. In particular, parallels between semantics and visual processes have been studied. As a complement, the theory of embodied cognition focuses on the relation between actions and components of meaning.
One of the main methods of representing cognitive meaning structures is to use image schemas and idealized cognitive models. Such schemas focus on spatial relations between various semantic elements. Image schemas are often constructed using Gestalt psychological notions, including those of trajector and landmark, corresponding to figure and ground. In this tradition, metaphors and metonymies are considered to be central meaning-transforming processes.
A related approach is force dynamics. Here, the semantic schemas are construed from forces and their relations rather than from spatial relations. Recent extensions involve cognitive representations of actions and events, which then form the basis for a semantics of verbs.
A third approach is the theory of conceptual spaces. In this theory, meanings are represented as regions of semantic domains such as space, time, color, weight, size, and shape. For example, strong evidence exists that color words in a large variety of languages correspond to such regions. This approach has been extended to a general account of the semantics of some of the main word classes, including adjectives, verbs, and prepositions. The theory of conceptual spaces shows similarities to the older frame semantics and feature analysis, but it puts more emphasis on geometric structures.
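The claim that word meanings are regions of a semantic domain can be made concrete with a toy model. The sketch below is my own illustration, not drawn from the article: the domain is a one-dimensional hue circle (in degrees), and the region boundaries are invented for the example.

```python
# Conceptual-spaces toy model: each color word names a region of a
# one-dimensional hue domain (degrees on a color circle).
# Region boundaries here are invented, purely for illustration.

COLOR_REGIONS = {
    "red":    (345, 15),   # region wraps around 0 degrees
    "yellow": (45, 75),
    "green":  (90, 150),
    "blue":   (210, 270),
}

def in_region(hue, region):
    """True if the hue falls inside the region (inclusive bounds)."""
    lo, hi = region
    hue %= 360
    if lo <= hi:
        return lo <= hue <= hi
    return hue >= lo or hue <= hi  # wrapped region

def name_color(hue):
    """Return the first color word whose region contains the hue."""
    for word, region in COLOR_REGIONS.items():
        if in_region(hue, region):
            return word
    return "unnamed"

# Usage: hues inside a region receive that region's color word.
# name_color(120) -> "green"; name_color(355) -> "red"
```

In the full theory the domains are multidimensional (e.g., hue, saturation, and brightness for color) and the regions are argued to be convex; the one-dimensional intervals here stand in for those convex regions.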
A general criticism against cognitive theories of semantics is that they only consider the meaning structures of individuals, but neglect the social aspects of semantics, that is, that meanings are shared within a community. Recent theoretical proposals counter this by suggesting that semantics should be seen as a meeting of minds, that is, communicative processes that lead to the alignment of meanings between individuals. On this approach, semantics is seen as a product of communication, constrained by the cognitive mechanisms of the individuals.