
Article

Four types of English exist in Africa, identifiable in terms of history, functions, and linguistic characteristics. West African Pidgin English has a history going back to the 15th century, 400 years before formal colonization. Creole varieties of English have a history going back to the repatriation of slaves from the Caribbean and the United States in the 19th century. Second language varieties, which are the most widespread on the continent, are prototypically associated with British colonization and its education systems. L1 (first language) English occurs mostly in Southern and East Africa and is best represented in South Africa. The latter shows significant similarities with the other major Southern Hemisphere varieties of English in Australia and New Zealand. All four subgroups of English are growing in numbers.

Article

Zygmunt Frajzyngier

Afroasiatic languages are the fourth largest linguistic phylum, spoken by some 350 million people in North, West, Central, and East Africa, in the Middle East, and in scattered communities in Europe, the United States, and the Caucasus. Some Afroasiatic languages, such as Arabic, Hausa, Amharic, Somali, and Oromo, are spoken by millions of people, while others are threatened with extinction. As of the early 21st century, the phylum is composed of six families: Egyptian (extinct), Semitic, Cushitic, Omotic, Berber, and Chadic. There are some typological features shared by all families, particularly in the domain of phonology. Languages are also typologically quite distinct with respect to syntax and the functions encoded in their grammatical systems. Some Afroasiatic languages, such as Egyptian, Akkadian, Phoenician, Hebrew, Arabic, and Ge’ez, have a long written tradition, but for many languages no writing system has yet been proposed or adopted. The Old Semitic writing system gave rise to the modern alphabets used in thousands of unrelated contemporary languages. Two Semitic languages, Hebrew (with some Aramaic) and Arabic, were used to write the Old Testament and the Koran, the holy books of Judaism and Islam.

Article

Susan Edwards and Christos Salis

Aphasia is an acquired language disorder subsequent to brain damage in the left hemisphere. It is characterized by diminished abilities to produce and understand both spoken and written language compared with the speaker’s presumed ability pre-cerebral damage. The type and severity of the aphasia depend not only on the location and extent of the cerebral damage but also on the effect the lesion has on connecting areas of the brain. Type and severity of aphasia are diagnosed in comparison with assumed normal adult language. Language changes associated with normal aging are not classed as aphasia. The diagnosis and assessment of aphasia in children, which is unusual, takes account of age norms. The most common cause of aphasia is a cerebrovascular accident (CVA), commonly referred to as a stroke, but brain damage following traumatic head injury, such as road accidents or gunshot wounds, can also cause aphasia. Aphasia following such traumatic events is non-progressive, in contrast to aphasia arising from brain tumor, some types of infection, or language disturbances in progressive conditions such as Alzheimer’s disease, where the language disturbance increases as the disease progresses. The diagnosis of primary progressive aphasia (as opposed to non-progressive aphasia, the main focus of this article) is based on inclusion and exclusion criteria proposed by M. Marsel Mesulam in 2001. The inclusion criteria are difficulty with language that interferes with activities of daily living, with aphasia as the most prominent symptom. The exclusion criteria are other non-degenerative disease or medical disorder, psychiatric diagnosis, impairment of episodic memory, visual memory, or visuo-perception, and initial behavioral disturbance. Aphasia involves one or more of the building blocks of language: phonemes, morphology, lexis, syntax, and semantics; the deficits occur in various clusters or patterns across the spectrum.
The degree of impairment varies across modalities, with written language often, but not always, more affected than spoken language. In some cases, understanding of language is relatively preserved; in others, both production and understanding are affected. In addition to varied degrees of impairment in spoken and written language, any one or more components of language can be affected. At the most severe end of the spectrum, a person with aphasia may be unable to communicate by either speech or writing and may be able to understand virtually nothing or only very limited social greetings. At the least severe end of the spectrum, the aphasic speaker may experience occasional word finding difficulties, often involving nouns; but unlike difficulties in recalling proper nouns in normal aging, word retrieval problems in mild aphasia include other word classes. Descriptions of different clusters of language deficits have led to the notion of syndromes. Despite great variations in the condition, patterns of language deficits associated with different areas of brain damage have been influential in understanding language-brain relationships. Increasing sophistication in language assessment and neurological investigation is contributing to a greater, yet still incomplete, understanding of language-brain relationships.

Article

Patrik Bye

Morpheme ordering is largely explainable in terms of syntactic/semantic scope, or the Mirror Principle, although there is a significant residue of cases that resist an explanation in these terms. In this article, we look at some key examples of (apparent) deviant ordering and review the main ways that linguists have attempted to account for them. Approaches to the phenomenon fall into two broad types. The first relies on mechanisms we can term “morphological,” while the second looks instead to the resources of the ‘narrow’ syntax or phonology. One morphological approach involves a template that associates each class of morphemes in the word with a particular position. A well-known example is the Bantu CARP (Causative-Applicative-Reciprocal-Passive) template, which requires particular orders between morphemes to obtain irrespective of scope. A second approach builds on the intuition that the boundary or join between a morpheme and the base to which it attaches can vary in closeness or strength, where ‘strength’ can be interpreted in gradient or discrete terms. Under the gradient interpretation, affixes differ in parsability, or separability from the base; understood discretely, as in Lexical Morphology and Phonology, morphemes (or classes of morphemes) may attach at a deeper morphological layer to stems (the stronger join), or to words (weaker join), which are closer to the surface. Deviant orderings may then arise where an affix attaches at a morphological layer deeper than its scope would lead us to expect. An example is the marking of case and possession in Finnish nouns: case takes scope over possession, but the case suffix precedes the possessive suffix. Another morphological approach is represented by Distributed Morphology, which permits certain local reorderings once all syntactic operations have taken place. Such operations may target specific morphemes, or morphosyntactic features characterizing a class of morphemes.
Agreement marking is an interesting case, since agreement features are bundled as syntactically unitary heads but may in certain languages be split morphologically into separate affixes. This means that in the case of split agreement marking, the relative order must be attributed to post-syntactic principles. Besides these morphological approaches, other researchers have emphasized the resources of the narrow syntax, in particular phrasal movement, as a means for dealing with many challenging cases of morpheme ordering. Still other cases of apparently deviant ordering may be analyzed as epiphenomena of phonological processes and constraint interaction as they apply to prespecified and/or underspecified lexical representations.

Article

Philip Rubin

Arthur Seymour Abramson (1925–2017) was an American linguist who was prominent in the international experimental phonetics research community. He was best known for his pioneering work, with Leigh Lisker, on voice onset time (VOT), and for his many years spent studying tone and voice quality in languages such as Thai. Born and raised in Jersey City, New Jersey, Abramson served several years in the Army during World War II. Upon his return to civilian life, he attended Columbia University (BA, 1950; PhD, 1960). There he met Franklin Cooper, an adjunct who taught acoustic phonetics while also working for Haskins Laboratories. Abramson started working on a part-time basis at Haskins and remained affiliated with the institution until his death. For his doctoral dissertation (1962), he studied the vowels and tones of the Thai language, which would sit at the heart of his research and travels for the rest of his life. He would expand his investigations to include various languages and dialects, such as Pattani Malay and the Kuai dialect of Suai, a Mon-Khmer language. Abramson began his collaboration with University of Pennsylvania linguist Leigh Lisker at Haskins Laboratories in the 1960s. Using their unique VOT technique, a sensitive measure of the articulatory timing between an occlusion in the vocal tract and the beginning of phonation (characterized by the onset of vibration of the vocal folds), they studied the voicing distinctions of various languages. Their long-standing collaboration continued until Lisker’s death in 2006. Abramson and colleagues often made innovative use of state-of-the-art tools and technologies in their work, including transillumination of the larynx in running speech, X-ray movies of speakers in several languages/dialects, electroglottography, and articulatory speech synthesis.
Abramson’s career was also notable for the academic and scientific service roles that he assumed, including membership on the council of the International Phonetic Association (IPA), and as a coordinator of the effort to revise the International Phonetic Alphabet at the IPA’s 1989 Kiel Convention. He was also editor of the journal Language and Speech, and took on leadership roles at the Linguistic Society of America and the Acoustical Society of America. He was the founding Chair of the Linguistics Department at the University of Connecticut, which became a hotbed for research in experimental phonetics in the 1970s and 1980s because of its many affiliations with Haskins Laboratories. He also served for many years as a board member at Haskins, and Secretary of both the Board and the Haskins Corporation, where he was a friend and mentor to many.

Article

Alan Reed Libert

Artificial languages—languages which have been consciously designed—have been created for more than 900 years, although the number of them has increased considerably in recent decades, and by the early 21st century the total figure probably was in the thousands. There have been several goals behind their creation; the traditional one (which applies to some of the best-known artificial languages, including Esperanto) is to make international communication easier. Some other well-known artificial languages, such as Klingon, have been designed in connection with works of fiction. Still others are simply personal projects. A traditional way of classifying artificial languages involves the extent to which they make use of material from natural languages. Those artificial languages which are created mainly by taking material from one or more natural languages are called a posteriori languages (which again include well-known languages such as Esperanto), while those which do not use natural languages as sources are a priori languages (although many a posteriori languages have a limited amount of a priori material, and some a priori languages have a small number of a posteriori components). Between these two extremes are the mixed languages, which have large amounts of both a priori and a posteriori material. Artificial languages can also be classified typologically (as natural languages are) and by how and how much they have been used. Many linguists seem to be biased against research on artificial languages, although some major linguists of the past have been interested in them.

Article

Bilingualism/multilingualism is a natural phenomenon worldwide. Unwittingly, however, monolingualism has been used as a standard to characterize and define bilingualism/multilingualism in linguistic research. Such a conception led to a “fractional,” “irregular,” and “distorted” view of bilingualism, which is becoming rapidly outmoded in the light of multipronged, rapidly growing interdisciplinary research. This article presents a complex and holistic view of bilinguals and multilinguals on conceptual, theoretical, and pragmatic/applied grounds. In that process, it attempts to explain why bilinguals are not a mere composite of two monolinguals. If a bilingual were merely a composite of two monolinguals, the study of bilingualism would not merit any substantive consideration; all one would have to do is study monolinguals. Interestingly, no two bilinguals are clones of each other, let alone reducible to a set of two monolinguals. This article examines the multiple worlds of bilinguals in terms of their social life and social interaction. The intricate problem of defining and describing bilinguals is addressed; the process and end result of becoming bilingual are explored alongside bilinguals’ verbal interactions and language organization in the brain. The role of social and political bilingualism is also explored as it interacts with individual bilingualism and global bilingualism (e.g., the issue of language endangerment and language death). Other central concepts such as individuals’ bilingual language attitudes, language choices, and their consequences are addressed, which set bilinguals apart from monolinguals. Language acquisition is as much an innate, biological phenomenon as a social one; these two complementary dimensions receive consideration in this article along with the educational issues of school performance by bilinguals. Is bilingualism a blessing or a curse?
The linguistic and cognitive consequences of individual, societal, and political bilingualism are examined.

Article

Cedric Boeckx and Pedro Tiago Martins

All humans can acquire at least one natural language. Biolinguistics is the name given to the interdisciplinary enterprise that aims to unveil the biological bases of this unique capacity.

Article

Natalia Beliaeva

Blending is a type of word formation in which two or more words are merged into one so that the blended constituents are either clipped, or partially overlap. An example of a typical blend is brunch, in which the beginning of the word breakfast is joined with the ending of the word lunch. In many cases, such as motel (motor + hotel) or blizzaster (blizzard + disaster), the constituents of a blend overlap at segments that are phonologically or graphically identical. In some blends, both constituents retain their form as a result of overlap, for example, stoption (stop + option). These examples illustrate only a handful of the variety of forms blends may take; more exotic examples include formations like Thankshallowistmas (Thanksgiving + Halloween + Christmas). The visual and auditory amalgamation in blends is reflected on the semantic level. It is common to form blends meaning a combination or a product of two objects or phenomena, such as an animal breed (e.g., zorse, a breed of zebra and horse), an interlanguage variety (e.g., franglais, which is a French blend of français and anglais meaning a mixture of French and English languages), or another type of mix (e.g., a shress is a garment having features of both a shirt and a dress). Blending as a word formation process can be regarded as a subtype of compounding because, like compounds, blends are formed of two (or sometimes more) content words and semantically either are hyponyms of one of their constituents, or exhibit some kind of paradigmatic relationship between the constituents. In contrast to compounds, however, the formation of blends is restricted by a number of phonological constraints given that the resulting formation is a single word. In particular, blends tend to be of the same length as the longest of their constituent words, and to preserve the main stress of one of their constituents.
Certain regularities are also observed in terms of ordering of the words in a blend (e.g., shorter first, more frequent first), and in the position of the switch point, that is, where one blended word is cut off and switched to another (typically at the syllable boundary or at the onset/rime boundary). The regularities of blend formation can be related to the recognizability of the blended words.
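The splicing and overlap patterns described above can be sketched in a few lines of Python. This is a toy illustration only, not a linguistic model: the switch points are supplied by hand rather than located at syllable or onset/rime boundaries, and the overlap check is purely orthographic.

```python
def blend(w1, w2, cut1, cut2):
    """Join the beginning of w1 (up to index cut1) with the end of w2
    (from index cut2). Switch points are chosen by hand here; a real
    model would place them at syllable or onset/rime boundaries."""
    return w1[:cut1] + w2[cut2:]


def overlap(w1, w2):
    """Length of the longest stretch where the end of w1 matches the
    start of w2 -- the graphically identical segment at which blends
    like stoption (stop + option) fuse."""
    for n in range(min(len(w1), len(w2)), 0, -1):
        if w1.endswith(w2[:n]):
            return n
    return 0


# Clipping without overlap: brunch = break(fast) + (l)unch.
print(blend("breakfast", "lunch", 2, 1))          # brunch

# Overlapping blend: stop and option share 'op' at the seam.
n = overlap("stop", "option")
print("stop" + "option"[n:])                      # stoption
```

Note that this suffix-prefix check covers overlapping blends like stoption but not cases like motel, where the shared segment sits word-internally in both constituents.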

Article

Children’s acquisition of language is an amazing feat. Children master the syntax, the sentence structure of their language, through exposure and interaction with caregivers and others but, notably, with no formal tuition. How children come to be in command of the syntax of their language has been a topic of vigorous debate since Chomsky argued against Skinner’s claim that language is ‘verbal behavior.’ Chomsky argued that knowledge of language cannot be learned through experience alone but is guided by a genetic component. This language component, known as ‘Universal Grammar,’ is composed of abstract linguistic knowledge and a computational system that is special to language. The computational mechanisms of Universal Grammar give even young children the capacity to form hierarchical syntactic representations for the sentences they hear and produce. The abstract knowledge of language guides children’s hypotheses as they interact with the language input in their environment, ensuring they progress toward the adult grammar. An alternative school of thought denies the existence of a dedicated language component, arguing that knowledge of syntax is learned entirely through interactions with speakers of the language. Such ‘usage-based’ linguistic theories assume that language learning employs the same learning mechanisms that are used by other cognitive systems. Usage-based accounts of language development view children’s earliest productions as rote-learned phrases that lack internal structure. Knowledge of linguistic structure emerges gradually and in a piecemeal fashion, with frequency playing a large role in the order of emergence for different syntactic structures.

Article

Yingying Wang and Haihua Pan

Among Chinese reflexives, the simple reflexive ziji ‘self’ is best known not only for its licensing of long-distance binding, which violates Binding Condition A in the canonical Binding Theory, but also for special properties such as the asymmetry of the blocking effect. Different researchers have made great efforts to explain such phenomena from a syntactic or a semantic-pragmatic perspective, though there is as yet no consensus on what the mechanism really is. Besides being used as an anaphor, ziji can also be used as a generic pronoun and an intensifier. Moreover, Chinese has other simple reflexives such as zishen ‘self-body’ and benren ‘person proper’, and complex ones like ta-ziji ‘himself’ and ziji-benshen ‘self-self’. These reflexives again indicate the complexity of the anaphoric system of Chinese, which calls for further investigation so that we can have a better understanding of the diversity of the binding patterns in natural languages.

Article

Haihua Pan and Yuli Feng

Cross-linguistic data can add new insights to the development of semantic theories or even induce a shift of the research paradigm. The major topics in semantic studies, such as bare noun denotation, quantification, degree semantics, polarity items, donkey anaphora and binding principles, long-distance reflexives, negation, tense and aspect, and eventuality, are all discussed by semanticists working on the Chinese language. The issues of particular interest include but are not limited to: (i) the denotation of Chinese bare nouns; (ii) categorization and quantificational mapping strategies of Chinese quantifier expressions (i.e., whether the behaviors of Chinese quantifier expressions fit into the dichotomy of A-quantification and D-quantification); (iii) multiple uses of quantifier expressions (e.g., dou) and their implications for the interrelation of semantic concepts like distributivity, scalarity, exclusiveness, exhaustivity, maximality, etc.; (iv) the interaction among universal adverbials and that between universal adverbials and various types of noun phrases, which may pose a challenge to the Principle of Compositionality; (v) the semantics of degree expressions in Chinese; (vi) the non-interrogative uses of wh-phrases in Chinese and their influence on the theories of polarity items, free choice items, and epistemic indefinites; (vii) how the concepts of E-type pronouns and D-type pronouns are manifested in the Chinese language and whether such pronoun interpretations correspond to specific sentence types; (viii) what devices Chinese adopts to locate time (i.e., whether tense interpretation corresponds to certain syntactic projections or is solely determined by semantic information and pragmatic reasoning); (ix) how the interpretation of Chinese aspect markers can be captured by event structures, possible world semantics, and quantification; (x) how the long-distance binding of Chinese ziji ‘self’ and the blocking effect by first and second person pronouns can be accounted for by the existing theories of beliefs, attitude reports, and logophoricity; (xi) the distribution of various negation markers and their correspondence to the semantic properties of the predicates with which they are combined; and (xii) whether Chinese topic-comment structures are constrained by both semantic and pragmatic factors or by syntactic factors only.

Article

Daniel Recasens

The study of coarticulation—namely, the articulatory modification of a given speech sound arising from coproduction or overlap with neighboring sounds in the speech chain—has attracted the close attention of phonetic researchers for at least the last 60 years. Knowledge about coarticulatory patterns in speech should provide information about the planning mechanisms of consecutive consonants and vowels and the execution of coordinative articulatory structures during the production of those segmental units. Coarticulatory effects involve changes in articulatory displacement over time toward the left (anticipatory) or the right (carryover) of the trigger, and their typology and extent depend on the articulator under investigation (lip, velum, tongue, jaw, larynx) and the articulatory characteristics of the individual consonants and vowels, as well as nonsegmental factors such as speech rate, stress, and language. A challenge for studying coarticulation is that different speakers may use different coarticulatory mechanisms when producing a given phonemic sequence and they also use coarticulatory information differently for phonemic identification in perception. More knowledge about all these research issues should contribute to a deeper understanding of coarticulation deficits in speakers with speech disorders, how the ability to coarticulate develops from childhood to adulthood, and the extent to which the failure to compensate for coarticulatory effects may give rise to sound change.

Article

Matthew B. Winn and Peggy B. Nelson

Cochlear implants (CIs) are the most successful sensory implant in history, restoring the sensation of sound to thousands of persons who have severe to profound hearing loss. Implants do not recreate acoustic sound as most of us know it, but instead convey a rough representation of the temporal envelope of signals. This sparse signal, derived from the envelopes of narrowband frequency filters, is sufficient for enabling speech understanding in quiet environments for those who lose hearing as adults and is enough for most children to develop spoken language skills. The variability between users is huge, however, and is only partially understood. CIs provide acoustic information that is sufficient for the recognition of some aspects of spoken language, especially information that can be conveyed by temporal patterns, such as syllable timing, consonant voicing, and manner of articulation. They are insufficient for conveying pitch cues and separating speech from noise. There is a great need for improving our understanding of functional outcomes of CI success beyond measuring percent correct for word and sentence recognition. Moreover, greater understanding of the variability experienced by children, especially children and families from various social and cultural backgrounds, is of paramount importance. Future developments will no doubt expand the use of this remarkable device.

Article

Pius ten Hacken

Compounding is a word formation process based on the combination of lexical elements (words or stems). In the theoretical literature, compounding is discussed controversially, and the disagreement also concerns basic issues. In the study of compounding, the questions guiding research can be grouped into four main areas, labeled here as delimitation, classification, formation, and interpretation. Depending on the perspective taken in the research, some of these may be highlighted or backgrounded. In the delimitation of compounding, one question is how important it is to be able to determine for each expression unambiguously whether it is a compound or not. Compounding borders on syntax and on affixation. In some theoretical frameworks, it is not a problem to have more typical and less typical instances, without a precise boundary between them. However, if, for instance, word formation and syntax are strictly separated and compounding is in word formation, it is crucial to draw this borderline precisely. Another question is which types of criteria should be used to distinguish compounding from other phenomena. Criteria based on form, on syntactic properties, and on meaning have been used. In all cases, it is also controversial whether such criteria should be applied crosslinguistically. In the classification of compounds, the question of how important the distinction between the classes is for the theory in which they are used poses itself in much the same way as the corresponding question for the delimitation. A common classification uses headedness as a basis. Other criteria are based on the forms of the elements that are combined (e.g., stem vs. word) or on the semantic relationship between the components. Again, whether these criteria can and should be applied crosslinguistically is controversial. The issue of the formation rules for compounds is particularly prominent in frameworks that emphasize form-based properties of compounding. 
Rewrite rules for compounding have been proposed, generalizations over the selection of the input form (stem or word) and of linking elements, and rules for stress assignment. Compounds are generally thought of as consisting of two components, although these components may consist of more than one element themselves. For some types of compounds with three or more components, for example copulative compounds, a nonbinary structure has been proposed. The question of interpretation can be approached from two opposite perspectives. In a semasiological perspective, the meaning of a compound emerges from the interpretation of a given form. In an onomasiological perspective, the meaning precedes the formation in the sense that a form is selected to name a particular concept. The central question in the interpretation of compounds is how to determine the relationship between the two components. The range of possible interpretations can be constrained by the rules of compounding, by the semantics of the components, and by the context of use. A much-debated question concerns the relative importance of these factors.

Article

A computational learner needs three things: data to learn from, a class of representations to acquire, and a way to get from one to the other. Language acquisition is a very particular learning setting that can be defined in terms of the input (the child’s early linguistic experience) and the output (a grammar capable of generating a language very similar to the input). The input is infamously impoverished. As it relates to morphology, the vast majority of potential forms are never attested in the input, and those that are attested follow an extremely skewed frequency distribution. Learners nevertheless manage to acquire most details of their native morphologies after only a few years of input. That said, acquisition is not instantaneous, nor is it error-free. Children do make mistakes, and they do so in predictable ways which provide insights into their grammars and learning processes. The most elucidating computational model of morphology learning, from the perspective of a linguist, is one that learns morphology like a child does, that is, on child-like input and along a child-like developmental path. This article focuses on clarifying those aspects of morphology acquisition that should go into such a model. Section 1 describes the input with a focus on child-directed speech corpora and input sparsity. Section 2 discusses representations with focuses on productivity, developmental paths, and formal learnability. Section 3 surveys the range of learning tasks that guide research in computational linguistics and NLP with special focus on how they relate to the acquisition setting. The conclusion in Section 4 presents a summary of morphology acquisition as a learning problem, with Table 4 highlighting the key takeaways of this article.
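One concrete, widely discussed criterion for the productivity question mentioned above is Yang's Tolerance Principle, under which a rule applying to N eligible items remains productive only if the number of exceptions does not exceed N / ln N. The sketch below illustrates the arithmetic; the vocabulary counts are invented for illustration and are not drawn from any corpus.

```python
import math


def tolerates(n_items, n_exceptions):
    """Yang's Tolerance Principle: a rule over n_items eligible words
    remains productive iff n_exceptions <= n_items / ln(n_items)."""
    return n_exceptions <= n_items / math.log(n_items)


# Hypothetical child vocabulary: 120 verbs, 20 of which form
# irregular pasts. Threshold = 120 / ln(120) ~ 25.1 exceptions,
# so the regular -ed rule would count as productive here.
print(tolerates(120, 20))   # True

# With 30 irregulars the same rule would fail the threshold.
print(tolerates(120, 30))   # False
```

Because the threshold N / ln N grows sublinearly, a rule tolerates proportionally fewer exceptions as the relevant vocabulary grows, which is one way such models tie productivity to the developmental path.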

Article

Spanish and Portuguese are in contact along the extensive border of Brazil and its neighboring Spanish-speaking countries. Transnational interactions in some border communities allow for ephemeral language accommodations that occur when speakers of both languages communicate during social interactions and business transactions, facilitated by the lack of border control and similarities between the languages. A different situation is found in northern Uruguay, where Spanish and Portuguese are spoken in several border towns, presenting a case of stable and prolonged bilingualism that has allowed for the emergence of language contact phenomena such as lexical borrowings, code-switching, and structural convergence to a variable extent. However, due to urbanization and the presence of monolingual dialects in the surrounding communities, Portuguese and Spanish have not converged structurally into a single mixed code in urban areas and instead present clear continuities with their monolingual counterparts.

Article

Jack Sidnell

Conversation analysis is an approach to the study of social interaction and talk-in-interaction that, although rooted in the sociological study of everyday life, has exerted significant influence across the humanities and social sciences, including linguistics. Drawing on recordings (both audio and video) of naturalistic interaction (unscripted, non-elicited, etc.), conversation analysts attempt to describe the stable practices and underlying normative organizations of interaction by moving back and forth between the close study of singular instances and the analysis of patterns exhibited across collections of cases. Four important domains of research within conversation analysis are turn-taking, repair, action formation and ascription, and action sequencing.

Article

Grant Goodall

The term coordination refers to the juxtaposition of two or more conjuncts often linked by a conjunction such as and or or. The conjuncts (e.g., our friend and your teacher in Our friend and your teacher sent greetings) may be words or phrases of any type. They are a defining property of coordination, while the presence or absence of a conjunction depends on the specifics of the particular language. As a general phenomenon, coordination differs from subordination in that the conjuncts are typically symmetric in many ways: they often belong to like syntactic categories, and if nominal, each carries the same case. Additionally, if there is extraction, this must typically be out of all conjuncts in parallel, a phenomenon known as Across-the-Board extraction. Extraction of a single conjunct, or out of a single conjunct, is prohibited by the Coordinate Structure Constraint. Despite this overall symmetry, coordination does sometimes behave in an asymmetric fashion. Under certain circumstances, the conjuncts may be of unlike categories or extraction may occur out of one conjunct, but not another, thus yielding apparent violations of the Coordinate Structure Constraint. In addition, case and agreement show a wide range of complex and sometimes asymmetric behavior cross-linguistically. This tension between the symmetric and asymmetric properties of coordination is one of the reasons that coordination has remained an interesting analytical puzzle for many decades. Within the general area of coordination, a number of specific sentence types have generated much interest. One is Gapping, in which two sentences are conjoined, but material (often the verb) is missing from the middle of the second conjunct, as in Mary ate beans and John _ potatoes. 
Another is Right Node Raising, in which shared material from the right edge of sentential conjuncts is placed in the right periphery of the entire sentence, as in The chefs prepared __ and the customers ate __ [a very elaborately constructed dessert]. Finally, some languages have a phenomenon known as comitative coordination, in which a verb has two arguments, one morphologically plural and the other comitative (e.g., with the preposition with), but the plural argument may be understood as singular. English does not have this phenomenon, but if it did, a sentence like We went to the movies with John could be understood as John and I went to the movies.

Article

Pieter Muysken

Creole languages have a curious status in linguistics, and at the same time they often have very low prestige in the societies in which they are spoken. These two facts may be related, in part because they circle around notions such as “derived from” or “simplified” instead of “original.” Rather than simply taking the notion of “creole” as a given and trying to account for its properties and origin, this essay tries to explore the ways scholars have dealt with creoles. This involves, in particular, trying to see whether we can define “creoles” as a meaningful class of languages. There is a canonical list of languages that most specialists would not hesitate to call creoles, but the boundaries of the list and the criteria for being listed are vague. It also becomes difficult to distinguish sharply between pidgins and creoles, and likewise the boundaries between some languages claimed to be creoles and their lexifiers are rather vague. Several possible criteria to distinguish creoles will be discussed. Simply defining them as languages of which we know the point of birth may be a necessary, but not sufficient, criterion. Displacement is also an important criterion, necessary but not sufficient. Mixture is often characteristic of creoles but, it is argued, not crucial. Essential in any case is substantial restructuring of some lexifier language, which may take the form of morphosyntactic simplification, but it is dangerous to assume that simplification always has the same outcome. The combination of these criteria—time of genesis, displacement, mixture, restructuring—contributes to the status of a language as creole, but “creole” is far from a unified notion. There turn out to be several types of creoles, as well as a number of creole-like languages, which differ in the way these criteria combine.
Thus the proposal is made here to stop looking at creoles as a separate class and instead to take them as special cases of the general phenomenon that the way languages emerge and are used determines, to a considerable extent, their properties. This calls for a new, socially informed typology of languages, which will involve many different types of languages, including pidgins and creoles.