
Article

Pavel Caha

The term syncretism refers to a situation where two distinct morphosyntactic categories are expressed in the same way. For instance, in English, first and third person pronouns distinguish singular from plural (I vs. we, he/she/it vs. they), but the second person pronoun (you) doesn’t. Such facts are traditionally understood to mean that English grammar distinguishes singular from plural in all persons; in the second person, however, the two distinct meanings are expressed identically, and the form you is understood as syncretic between the two different grammatical meanings. It is important to note that while the two meanings are different, they are also related: both instances of you refer to the addressee. They differ in whether they refer just to the addressee or to a group including the addressee and someone else, as depicted here: (a) you (sg) = addressee; (b) you (pl) = addressee + others. The idea that syncretism reflects meaning similarity is what makes its study interesting; a lot of research has been dedicated to figuring out why two distinct categories are marked the same. There are a number of approaches to the issue of how relatedness in meaning is to be modeled. An old idea, going back to the Sanskrit grammarians, is to arrange the cells of a paradigm in such a way that syncretic cells are always adjacent. Modern approaches call such arrangements geometric spaces (McCreight & Chvany, 1991) or semantic maps (Haspelmath, 2003), with the goal of depicting meaning relatedness as spatial proximity in a conceptual space. A different idea is pursued in approaches based on decomposition into discrete meaning components called features (Jakobson, 1962). Both of these approaches acknowledge the existence of two different but related meanings. However, there are two additional logical possibilities.
First, one may adopt the position that the two paradigm cells correspond to a single abstract meaning, and that what appear to be different meanings/functions arise from the interaction between the abstract meaning and the specific context of use (see, for instance, Kayne, 2008, or Manzini & Savoia, 2011). Second, it could be that there are simply two different meanings expressed by two different markers, which accidentally happen to have the same phonology (like English two and too). The three approaches are mutually exclusive only when applied to a single phenomenon; each of them may be correct for a different set of cases.

Article

Ur Shlonsky and Giuliano Bocci

Syntactic cartography emerged in the 1990s as a result of the growing consensus in the field about the central role played by functional elements and by morphosyntactic features in syntax. The declared aim of this research direction is to draw maps of the structures of syntactic constituents, characterize their functional structure, and study the array and hierarchy of syntactically relevant features. Syntactic cartography has made significant empirical discoveries, and its methodology has been very influential in research in comparative syntax and morphosyntax. A central theme in current cartographic research concerns the source of the emerging featural/structural hierarchies. The idea that the functional hierarchy is not a primitive of Universal Grammar but derives from other principles does not undermine the scientific relevance of the study of the cartographic structures. On the contrary, the cartographic research aims at providing empirical evidence that may help answer these questions about the source of the hierarchy and shed light on how the computational principles and requirements of the interface with sound and meaning interact.

Article

A root is a fundamental minimal unit in words. Some languages do not allow their roots to appear on their own, as in the Semitic languages, where roots consist of consonant clusters that become stems or words by virtue of vowel insertion. Other languages appear to allow roots to surface without any additional morphology, as in English car. Roots are typically distinguished from affixes in that affixes need a host, although this varies across theories. Traditionally, roots have belonged to the domain of morphology. More recently, though, new theories have emerged according to which words are decomposed and subject to the same principles as sentences, making roots, rather than words, the fundamental building blocks of sentences. Contemporary syntactic theories of roots hold that they carry little if any grammatical information, which raises the question of how they acquire their seemingly grammatical properties. A central issue has been whether roots inherently have a lexical category or whether they are assigned one in some other way. Two main theories are Distributed Morphology and the exoskeletal approach to grammar. The former holds that roots merge with categorizers in the grammar: a root combined with a nominal categorizer becomes a noun, and a root combined with a verbal categorizer becomes a verb. The latter argues that roots are inserted into syntactic structures which carry the relevant category, meaning that the syntactic environment is created before roots are inserted into it. The two views make different predictions and differ in particular in their view of the status of empty categorizers.

Article

Peter Svenonius

Syntactic features are formal properties of syntactic objects which determine how they behave with respect to syntactic constraints and operations (such as selection, licensing, agreement, and movement). Syntactic features can be contrasted with properties which are purely phonological, morphological, or semantic, but many features are relevant both to syntax and morphology, or to syntax and semantics, or to all three components. The formal theory of syntactic features builds on the theory of phonological features, and normally takes morphosyntactic features (those expressed in morphology) to be the central case, with other, possibly more abstract features being modeled on the morphosyntactic ones. Many aspects of the formal nature of syntactic features are currently unresolved. Some traditions (such as HPSG) make use of rich feature structures as an analytic tool, while others (such as Minimalism) pursue simplicity in feature structures in the interest of descriptive restrictiveness. Nevertheless, features are essential to all explicit analyses.

Article

Heidi Harley and Shigeru Miyagawa

Ditransitive predicates select for two internal arguments, and hence minimally entail the participation of three entities in the event described by the verb. Canonical ditransitive verbs include give, show, and teach; in each case, the verb requires an agent (a giver, shower, or teacher, respectively), a theme (the thing given, shown, or taught), and a goal (the recipient, viewer, or student). The property of requiring two internal arguments makes ditransitive verbs syntactically unique. Selection in generative grammar is often modeled as syntactic sisterhood, so ditransitive verbs immediately raise the question of whether a verb may have two sisters, requiring a ternary-branching structure, or whether one of the two internal arguments is not in a sisterhood relation with the verb. Another important property of English ditransitive constructions is the two syntactic structures associated with them. In the so-called “double object construction,” or DOC, the goal and theme are both simple NPs and appear following the verb in the order V-goal-theme. In the “dative construction,” the goal is a PP rather than an NP and follows the theme in the order V-theme-to goal. Many ditransitive verbs allow both structures (e.g., give John a book/give a book to John). Some verbs are restricted to one or the other (e.g., demonstrate a technique to the class/*demonstrate the class a technique; cost John $20/*cost $20 to John). For verbs which allow both structures, there can be slightly different interpretations available for each. Crosslinguistic results reveal that the underlying structural distinctions and their interpretive correlates are pervasive, even in the face of significant surface differences between languages. The detailed analysis of these questions has led to considerable progress in generative syntax.
For example, the discovery of the hierarchical relationship between the first and second arguments of a ditransitive has been key in motivating the adoption of binary branching and the vP hypothesis. Many outstanding questions remain, however, and the syntactic encoding of ditransitivity continues to inform the development of grammatical theory.

Article

Sónia Frota and Marina Vigário

The syntax–phonology interface refers to the way syntax and phonology are interconnected. Although syntax and phonology constitute different language domains, it seems undisputed that they relate to each other in nontrivial ways. There are different theories about the syntax–phonology interface. They differ in how far each domain is seen as relevant to generalizations in the other domain, and in the types of information from each domain that are available to the other. Some theories see the interface as unlimited in the direction and types of syntax–phonology connections, with syntax impacting on phonology and phonology impacting on syntax. Other theories constrain mutual interaction to a set of specific syntactic phenomena (i.e., discourse-related) that may be influenced by a limited set of phonological phenomena (namely, heaviness and rhythm). In most theories, there is an asymmetrical relationship: specific types of syntactic information are available to phonology, whereas syntax is phonology-free. The role that syntax plays in phonology, as well as the types of syntactic information that are relevant to phonology, is also a matter of debate. At one extreme, Direct Reference Theories claim that phonological phenomena, such as external sandhi processes, refer directly to syntactic information. However, approaches arguing for a direct influence of syntax differ on the types of syntactic information needed to account for phonological phenomena, from syntactic heads and structural configurations (like c-command and government) to feature checking relationships and phase units. The precise syntactic information that is relevant to phonology may depend on (the particular version of) the theory of syntax assumed to account for syntax–phonology mapping. At the other extreme, Prosodic Hierarchy Theories propose that syntactic and phonological representations are fundamentally distinct and that the output of the syntax–phonology interface is prosodic structure. 
Under this view, phonological phenomena refer to the phonological domains defined in prosodic structure. The structure of phonological domains is built from the interaction of a limited set of syntactic information with phonological principles related to constituent size, weight, and eurhythmic effects, among others. The kind of syntactic information used in the computation of prosodic structure distinguishes between different Prosodic Hierarchy Theories: the relation-based approach makes reference to notions like head-complement, modifier-head relations, and syntactic branching, while the end-based approach focuses on edges of syntactic heads and maximal projections. Common to both approaches is the distinction between lexical and functional categories, with the latter being invisible to the syntax–phonology mapping. Besides accounting for external sandhi phenomena, prosodic structure interacts with other phonological representations, such as metrical structure and intonational structure. As shown by the theoretical diversity, the study of the syntax–phonology interface raises many fundamental questions. A systematic comparison among proposals with reference to empirical evidence is lacking. In addition, findings from language acquisition and development and language processing constitute novel sources of evidence that need to be taken into account. The syntax–phonology interface thus remains a challenging research field in the years to come.

Article

The non–Pama-Nyungan, Tangkic languages were spoken until recently in the southern Gulf of Carpentaria, Australia. The most extensively documented are Lardil, Kayardild, and Yukulta. Their phonology is notable for its opaque word-final deletion rules and extensive word-internal sandhi processes. The morphology contains complex relationships between sets of forms and sets of functions, due in part to major historical refunctionalizations, which have converted case markers into markers of tense and complementization, and verbal suffixes into case markers. Syntactic constituency is often marked by inflectional concord, frequently resulting in affix stacking. Yukulta in particular possesses a rich set of inflection-marking possibilities for core arguments, including detransitivized configurations and an inverse system. These relate in interesting ways historically to argument marking in Lardil and Kayardild. Subordinate clauses are marked for tense across most constituents other than the subject, and such tense marking is also found in main clauses in Lardil and Kayardild, which have lost the agreement- and tense-marking second-position clitic of Yukulta. Under specific conditions of co-reference between matrix and subordinate arguments, and under certain discourse conditions, clauses may be marked, on all or almost all words, by complementization markers, in addition to inflection for case and tense.

Article

This article introduces two phenomena studied within the domain of templatic morphology: clippings and word-and-pattern morphology, the latter usually associated with Semitic morphology. In both cases, the words are of invariant shape, sharing a prosodic structure defined in terms of number of syllables. This prosodic template, the core of the word structure, is often accompanied by one or more of the following properties: syllable structure, a vocalic pattern, and an affix. The data in this article, drawn from different languages, display the various ways in which these structural properties combine to determine the surface structure of the word. The invariant shape of Japanese clippings (e.g., suto ← sutoraiki ‘strike’) consists of a prosodic template alone, while that of English hypocoristics (e.g., Trudy ← Gertrude) consists of a prosodic template plus the suffix -i. The Arabic verb classes, such as class-I (e.g., sakan ‘to live’) and class-II (e.g., misek ‘to hold’), display a prosodic template plus a vocalic pattern, and the Hebrew verb class-III (e.g., hivdil ‘to distinguish’) displays a prosodic template, a vocalic pattern, and a prefix. Given these structural properties, the relation between a base and its derived form is expressed in terms of stem modification, which involves truncation (for the prosodic template) and melodic overwriting (for the vocalic pattern). The discussion suggests that templatic morphology is not limited to a particular lexicon type (core or periphery) but displays different degrees of restrictiveness.

Article

Marianne Mithun

Distinctions of time are among the most common notions expressed in morphology cross-linguistically. But the inventories of distinctions marked in individual languages are also varied. Some languages have few if any morphological markers pertaining to time, while others have extensive sets. Certain categories do recur pervasively across languages, but even these can vary subtly or even substantially in their uses. And they may be optional or obligatory. The grammar of time is traditionally divided into two domains: tense and aspect. Tense locates situations in time. Tense markers place them along a timeline with respect to some point of reference, a deictic center. The most common reference point is the moment of speech. Many languages have just three tense categories: past for situations before the time of speech, present for those overlapping with the moment of speech, and future for those subsequent to the moment of speech. But many languages have no morphological tense, some have just two categories, and some have many more. In some languages, morphological distinctions correspond fairly closely to identifiable times. There may, for example, be a today (hodiernal) past that contrasts with a yesterday (hesternal) past. In other languages, tense distinctions are more fluid. A recent past might be interpreted as ‘some time earlier today’ for a sentence meaning ‘I ate a banana’, but ‘within the last few months’ for a sentence meaning ‘I returned from Africa’. Languages also vary in the mobility of the deictic center. In some languages tense distinctions are systematically calibrated with respect to the moment of speaking. In others, the deictic center may shift. It may be established by the matrix clause in a complex sentence. Or it may be established by a larger topic of discussion. Tense is most often a verbal category, because verbs generally portray the most dynamic elements of a situation, but a number of languages distinguish tense on nouns as well. 
Aspect characterizes the internal temporal structure of a situation. There may be different forms of a verb ‘eat’, for example, in sentences meaning ‘I ate lamb chops’, ‘I was eating lamb chops’, and ‘I used to eat lamb chops’, though all are past tense. They may pick out one phase of the situation, with different forms for ‘I began to eat’, ‘I was eating’, and ‘I ate it up’. They may make finer distinctions, with different forms for ‘I took a bite’, ‘I nibbled’, and ‘I kept eating’. Morphological aspect distinctions are usually marked on verbs, but in some languages they can be marked on nominals as well. In some languages, there is a clear separation between the two: tense is expressed in one part of the morphology, and aspect in another. But often a single marker conveys both: a single suffix might mark both past tense and progressive aspect in a sentence meaning ‘I was eating’, for example. A tense distinction may be made only in a particular aspect, and/or a certain aspect distinction marked only in a particular tense. Like other areas of grammar, tense and aspect systems are constantly evolving. The meanings of markers can shift over time, as speakers apply them to new contexts, and as new markers enter the system, taking over some of their functions. Markers can shift for example from aspect to tense, or from derivation to inflection. The gradualness of such developments underlies the cross-linguistic differences we find in tense and aspect categories. There is a rich literature on tense and aspect. As more is learned about the inventories of categories that exist in individual languages and the ways speakers deploy them, theoretical models continue to grow in sophistication.

Article

Ana Deumert

The concept of Africa requires reflection: what does it mean to study a social phenomenon “in Africa”? Technology use in Africa is complex and diverse, showing various degrees of access across the continent (and in the Diaspora), and digital social inequalities—which are part and parcel of the political economy of communication—shape digital engagement. The rise of mobile phones, in particular, has enabled the emergence of technologically mediated literacies, text-messaging among them. Text-messaging is defined not only by a particular mode of communication (typically written on mobile phones, visual, digital) but also by the topics it favors (intimate, relational, sociable, ludic) and its ways of writing (short, non-standard texts that are creative as well as multilingual). The genre of text-messaging thus includes not only short message service (SMS) and (mobile) instant-messaging (which one might call prototypical one-to-one text messages), but also Twitter, an application that, like texting, favors brevity of expression and allows for one-to-many conversations. Access to Twitter is still limited for many Africans, but as ownership of smartphones is growing, so is Twitter use, and the African “Twittersphere” is emerging as an important pan-African space. At times, discussions are very local (as on Ghanaian Twitter), at other times regional (East African Twitter) or global (African Twitter and Black Twitter); all these are emic, folksonomic terms, assigned and discussed by users. Although former colonial languages, especially English, dominate in many prototypical text messages and on Twitter, the genre also provides important opportunities for writing in African languages. The choices made in the digital space echo the well-known debate between Chinua Achebe and Ngũgĩ wa Thiong’o: the Africanization of the former colonial languages versus writing in African languages.
In addition, digital writers engage in multilingual writing, combining diverse languages in one text, and thus offer new ways of writing locally as well as shaping a digitally-mediated pan-African voice that draws on global strategies as well as local meaning.

Article

Annie Zaenen

Hearers and readers make inferences on the basis of what they hear or read. These inferences are partly determined by the linguistic form that the writer or speaker chooses to give to her utterance. The inferences can be about the state of the world that the speaker or writer wants the hearer or reader to conclude is pertinent, or they can be about the attitude of the speaker or writer vis-à-vis this state of affairs. The attention here goes to inferences of the first type. Research in semantics and pragmatics has isolated a number of linguistic phenomena that make specific contributions to the process of inference. Broadly, entailments of asserted material, presuppositions (e.g., factive constructions), and invited inferences (especially scalar implicatures) can be distinguished. While we make these inferences all the time, they have been studied only piecemeal in theoretical linguistics. When attempts are made to build natural language understanding systems, the need for a more systematic and wholesale approach to the problem is felt. Some of the approaches developed in Natural Language Processing are based on linguistic insights, whereas others use methods that do not require (full) semantic analysis. In this article, I give an overview of the main linguistic issues and of a variety of computational approaches, especially those stimulated by the Recognizing Textual Entailment (RTE) challenges first proposed in 2004.

Article

Theme  

Eva Hajičová

In the linguistic literature, the term theme has several interpretations, one of which relates to discourse analysis and two others to sentence structure. In a more general (or global) sense, one may speak about the theme or topic (or topics) of a text (or discourse), that is, analyze relations going beyond the sentence boundary and try to identify some characteristic subject(s) for the text (discourse) as a whole. This analysis falls mostly within the domain of information retrieval and only partially takes linguistically based considerations into account. The main linguistically based usage of the term theme concerns relations within the sentence. Theme is understood to be one of the (syntactico-)semantic relations and is used as the label of one of the arguments of the verb; the whole network of these relations is called thematic relations or roles (or, in the terminology of Chomskyan generative theory, theta roles and theta grids). Alternatively, from the point of view of the communicative function of language as reflected in the information structure of the sentence, the theme (or topic) of a sentence is distinguished from the rest of it (the rheme, or focus, as the case may be), and attention is paid to the semantic consequences of the dichotomy (especially in relation to presuppositions and negation) and to its realization (morphological, syntactic, prosodic) in the surface shape of the sentence. In some approaches to morphosyntactic analysis, the term theme is also used to refer to the part of the word to which inflections are added, typically composed of the root and an added vowel.

Article

Paul de Lacy

Phonology has both a taxonomic/descriptive and cognitive meaning. In the taxonomic/descriptive context, it refers to speech sound systems. As a cognitive term, it refers to a part of the brain’s ability to produce and perceive speech sounds. This article focuses on research in the cognitive domain. The brain does not simply record speech sounds and “play them back.” It abstracts over speech sounds, and transforms the abstractions in nontrivial ways. Phonological cognition is about what those abstractions are, and how they are transformed in perception and production. There are many theories about phonological cognition. Some theories see it as the result of domain-general mechanisms, such as analogy over a Lexicon. Other theories locate it in an encapsulated module that is genetically specified, and has innate propositional content. In production, this module takes as its input phonological material from a Lexicon, and refers to syntactic and morphological structure in producing an output, which involves nontrivial transformation. In some theories, the output is instructions for articulator movement, which result in speech sounds; in other theories, the output goes to the Phonetic module. In perception, a continuous acoustic signal is mapped onto a phonetic representation, which is then mapped onto underlying forms via the Phonological module, which are then matched to lexical entries. Exactly which empirical phenomena phonological cognition is responsible for depends on the theory. At one extreme, it accounts for all human speech sound patterns and realization. At the other extreme, it is little more than a way of abstracting over speech sounds. In the most popular Generative conception, it explains some sound patterns, with other modules (e.g., the Lexicon and Phonetic module) accounting for others. 
There are many types of patterns, with names such as “assimilation,” “deletion,” and “neutralization”—a great deal of phonological research focuses on determining which patterns there are, which aspects are universal and which are language-particular, and whether/how phonological cognition is responsible for them. Phonological computation connects with other cognitive structures. In the Generative T-model, the phonological module’s input includes morphs of Lexical items along with at least some morphological and syntactic structure; the output is sent to either a Phonetic module, or directly to the neuro-motor interface, resulting in articulator movement. However, other theories propose that these modules’ computation proceeds in parallel, and that there is bidirectional communication between them. The study of phonological cognition is a young science, so many fundamental questions remain to be answered. There are currently many different theories, and theoretical diversity over the past few decades has increased rather than consolidated. In addition, new research methods have been developed and older ones have been refined, providing novel sources of evidence. Consequently, phonological research is both lively and challenging, and is likely to remain that way for some time to come.

Article

Amalia Arvaniti

Prosody is an umbrella term used to cover a variety of interconnected and interacting phenomena, namely stress, rhythm, phrasing, and intonation. The phonetic expression of prosody relies on a number of parameters, including duration, amplitude, and fundamental frequency (F0). The same parameters are also used to encode lexical contrasts (such as tone), as well as paralinguistic phenomena (such as anger, boredom, and excitement). Further, the exact function and organization of the phonetic parameters used for prosody differ across languages. These considerations make it imperative to distinguish the linguistic phenomena that make up prosody from their phonetic exponents, and similarly to distinguish between the linguistic and paralinguistic uses of the latter. A comprehensive understanding of prosody relies on the idea that speech is prosodically organized into phrasal constituents, the edges of which are phonetically marked in a number of ways, for example, by articulatory strengthening in the beginning and lengthening at the end. Phrases are also internally organized either by stress, that is around syllables that are more salient relative to others (as in English and Spanish), or by the repetition of a relatively stable tonal pattern over short phrases (as in Korean, Japanese, and French). Both types of organization give rise to rhythm, the perception of speech as consisting of groups of a similar and repetitive pattern. Tonal specification over phrases is also used for intonation purposes, that is, to mark phrasal boundaries, and express information structure and pragmatic meaning. Taken together, the components of prosody help with the organization and planning of speech, while prosodic cues are used by listeners during both language acquisition and speech processing. 
Importantly, prosody does not operate independently of segments; rather, it profoundly affects segment realization, making the incorporation of an understanding of prosody into experimental design essential for most phonetic research.

Article

Tone  

Bert Remijsen

When the phonological form of a morpheme—a unit of meaning that cannot be decomposed further into smaller units of meaning—involves a particular melodic pattern as part of its sound shape, this morpheme is specified for tone. In view of this definition, phrase- and utterance-level melodies—also known as intonation—are not to be interpreted as instances of tone. That is, whereas the question “Tomorrow?” may be uttered with a rising melody, this melody is not tone, because it is not part of the lexical specification of the morpheme tomorrow. A language in which morphemes are specified for particular melodies is called a tone language. It is not the case that in a tone language every morpheme, content word, or syllable is specified for tone. Tonal specification can be highly restricted within the lexicon. Examples of such sparsely specified tone languages include Swedish, Japanese, and Ekagi (a language spoken in the Indonesian part of New Guinea); in these languages, only some syllables in some words are specified for tone. There are also tone languages where each and every syllable of each and every word has a specification. Vietnamese and Shilluk (a language spoken in South Sudan) illustrate this configuration. Tone languages also vary greatly in terms of the inventory of phonological tone forms. The smallest possible inventory contrasts one specification with the absence of specification. But there are also tone languages with eight or more distinctive tone categories. The primary physical (acoustic) correlate of the tone categories is fundamental frequency (F0), which is perceived as pitch. However, other phonetic correlates are often involved as well, in particular voice quality. Tone plays a prominent role in the study of phonology because of its structural complexity.
That is, in many languages, the way a tone surfaces is conditioned by factors such as the segmental composition of the morpheme, the tonal specifications of surrounding constituents, morphosyntax, and intonation. On top of this, tone is diachronically unstable. This means that, when a language has tone, we can expect to find considerable variation between dialects, and more variation than in other parts of the sound system.

Article

Alexis Michaud and Bonny Sands

Tonogenesis is the development of distinctive tone from earlier non-tonal contrasts. A well-understood case is that of Vietnamese (similar in its essentials to Chinese and many languages of the Tai-Kadai and Hmong-Mien language families), where the loss of final laryngeal consonants led to the creation of three tones, and the tones later multiplied as voicing oppositions on initial consonants waned. This is by no means the only attested diachronic scenario, however. Besides well-known cases of tonogenesis in East Asia, this survey includes discussions of less well-known cases from language families including Athabaskan, Chadic, Khoe, and Niger-Congo. There is tonogenetic potential in various series of phonemes: glottalized versus plain consonants, unvoiced versus voiced, aspirated versus unaspirated, geminate versus simple (and, more generally, tense versus lax), and even among vowels, whose intrinsic fundamental frequency can transphonologize to tone. We draw attention to tonogenetic triggers that are less well known, such as [+ATR] vowels, aspirates, and morphotonological alternations. The ways in which these common phonetic precursors to tone play out in a given language depend on phonological factors, as well as on other dimensions of a language’s structure and on patterns of language contact, resulting in a great diversity of evolutionary paths in tone systems. In some language families (such as Niger-Congo and Khoe), recent tonal developments are increasingly well understood, but working out the origin of the earliest tonal contrasts (which are likely to date back thousands of years earlier than tonogenesis among Sino-Tibetan languages, for instance) remains a mid- to long-term research goal for comparative-historical research.

Article

Despite their demographic scale, intra-continental African migrations have hardly been taken into account in theorizing on migration in transnational studies and related fields. Research questions have been framed predominantly from a South-to-North perspective on population movements. This may be a consequence of the fact that the extent and complexity of modern population movements and contacts within Africa are hard to assess, owing mainly to a lack of reliable data. For sociolinguists the challenge is even greater, partly because of the spotty knowledge of linguistic diversity in the continent and the scarcity of adequate sociolinguistic descriptions of the ways in which Africans manage their language repertoires. Despite these limitations, a sociolinguistics of intra-continental African migrations will contribute significantly to a better understanding of the conditions, nature, and periodicity of population contacts and interactional dynamics. It will help explain why geographic mobility entails reshaping sociocultural practices, including the language repertoires of both the migrants and the people they come in contact with. Moreover, the peculiarity of African economies, which rely heavily on informal non-institutionalized practices, prompts a rethinking of assumptions regarding the acquisition of the host country’s language(s) as the primary facilitator of the migrants’ socioeconomic inclusion. A sociolinguistic understanding of migrations within Africa can help to formulate new questions and enrich the complex picture that the study of other parts of the world has already shaped.

Article

Wolf Dietrich

“Tupian” is a common term applied by linguists to a linguistic stock of seven families spread across great parts of South America. Tupian languages share a large number of structural and morphological similarities, which makes a genetic relationship very probable. Four families (Arikém, Mondé, Tuparí, and Raramarama-Poruborá) are still limited to the Madeira-Guaporé region in Brazil, considered by some scholars to be the Tupí homeland. Other families and branches would have migrated, in ancient times, down the Amazon (Mundurukú, Mawé) and up the Xingú River (Juruna, Awetí). Only the Tupí-Guaraní branch, which comprises about 40 living languages, spread mainly to the south. Two Tupí-Guaraní languages played an important part in the Portuguese and Spanish colonization of South America: Tupinambá on the Brazilian coast and Guaraní in colonial Paraguay. In the early 21st century, Guaraní is spoken by more than six million non-Indian people in Paraguay and in adjacent parts of Argentina and Brazil. Tupí-Guaraní (TG) is an artificial term used by linguists to denote the family composed of eight subgroups of languages, one of them being the Guaraní subgroup and another the extinct Tupinambá and its varieties. Important phonological characteristics of Tupian languages are nasality and the occurrence of a high central vowel /ɨ/, a glottal stop /ʔ/, and final consonants, especially plosives in coda position. Nasality seems to be a characteristic common to all branches of the family. Most of them show phenomena such as nasal harmony, also called nasal assimilation or regressive nasalization by some scholars. Tupian languages have a rich morphology expressed mainly by suffixes and prefixes, though particles are also important for expressing grammatical categories. Verbal morphology is characterized by generally rich devices for valence-changing formations. Relational inflection is one of the most striking phenomena of TG nominal phrases.
It allows the marking of a noun’s determination by a preceding adjunct, its syntactic transformation into a nominal predicate, or the absence of any relation. Relational inflection also occurs, in part, in branches and families other than Tupí-Guaraní. Verbal person marking is realized by prefixing in most languages; some languages of the Tuparí and Juruna families, however, use only free pronouns. Tupian syntax is based on the predication of both verbs and nouns. Subordinate clauses, such as relative clauses, are formed by nominalization, while adverbial clauses are marked by specific particles or postpositions on the predicate. Traditional word order is SOV.

Article

Stergios Chatzikyriakidis and Robin Cooper

Type theory is a regime for classifying objects (including events) into categories called types. It was originally designed to overcome problems in the foundations of mathematics arising from Russell’s paradox. It has made an immense contribution to the study of logic and computer science and has also played a central role in formal semantics for natural languages since the initial work of Richard Montague, which built on the typed λ-calculus. More recently, type theories in the tradition created by Per Martin-Löf have presented an important alternative to Montague’s type theory for semantic analysis. These more modern type theories yield a rich collection of types that take on the role of representing semantic content, rather than simply structuring the universe in order to avoid paradoxes.
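The core idea of classifying denotations into types can be illustrated with a small sketch in the Montague style: basic types e (entities) and t (truth values), with functional types built from them. The following toy model is purely illustrative (the lexical items and the two-entity domain are invented, not taken from the article):

```python
# Montague-style semantic types, sketched in Python (illustrative toy model).
# Type e (entities) is modeled as str; type t (truth values) as bool.

Entity = str  # type e

# An intransitive verb like "sleeps" denotes a function of type <e,t>:
def sleeps(x: Entity) -> bool:
    return x == "Mary"  # toy model: only Mary sleeps

# A quantifier like "everyone" has the higher type <<e,t>,t> --
# it takes a property and returns a truth value:
def everyone(p) -> bool:
    domain = ["Mary", "John"]  # invented two-entity universe
    return all(p(x) for x in domain)

print(sleeps("Mary"))    # True
print(everyone(sleeps))  # False: John does not sleep in this model
```

The point of the sketch is that the type system constrains composition: "everyone sleeps" is well-typed because a <<e,t>,t> quantifier applies to an <e,t> predicate, whereas applying sleeps to everyone would be a type error.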

Article

Holger Diessel

Throughout the 20th century, structuralist and generative linguists argued that the study of the language system (langue, competence) must be separated from the study of language use (parole, performance). This view of language has been called into question by usage-based linguists, who argue that the structure and organization of a speaker’s linguistic knowledge are the product of language use, or performance. On this account, language is seen as a dynamic system of fluid categories and flexible constraints that are constantly restructured and reorganized under the pressure of domain-general cognitive processes, processes involved not only in the use of language but also in other cognitive phenomena such as vision and (joint) attention. The general goal of usage-based linguistics is to develop a framework for analyzing the emergence of linguistic structure and meaning. In order to understand the dynamics of the language system, usage-based linguists study how languages evolve, both in history and in language acquisition. One aspect that plays an important role in this approach is frequency of occurrence. Because frequency strengthens the representation of linguistic elements in memory, it facilitates the activation and processing of words, categories, and constructions, which in turn can have long-lasting effects on the development and organization of the linguistic system. A second aspect that has been very prominent in the usage-based study of grammar concerns the relationship between lexical and structural knowledge. Since abstract representations of linguistic structure are derived from language users’ experience with concrete linguistic tokens, grammatical patterns are generally associated with particular lexical expressions.