Philippe Schlenker, Emmanuel Chemla, and Klaus Zuberbühler
Rich data gathered in experimental primatology in the last 40 years are beginning to benefit from analytical methods used in contemporary linguistics, especially in the area of semantics and pragmatics. These methods have started to clarify five questions: (i) What morphology and syntax, if any, do monkey calls have? (ii) What is the ‘lexical meaning’ of individual calls? (iii) How are the meanings of individual calls combined? (iv) How do calls or call sequences compete with each other when several are appropriate in a given situation? (v) How did the form and meaning of calls evolve? Four case studies from this emerging field of ‘primate linguistics’ provide initial answers, pertaining to Old World monkeys (putty-nosed monkeys, Campbell’s monkeys, and colobus monkeys) and New World monkeys (black-fronted Titi monkeys). The morphology mostly involves simple calls, but in at least one case (Campbell’s -oo) one finds a root–suffix structure, possibly with a compositional semantics. The syntax is in all clear cases simple and finite-state. With respect to meaning, nearly all cases of call concatenation can be analyzed as being semantically conjunctive. But a key question concerns the division of labor between semantics, pragmatics, and the environmental context (‘world’ knowledge and context change). An apparent case of dialectal variation in the semantics (Campbell’s krak) can arguably be analyzed away if one posits sufficiently powerful mechanisms of competition among calls, akin to scalar implicatures. An apparent case of noncompositionality (putty-nosed pyow–hack sequences) can be analyzed away if one further posits a pragmatic principle of ‘urgency’. Finally, rich Titi sequences in which two calls are re-arranged in complex ways so as to reflect information about both predator identity and location are argued not to involve a complex syntax/semantics interface, but rather a fine-grained interaction between simple call meanings and the environmental context. 
With respect to call evolution, the remarkable preservation of call form and function over millions of years should make it possible to lay the groundwork for an evolutionary monkey linguistics, illustrated with cercopithecine booms.
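The claim that attested call syntax is simple and finite-state can be made concrete with a toy acceptor. The grammar below (sequences of pyow calls optionally followed by hack calls) is a deliberately simplified illustration, loosely inspired by the putty-nosed pyow–hack sequences mentioned above; it is not the analysis argued for in the article.

```python
# Toy finite-state acceptor for call sequences of the form pyow+ hack*.
# An illustrative stand-in for a simple, finite-state call syntax,
# not the grammar proposed for any actual monkey species.

def accepts(sequence):
    """Return True if the call sequence matches pyow+ hack*."""
    state = "start"  # states: start -> pyows -> hacks
    for call in sequence.split():
        if state == "start" and call == "pyow":
            state = "pyows"
        elif state == "pyows" and call == "pyow":
            pass  # stay in the pyow run
        elif state in ("pyows", "hacks") and call == "hack":
            state = "hacks"
        else:
            return False  # no transition: sequence is ill-formed
    return state in ("pyows", "hacks")

print(accepts("pyow pyow hack hack"))  # True
print(accepts("hack pyow"))            # False
```

Because such patterns are regular, they could equally be checked with a regular expression; the explicit state machine simply makes the finite-state character of the grammar visible.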
Diane Brentari, Jordan Fenlon, and Kearsy Cormier
Sign language phonology is the abstract grammatical component where primitive structural units are combined to create an infinite number of meaningful utterances. Although the notion of phonology is traditionally based on sound systems, the term also covers the equivalent component of the grammar in sign languages, because phonology is tied to grammatical organization, not to particular content. On this definition, phonology encompasses all phenomena organized by constituents such as the syllable, the phonological word, and higher-level prosodic units, as well as structural primitives such as features, timing units, and autosegmental tiers, regardless of whether the content is vocal or manual. Therefore, the units of sign language phonology and their phonotactics provide opportunities to observe the interaction between phonology and other components of the grammar in a different communication channel, or modality. This comparison allows us to better understand how the modality of a language influences its phonological system.
Klaus Beyer and Henning Schreiber
The Social Network Analysis approach (SNA), also known as sociometrics or actor-network analysis, investigates social structure on the basis of empirically recorded social ties between actors. It thereby aims to explain, for example, the flow of information, the spread of innovations, or even of pathogens through the network, in terms of actor roles and their relative positions in the network, on the basis of quantitative and qualitative analyses. While the approach has a strong mathematical and statistical component, the identification of pertinent social ties also requires a strong ethnographic background. With regard to social categorization, SNA is well suited as a bootstrapping technique for highly dynamic communities and under-documented contexts. Currently, SNA is widely applied in various academic fields. For sociolinguists, it offers a framework for explaining the patterning of linguistic variation and mechanisms of language change in a given speech community.
The social tie perspective developed around 1940 in sociology and social anthropology, building on the ideas of Simmel, and was later applied in fields such as innovation theory. In sociolinguistics, it is strongly connected to the seminal work of Lesley and James Milroy and their Belfast studies (1978, 1985). These authors demonstrate that synchronic speaker variation is not only governed by broad societal categories but is also a function of communicative interaction between speakers. They argue that the high level of resistance to linguistic change in the studied community is a result of strong and multiplex ties between the actors. Their approach has been followed by various authors, including Gal, Lippi-Green, and Labov, and discussed for a variety of settings; most of these, however, are located in the Western world.
The methodological advantages could make SNA the preferred framework for variation studies in Africa, given the prevailing dynamic multilingual conditions, often against the backdrop of less standardized languages. However, relatively few studies using SNA as a framework have yet been conducted. This is possibly due to the quite demanding methodological requirements, the overall effort, and the often highly complex linguistic backgrounds. A further potential obstacle is the pace of theoretical development in SNA. Since its introduction to sociolinguistics, various new measures and statistical techniques have been developed by the fast-growing SNA community. Absorbing this vast amount of recent literature and testing new concepts is likewise a challenge for the application of SNA in sociolinguistics.
Nevertheless, the overall methodological effort of SNA has been much reduced by advancements in recording technology, data processing, and the introduction of SNA software (UCINET) and packages for network statistics in R (‘sna’). In the field of African sociolinguistics, a more recent version of SNA has been implemented in a study on contact-induced variation and change in Pana and Samo, two speech communities in the northwest of Burkina Faso. Moreover, further enhanced applications are on the way for Senegal and Cameroon, and even more applications in the field of African languages are to be expected.
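Two of the most basic SNA measures, actor degree and network density, can be sketched in a few lines. The five-actor network below is invented purely for illustration; actual studies would compute these measures (and far richer ones) with UCINET or the R ‘sna’ package mentioned above.

```python
# Minimal sketch of two basic SNA measures, degree and network density,
# in pure Python. The five-actor tie set is a made-up toy example.

ties = {("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E")}
actors = {a for tie in ties for a in tie}

def degree(actor):
    """Number of ties an actor participates in."""
    return sum(actor in tie for tie in ties)

def density():
    """Proportion of possible undirected ties that are realized."""
    n = len(actors)
    return len(ties) / (n * (n - 1) / 2)

print({a: degree(a) for a in sorted(actors)})
# {'A': 2, 'B': 2, 'C': 3, 'D': 2, 'E': 1}
print(density())  # 5 of 10 possible ties -> 0.5
```

Density is the proportion of possible ties that are actually realized; in the Milroys’ terms, a dense, multiplex network is one where this proportion is high and individual ties bundle several role relations at once.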
The study of sociolinguistics constitutes a vast and complex topic that has yielded an extensive and multifaceted body of scholarship. Language is fundamentally at work in how we operate as individuals, as members of various communities, and within cultures and societies. As speakers, we learn not only the structure of a given language; we also learn cultural and social norms about how to use language and what content to communicate. We use language to navigate expectations, to engage in interpersonal interaction, and to go along with or to speak out against social structures and systems.
Sociolinguistics aims to study the effects of language use within and upon societies and the reciprocal effects of social organization and social contexts on language use. In contemporary theoretical perspectives, sociolinguists view language and society as being mutually constitutive: each influences the other in ways that are inseparable and complex. Language is imbued with and carries social, cultural, and personal meaning. Through the use of linguistic markers, speakers symbolically define self and society. Simply put, language is not merely content; rather, it is something that we do, and it affects how we act and interact as social beings in the world.
Language is a social product with rich variation along individual, community, cultural, and societal lines. For this reason, context matters in sociolinguistic research. Social categories such as gender, race/ethnicity, social class, nationality, etc., are socially constructed, with considerable variation within and among categories. Attributes such as “female” or “upper class” do not have universal effects on linguistic behavior, and sociolinguists cannot assume that the most interesting linguistic differences will be between groups of speakers in any simple, binary fashion. Sociolinguistic research thus aims to explore social and linguistic diversity in order to better understand how we, as speakers, use language to inhabit and negotiate our many personal, cultural, and social identities and roles.
Spanish in Contact with South-American Languages, with Special Emphasis on Andean and Paraguayan Spanish
The effect of indigenous languages of South America on Spanish is strongest in the lexicon (especially with toponyms, zoonyms, and phytonyms) and identifiable, but much more modest, in phonetics/phonology (e.g., vowel variability and reduction and nasalization) and morphosyntax (e.g., the different use of selected verb forms and constituent order). The phenomena called Media Lengua and Yopará differ from this picture in that the former roughly consists of a Spanish lexicon combined with Quechua grammar, while the latter is a fluid Guaraní-based system with numerous borrowings from Spanish. The effects of contact are socially and areally variable, with low-prestige, typically rural, varieties of South American Spanish showing the most significant systemic impact, while high-prestige, typically urban, varieties (including the national standards) show little more than lexical borrowings in the semantic fields mentioned. This result is hardly surprising, due to historical/sociolinguistic factors (which often led to situations of dominance and language shift) and to the typological dissimilarities between Spanish and the indigenous languages (which typically hinder borrowing, especially of morphological elements).
Speech acts are acts that can, but need not, be carried out by saying and meaning that one is doing so. Many view speech acts as the central units of communication, with phonological, morphological, syntactic, and semantic properties of an utterance serving as ways of identifying whether the speaker is making a promise, a prediction, a statement, or a threat. Some speech acts are momentous, since an appropriate authority can, for instance, declare war or sentence a defendant to prison, by saying that he or she is doing so. Speech acts are typically analyzed into two distinct components: a content dimension (corresponding to what is being said), and a force dimension (corresponding to how what is being said is being expressed). The grammatical mood of the sentence used in a speech act signals, but does not uniquely determine, the force of the speech act being performed. A special type of speech act is the performative, which makes explicit the force of the utterance. Although it has been famously claimed that performatives such as “I promise to be there on time” are neither true nor false, current scholarly consensus rejects this view. The study of so-called infelicities concerns the ways in which speech acts might either be defective (say by being insincere) or fail completely.
Recent theorizing about speech acts tends to fall either into conventionalist or intentionalist traditions: the former sees speech acts as analogous to moves in a game, with such acts being governed by rules of the form “doing A counts as doing B”; the latter eschews game-like rules and instead sees speech acts as governed by communicative intentions only. Debate also arises over the extent to which speakers can perform one speech act indirectly by performing another. Skeptics about the frequency of such events contend that many alleged indirect speech acts should be seen instead as expressions of attitudes. New developments in speech act theory also situate them in larger conversational frameworks, such as inquiries, debates, or deliberations made in the course of planning. In addition, recent scholarship has identified a type of oppression against under-represented groups as occurring through “silencing”: a speaker attempts to use a speech act to protect her autonomy, but the putative act fails due to her unjust milieu.
Kodi Weatherholtz and T. Florian Jaeger
The seeming ease with which we usually understand each other belies the complexity of the processes that underlie speech perception. One of the biggest computational challenges is that different talkers realize the same speech categories (e.g., /p/) in physically different ways. We review the mixture of processes that enable robust speech understanding across talkers despite this lack of invariance. These processes range from automatic pre-speech adjustments of the distribution of energy over acoustic frequencies (normalization) to implicit statistical learning of talker-specific properties (adaptation, perceptual recalibration) to the generalization of these patterns across groups of talkers (e.g., gender differences).
Patrice Speeter Beddor
In their conversational interactions with speakers, listeners aim to understand what a speaker is saying, that is, they aim to arrive at the linguistic message conveyed by the input speech signal, a message interwoven with social and other information. Across the more than 60 years of speech perception research, a foundational issue has been to account for listeners’ ability to achieve stable linguistic percepts corresponding to the speaker’s intended message despite highly variable acoustic signals. Research has especially focused on acoustic variants attributable to the phonetic context in which a given phonological form occurs and on variants attributable to the particular speaker who produced the signal. These context- and speaker-dependent variants reveal the complex, albeit informationally rich, patterns that bombard listeners in their everyday interactions.
How do listeners deal with these variable acoustic patterns? Empirical studies that address this question provide clear evidence that perception is a malleable, dynamic, and active process. Findings show that listeners perceptually factor out, or compensate for, the variation due to context yet also use that same variation in deciding what a speaker has said. Similarly, listeners adjust, or normalize, for the variation introduced by speakers who differ in their anatomical and socio-indexical characteristics, yet listeners also use that socially structured variation to facilitate their linguistic judgments. Investigations of the time course of perception show that these perceptual accommodations occur rapidly, as the acoustic signal unfolds in real time. Thus, listeners closely attend to the phonetic details made available by different contexts and different speakers. The structured, lawful nature of this variation informs perception.
Speech perception changes over time not only in listeners’ moment-by-moment processing, but also across the life span of individuals as they acquire their native language(s), non-native languages, and new dialects and as they encounter other novel speech experiences. These listener-specific experiences contribute to individual differences in perceptual processing. However, even listeners from linguistically homogenous backgrounds differ in their attention to the various acoustic properties that simultaneously convey linguistically and socially meaningful information. The nature and source of listener-specific perceptual strategies serve as an important window on perceptual processing and on how that processing might contribute to sound change.
Theories of speech perception aim to explain how listeners interpret the input acoustic signal as linguistic forms. A theoretical account should specify the principles that underlie accurate, stable, flexible, and dynamic perception as achieved by different listeners in different contexts. Current theories differ in their conception of the nature of the information that listeners recover from the acoustic signal, with one fundamental distinction being whether the recovered information is gestural or auditory. Current approaches also differ in their conception of the nature of phonological representations in relation to speech perception, although there is increasing consensus that these representations are more detailed than the abstract, invariant representations of traditional formal phonology. Ongoing work in this area investigates how both abstract information and detailed acoustic information are stored and retrieved, and how best to integrate these types of information in a single theoretical model.
Beata Moskal and Peter W. Smith
Headedness is a pervasive phenomenon throughout different components of the grammar, which fundamentally encodes an asymmetry between two or more items, such that one is in some sense more important than the other(s). In phonology for instance, the nucleus is the head of the syllable, and not the onset or the coda, whereas in syntax, the verb is the head of a verb phrase, rather than any complements or specifiers that it combines with. It makes sense, then, to question whether the notion of headedness applies to the morphology as well; specifically, do words—complex or simplex—have heads that determine the properties of the word as a whole? Intuitively it makes sense that words have heads: a noun that is derived from an adjective like redness can function only as a noun, and the presence of red in the structure does not confer on the whole form the ability to function as an adjective as well.
However, this question is a complex one for a variety of reasons. While it seems clear for some phenomena, such as category determination, that words have heads, there is a lot of evidence to suggest that the properties of complex words are not all derived from one morpheme, but rather that the features are gathered from potentially numerous morphemes within the same word. Furthermore, properties that characterize heads as opposed to dependents, particularly those based on syntactic behavior, do not unambiguously pick out a single element: the tests applied to morphology at times pick out affixes, and at times bases, as the head of the whole word.
Ljuba N. Veselinova
The term suppletion is used to indicate the unpredictable encoding of otherwise regular semantic or grammatical relations. Standard examples in English include the present and past tense of the verb go, cf. go vs. went, or the comparative and superlative forms of adjectives such as good or bad, cf. good vs. better vs. best, or bad vs. worse vs. worst.
The complementary distribution of different forms to express a paradigmatic contrast was noticed already in early grammatical traditions. However, the idea that a special form would supply missing forms in a paradigm was first introduced by the neogrammarian Hermann Osthoff, in his work of 1899. The concept of suppletion was consolidated in modern linguistics by Leonard Bloomfield, in 1926. Since then, the notion has been applied to both affixes and stems. In addition to the application of the concept to linguistic units of varying morpho-syntactic status, such as affixes, or stems of different lexical classes such as, for instance, verbs, adjectives, or nouns, the student should also be prepared to encounter frequent discrepancies between uses of the concept in the theoretical literature and its application in more descriptively oriented work. There are models in which the term suppletion is restricted to exceptions to inflectional patterns only; consequently, exceptions to derivational patterns are not accepted as instantiations of the phenomenon. On such models, the comparative degrees of adjectives will be, at best, less prototypical examples of suppletion.
Treatments of the phenomenon vary widely, to the point of being complete opposites. A strong tendency exists to regard suppletion as an anomaly, a historical artifact, and generally of little theoretical interest. A countertendency is to view the phenomenon as challenging, but nonetheless very important for adequate theory formation. Finally, there are scholars who view suppletion as a functionally motivated result of language change.
For a long time, the database on suppletion, as for many other phenomena, was restricted to Indo-European languages. With the consolidation of wider cross-linguistic research and linguistic typology since the 1990s, the database on suppletion has been substantially extended. Large-scale cross-linguistic studies have shown that the phenomenon is observed in many different languages around the globe. In addition, it appears to be a systematic cross-linguistic phenomenon in that it can be correlated with well-defined language areas, language families, specific lexemic groups, and specific slots in paradigms. The latter can be shown to follow general markedness universals. Finally, the lexemes that show suppletion tend to have special functions in both lexicon and grammar.
Ur Shlonsky and Giuliano Bocci
Syntactic cartography emerged in the 1990s as a result of the growing consensus in the field about the central role played by functional elements and by morphosyntactic features in syntax. The declared aim of this research direction is to draw maps of the structures of syntactic constituents, characterize their functional structure, and study the array and hierarchy of syntactically relevant features. Syntactic cartography has made significant empirical discoveries, and its methodology has been very influential in research in comparative syntax and morphosyntax. A central theme in current cartographic research concerns the source of the emerging featural/structural hierarchies. The idea that the functional hierarchy is not a primitive of Universal Grammar but derives from other principles does not undermine the scientific relevance of the study of the cartographic structures. On the contrary, the cartographic research aims at providing empirical evidence that may help answer these questions about the source of the hierarchy and shed light on how the computational principles and requirements of the interface with sound and meaning interact.
Syntactic features are formal properties of syntactic objects which determine how they behave with respect to syntactic constraints and operations (such as selection, licensing, agreement, and movement). Syntactic features can be contrasted with properties which are purely phonological, morphological, or semantic, but many features are relevant both to syntax and morphology, or to syntax and semantics, or to all three components.
The formal theory of syntactic features builds on the theory of phonological features, and normally takes morphosyntactic features (those expressed in morphology) to be the central case, with other, possibly more abstract features being modeled on the morphosyntactic ones.
Many aspects of the formal nature of syntactic features are currently unresolved. Some traditions (such as HPSG) make use of rich feature structures as an analytic tool, while others (such as Minimalism) pursue simplicity in feature structures in the interest of descriptive restrictiveness. Nevertheless, features are essential to all explicit analyses.
Heidi Harley and Shigeru Miyagawa
Ditransitive predicates select for two internal arguments, and hence minimally entail the participation of three entities in the event described by the verb. Canonical ditransitive verbs include give, show, and teach; in each case, the verb requires an agent (a giver, shower, or teacher, respectively), a theme (the thing given, shown, or taught), and a goal (the recipient, viewer, or student). The property of requiring two internal arguments makes ditransitive verbs syntactically unique. Selection in generative grammar is often modeled as syntactic sisterhood, so ditransitive verbs immediately raise the question of whether a verb may have two sisters, requiring a ternary-branching structure, or whether one of the two internal arguments is not in a sisterhood relation with the verb.
Another important property of English ditransitive constructions is the two syntactic structures associated with them. In the so-called “double object construction,” or DOC, the goal and theme both are simple NPs and appear following the verb in the order V-goal-theme. In the “dative construction,” the goal is a PP rather than an NP and follows the theme in the order V-theme-to goal. Many ditransitive verbs allow both structures (e.g., give John a book/give a book to John). Some verbs are restricted to appear only in one or the other (e.g. demonstrate a technique to the class/*demonstrate the class a technique; cost John $20/*cost $20 to John). For verbs which allow both structures, there can be slightly different interpretations available for each. Crosslinguistic results reveal that the underlying structural distinctions and their interpretive correlates are pervasive, even in the face of significant surface differences between languages. The detailed analysis of these questions has led to considerable progress in generative syntax. For example, the discovery of the hierarchical relationship between the first and second arguments of a ditransitive has been key in motivating the adoption of binary branching and the vP hypothesis. Many outstanding questions remain, however, and the syntactic encoding of ditransitivity continues to inform the development of grammatical theory.
Sónia Frota and Marina Vigário
The syntax–phonology interface refers to the way syntax and phonology are interconnected. Although syntax and phonology constitute different language domains, it seems undisputed that they relate to each other in nontrivial ways. There are different theories about the syntax–phonology interface. They differ in how far each domain is seen as relevant to generalizations in the other domain, and in the types of information from each domain that are available to the other.
Some theories see the interface as unlimited in the direction and types of syntax–phonology connections, with syntax impacting on phonology and phonology impacting on syntax. Other theories constrain mutual interaction to a set of specific syntactic phenomena (i.e., discourse-related) that may be influenced by a limited set of phonological phenomena (namely, heaviness and rhythm). In most theories, there is an asymmetrical relationship: specific types of syntactic information are available to phonology, whereas syntax is phonology-free.
The role that syntax plays in phonology, as well as the types of syntactic information that are relevant to phonology, is also a matter of debate. At one extreme, Direct Reference Theories claim that phonological phenomena, such as external sandhi processes, refer directly to syntactic information. However, approaches arguing for a direct influence of syntax differ on the types of syntactic information needed to account for phonological phenomena, from syntactic heads and structural configurations (like c-command and government) to feature checking relationships and phase units. The precise syntactic information that is relevant to phonology may depend on (the particular version of) the theory of syntax assumed to account for syntax–phonology mapping. At the other extreme, Prosodic Hierarchy Theories propose that syntactic and phonological representations are fundamentally distinct and that the output of the syntax–phonology interface is prosodic structure. Under this view, phonological phenomena refer to the phonological domains defined in prosodic structure. The structure of phonological domains is built from the interaction of a limited set of syntactic information with phonological principles related to constituent size, weight, and eurhythmic effects, among others. The kind of syntactic information used in the computation of prosodic structure distinguishes between different Prosodic Hierarchy Theories: the relation-based approach makes reference to notions like head-complement, modifier-head relations, and syntactic branching, while the end-based approach focuses on edges of syntactic heads and maximal projections. Common to both approaches is the distinction between lexical and functional categories, with the latter being invisible to the syntax–phonology mapping. Besides accounting for external sandhi phenomena, prosodic structure interacts with other phonological representations, such as metrical structure and intonational structure.
As shown by the theoretical diversity, the study of the syntax–phonology interface raises many fundamental questions. A systematic comparison among proposals with reference to empirical evidence is lacking. In addition, findings from language acquisition and development and language processing constitute novel sources of evidence that need to be taken into account. The syntax–phonology interface thus remains a challenging research field in the years to come.
Erich R. Round
The non–Pama-Nyungan, Tangkic languages were spoken until recently in the southern Gulf of Carpentaria, Australia. The most extensively documented are Lardil, Kayardild, and Yukulta. Their phonology is notable for its opaque, word-final deletion rules and extensive word-internal sandhi processes. The morphology contains complex relationships between sets of forms and sets of functions, due in part to major historical refunctionalizations, which have converted case markers into markers of tense and complementization and verbal suffixes into case markers. Syntactic constituency is often marked by inflectional concord, resulting frequently in affix stacking. Yukulta in particular possesses a rich set of inflection-marking possibilities for core arguments, including detransitivized configurations and an inverse system. These relate in interesting ways historically to argument marking in Lardil and Kayardild. Subordinate clauses are marked for tense across most constituents other than the subject, and such tense marking is also found in main clauses in Lardil and Kayardild, which have lost the agreement and tense-marking second-position clitic of Yukulta. Under specific conditions of co-reference between matrix and subordinate arguments, and under certain discourse conditions, clauses may be marked, on all or almost all words, by complementization markers, in addition to inflection for case and tense.
This article introduces two phenomena that are studied within the domain of templatic morphology: clippings and word-and-pattern morphology, where the latter is usually associated with Semitic morphology. In both cases, the words are of invariant shape, sharing a prosodic structure defined in terms of number of syllables. This prosodic template, being the core of the word structure, is often accompanied by one or more of the following properties: syllable structure, vocalic pattern, and an affix. The data in this article, drawn from different languages, display the various ways in which these structural properties are combined to determine the surface structure of the word. The invariant shape of Japanese clippings (e.g., suto ← sutoraiki ‘strike’) consists of a prosodic template alone, while that of English hypocoristics (e.g., Trudy ← Gertrude) consists of a prosodic template plus the suffix -i. The Arabic verb classes, such as class-I (e.g., sakan ‘to live’) and class-II (e.g., misek ‘to hold’), display a prosodic template plus a vocalic pattern, and the Hebrew verb class-III (e.g., hivdil ‘to distinguish’) displays a prosodic template, a vocalic pattern, and a prefix. Given these structural properties, the relation between a base and its derived form is expressed in terms of stem modification, which involves truncation (for the prosodic template) and melodic overwriting (for the vocalic pattern). The discussion in this article suggests that templatic morphology is not limited to a particular type of lexicon (core or periphery) but displays different degrees of restrictiveness.
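The two stem-modification operations just described, truncation to a prosodic template and melodic overwriting of a vocalic pattern, can be sketched as string operations. The segmentation below is deliberately naive (it counts plain vowel letters rather than real morae or syllables), so it is an illustrative assumption rather than a serious phonological analysis.

```python
# Naive sketch of templatic stem modification: truncation to a
# bisyllabic template (as in Japanese clippings) and melodic
# overwriting of a vocalic pattern onto a consonantal root
# (as in the Arabic verb classes). Vowel letters stand in,
# crudely, for morae/syllable nuclei.

VOWELS = set("aeiou")

def clip(word, units=2):
    """Truncate: keep the material up to the n-th vowel (crude (C)V count)."""
    out, count = "", 0
    for ch in word:
        out += ch
        if ch in VOWELS:
            count += 1
            if count == units:
                break
    return out

def overwrite(root, pattern, template="CVCVC"):
    """Melodic overwriting: fill C slots from the root, V slots from the pattern."""
    cs, vs = iter(root), iter(pattern)
    return "".join(next(cs) if slot == "C" else next(vs) for slot in template)

print(clip("sutoraiki"))       # 'suto'
print(overwrite("skn", "aa"))  # 'sakan'
print(overwrite("msk", "ie"))  # 'misek'
```

Run on the article’s own examples, clip reproduces suto from sutoraiki, and the CVCVC template with the vocalic patterns ‘aa’ and ‘ie’ yields sakan and misek.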
Distinctions of time are among the most common notions expressed in morphology cross-linguistically. But the inventories of distinctions marked in individual languages are also varied. Some languages have few if any morphological markers pertaining to time, while others have extensive sets. Certain categories do recur pervasively across languages, but even these can vary subtly or even substantially in their uses. And they may be optional or obligatory.
The grammar of time is traditionally divided into two domains: tense and aspect. Tense locates situations in time. Tense markers place them along a timeline with respect to some point of reference, a deictic center. The most common reference point is the moment of speech. Many languages have just three tense categories: past for situations before the time of speech, present for those overlapping with the moment of speech, and future for those subsequent to the moment of speech. But many languages have no morphological tense, some have just two categories, and some have many more. In some languages, morphological distinctions correspond fairly closely to identifiable times. There may, for example, be a today (hodiernal) past that contrasts with a yesterday (hesternal) past. In other languages, tense distinctions are more fluid. A recent past might be interpreted as ‘some time earlier today’ for a sentence meaning ‘I ate a banana’, but ‘within the last few months’ for a sentence meaning ‘I returned from Africa’. Languages also vary in the mobility of the deictic center. In some languages tense distinctions are systematically calibrated with respect to the moment of speaking. In others, the deictic center may shift. It may be established by the matrix clause in a complex sentence. Or it may be established by a larger topic of discussion. Tense is most often a verbal category, because verbs generally portray the most dynamic elements of a situation, but a number of languages distinguish tense on nouns as well.
Aspect characterizes the internal temporal structure of a situation. There may be different forms of a verb ‘eat’, for example, in sentences meaning ‘I ate lamb chops’, ‘I was eating lamb chops’, and ‘I used to eat lamb chops’, though all are past tense. They may pick out one phase of the situation, with different forms for ‘I began to eat’, ‘I was eating’, and ‘I ate it up’. They may make finer distinctions, with different forms for ‘I took a bite’, ‘I nibbled’, and ‘I kept eating’. Morphological aspect distinctions are usually marked on verbs, but in some languages they can be marked on nominals as well.
In some languages, there is a clear separation between the two: tense is expressed in one part of the morphology, and aspect in another. But often a single marker conveys both: a single suffix might mark both past tense and progressive aspect in a sentence meaning ‘I was eating’, for example. A tense distinction may be made only in a particular aspect, and/or a certain aspect distinction marked only in a particular tense. Like other areas of grammar, tense and aspect systems are constantly evolving. The meanings of markers can shift over time, as speakers apply them to new contexts, and as new markers enter the system, taking over some of their functions. Markers can shift for example from aspect to tense, or from derivation to inflection. The gradualness of such developments underlies the cross-linguistic differences we find in tense and aspect categories.
There is a rich literature on tense and aspect. As more is learned about the inventories of categories that exist in individual languages and the ways speakers deploy them, theoretical models continue to grow in sophistication.
The concept of Africa requires reflection: what does it mean to study a social phenomenon “in Africa”? Technology use in Africa is complex and diverse, showing various degrees of access across the continent (and in the Diaspora), and digital social inequalities, which are part and parcel of the political economy of communication, shape digital engagement. The rise of mobile phones, in particular, has enabled the emergence of technologically mediated literacies, text-messaging among them. Text-messaging is defined not only by a particular mode of communication (typically written on mobile phones, visual, digital) but also by the topics it favors (intimate, relational, sociable, ludic) and its ways of writing (short, non-standard texts that are creative as well as multilingual). The genre of text-messaging thus includes not only short message service (SMS) and (mobile) instant-messaging (which one might call prototypical one-to-one text messages), but also Twitter, an application that, like texting, favors brevity of expression and allows for one-to-many conversations. Access to Twitter is still limited for many Africans, but as ownership of smartphones is growing, so is Twitter use, and the African “Twittersphere” is emerging as an important pan-African space. At times, discussions are very local (as on Ghanaian Twitter), at other times regional (East African Twitter) or global (African Twitter and Black Twitter); all these are emic, folksonomic terms, assigned and discussed by users. Although former colonial languages, especially English, dominate in many prototypical text messages and on Twitter, the genre also provides important opportunities for writing in African languages. The choices made in the digital space echo the well-known debate between Chinua Achebe and Ngũgĩ wa Thiong’o: the Africanization of the former colonial languages versus writing in African languages.
In addition, digital writers engage in multilingual writing, combining diverse languages in one text, and thus offer new ways of writing locally as well as shaping a digitally mediated pan-African voice that draws on global strategies and local meanings.
Hearers and readers make inferences on the basis of what they hear or read. These inferences are partly determined by the linguistic form that the writer or speaker chooses to give to her utterance. The inferences can be about the states of affairs that the speaker or writer wants the hearer or reader to conclude are pertinent, or they can be about the attitude of the speaker or writer toward those states of affairs. The attention here goes to inferences of the first type. Research in semantics and pragmatics has isolated a number of linguistic phenomena that make specific contributions to the process of inference. Broadly, entailments of asserted material, presuppositions (e.g., those triggered by factive constructions), and invited inferences (especially scalar implicatures) can be distinguished.
While we make these inferences all the time, they have been studied only piecemeal in theoretical linguistics. When attempts are made to build natural language understanding systems, the need for a more systematic and comprehensive approach to the problem is felt. Some of the approaches developed in Natural Language Processing are based on linguistic insights, whereas others use methods that do not require (full) semantic analysis.
In this article, I give an overview of the main linguistic issues and of a variety of computational approaches, especially those stimulated by the Recognizing Textual Entailment (RTE) challenges first proposed in 2004.
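The contrast between linguistically informed systems and those that avoid full semantic analysis can be made concrete with the simplest kind of shallow method: a lexical-overlap baseline of the sort used in early RTE work. The sketch below is illustrative only; the 0.8 threshold is an assumption for the example, not a parameter reported by any particular system.

```python
# A minimal lexical-overlap baseline for textual entailment, in the spirit
# of shallow RTE systems that bypass full semantic analysis. The threshold
# is an illustrative assumption.

def word_overlap_entails(text, hypothesis, threshold=0.8):
    """Predict entailment if most hypothesis words also occur in the text."""
    text_words = set(text.lower().split())
    hyp_words = set(hypothesis.lower().split())
    overlap = len(hyp_words & text_words) / len(hyp_words)
    return overlap >= threshold

print(word_overlap_entails("a cat sat on the mat", "the cat sat"))  # True
print(word_overlap_entails("a cat sat on the mat", "the dog ran"))  # False
```

Such a baseline ignores negation, presupposition, and implicature entirely, which is precisely why linguistically grounded approaches remain necessary for the harder cases.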
In the linguistic literature, the term theme has several interpretations, one of which relates to discourse analysis and two others to sentence structure. In a more general (or global) sense, one may speak about the theme or topic (or topics) of a text (or discourse), that is, analyze relations going beyond the sentence boundary and try to identify some characteristic subject(s) for the text (discourse) as a whole. This analysis is mostly a matter of the domain of information retrieval and only partially takes into account linguistically based considerations. The main linguistically based usage of the term theme concerns relations within the sentence. Theme is understood to be one of the (syntactico-)semantic relations and is used as the label of one of the arguments of the verb; the whole network of these relations is called thematic relations or roles (or, in the terminology of Chomskyan generative theory, theta roles and theta grids). Alternatively, from the point of view of the communicative function of the language reflected in the information structure of the sentence, the theme (or topic) of a sentence is distinguished from the rest of it (rheme, or focus, as the case may be) and attention is paid to the semantic consequences of the dichotomy (especially in relation to presuppositions and negation) and its realization (morphological, syntactic, prosodic) in the surface shape of the sentence. In some approaches to morphosyntactic analysis, the term theme is also used to refer to the part of the word to which inflections are added, typically composed of the root and an added vowel.