American structuralism is a label attached to a heterogeneous but distinctive style of language scholarship practiced in the United States, the heyday of which extended from around 1920 until the late 1950s. There is certainly diversity in the interests and intellectual stances of American structuralists. Nevertheless, some minimum common denominators stand out. American structuralists valued synchronic linguistic analysis, independent of—but not to the exclusion of—study of a language’s development over time; they looked for, and tried to articulate, systematic patterns in language data, attending in particular to the sound properties of language and to morphophonology; they identified their work as part of a science of language, rather than as philology or as a facet of literary studies, anthropology, or the study of particular languages. Some American structuralists tried to establish the identity or difference of linguistic units by studying their distribution with respect to other units, rather than by relying on identity or difference of meaning. Some (but not all) American structuralists avoided cross-linguistic generalizations, perceiving them as a threat to the hard-won notion of the integrity of individual languages; some (but not all) avoided attributing patterns they discovered in particular languages to cultural or psychological proclivities of speakers. A considerable amount of American structuralist research focused on indigenous languages of the Americas. One outstanding shared achievement of the group was the institutionalization of linguistics as an autonomous discipline in the United States, materialized by the founding of the Linguistic Society of America in 1924.
This composite picture of American structuralists needs to be balanced by recognition of their diversity. One important distinction is among the goals and orientations of foundational figures: Franz Boas (1858–1942), Edward Sapir (1884–1939), and Leonard Bloomfield (1887–1949). The influence of Boas, Sapir, and Bloomfield was strongly felt by the next generation of language scholars, who went on to appropriate, expand, modify, or otherwise retouch their ideas to produce what is called post-Bloomfieldian linguistics. Post-Bloomfieldian linguistics displays its own internal diversity, but still has enough coherence to put into relief the work of other language scholars who were close contemporaries of the post-Bloomfieldians, but who in various ways and for various reasons departed from them. American structuralism has at least this much heterogeneity.
This article illustrates the character of American structuralism in the first half of the 20th century. Analysis of a corpus of presidential addresses presented to the Linguistic Society of America by key American structuralists grounds the discussion, and provides a microcosm within which to observe some of its most salient features: both the shared preoccupations of American structuralists and evidence of the contributions of individual scholars to a significant collaborative project in the history of linguistics.
Analogy is traditionally regarded as one of the three main factors responsible for language change, along with sound change and borrowing. Whereas sound change is understood to be phonetically motivated and blind to structural patterns and semantic and functional relationships, analogy is licensed precisely by those patterns and relationships. In the Neogrammarian tradition, analogical change is regarded, at least largely, as a by-product of the normal operation (acquisition, representation, and use) of the mental grammar. Historical linguists commonly use proportional equations of the form A : B = C : X to represent analogical innovations, where A, B, and C are (sets of) word forms known to the innovator, who solves for X by discerning a formal relationship between A and B and then deductively arriving at a form that is related to C in the same way that B is related to A.
Along with the core type of analogical change captured by proportional equations, most historical linguists include a number of other phenomena under the analogy umbrella. Some of these, such as paradigm leveling—the reduction or elimination of stem alternations in paradigms—are arguably largely proportional, but others such as contamination and folk etymology seem to have less to do with the normal operation of the mental grammar and instead involve some kind of interference among the mental representations of phonetically or semantically similar forms.
The Neogrammarian approach to analogical change has been criticized and challenged on a variety of grounds, and a number of important scholars use the term “analogy” in a rather different sense, to refer to the role that phonological and/or semantic similarity play in the influence that forms exert on each other.
Alan Reed Libert
Artificial languages—languages which have been consciously designed—have been created for more than 900 years, although the number of them has increased considerably in recent decades, and by the early 21st century the total figure probably was in the thousands. There have been several goals behind their creation; the traditional one (which applies to some of the best-known artificial languages, including Esperanto) is to make international communication easier. Some other well-known artificial languages, such as Klingon, have been designed in connection with works of fiction. Still others are simply personal projects.
A traditional way of classifying artificial languages involves the extent to which they make use of material from natural languages. Those artificial languages which are created mainly by taking material from one or more natural languages are called a posteriori languages (which again include well-known languages such as Esperanto), while those which do not use natural languages as sources are a priori languages (although many a posteriori languages have a limited amount of a priori material, and some a priori languages have a small number of a posteriori components). Between these two extremes are the mixed languages, which have large amounts of both a priori and a posteriori material. Artificial languages can also be classified typologically (as natural languages are) and by how and how much they have been used.
Many linguists seem to be biased against research on artificial languages, although some major linguists of the past have been interested in them.
Blocking can be defined as the non-occurrence of a linguistic form whose existence could be expected on general grounds, due to the existence of a rival form. *Oxes, for example, is blocked by oxen, *stealer by thief. Although blocking is closely associated with morphology, the competing “forms” can in reality be not only morphemes or words but also syntactic units. In German, for example, the compound Rotwein ‘red wine’ blocks the phrasal unit *roter Wein (in the relevant sense), just as the phrasal unit rote Rübe ‘beetroot; lit. red beet’ blocks the compound *Rotrübe. In these examples, one crucial factor determining blocking is synonymy; speakers apparently have a deep-rooted presumption against synonyms. Whether homonymy can also lead to a similar avoidance strategy is still controversial. But even if homonymy blocking exists, it is certainly much less systematic than synonymy blocking.
In all the examples mentioned above, it is a word stored in the mental lexicon that blocks a rival formation. However, besides such cases of lexical blocking, one can observe blocking among productive patterns. Dutch has three suffixes for deriving agent nouns from verbal bases, -er, -der, and -aar. Of these three suffixes, the first one is the default choice, while -der and -aar are chosen in very specific phonological environments: as Geert Booij describes in The Morphology of Dutch (2002), “the suffix -aar occurs after stems ending in a coronal sonorant consonant preceded by schwa, and -der occurs after stems ending in /r/” (p. 122). In contrast to lexical blocking, the effect of this kind of pattern blocking does not depend on words stored in the mental lexicon and their token frequency but on abstract features (in the case at hand, phonological features).
Blocking was first recognized by the Indian grammarian Pāṇini in the 5th or 4th century BCE.
Toon Van Hal
The Early Modern interest taken in language was intense and versatile. In this period, language education gradually no longer centered solely on Latin. The linguistic scope widened considerably, partly as a result of scholarly curiosity, although religious and missionary zeal, commercial considerations, and political motives were also of decisive significance. Statesmen discovered the political power of standardized vernaculars in the typically Early Modern process of state formation. The widening of the linguistic horizon was, first and foremost, reflected in a steadily increasing production of grammars and dictionaries, along with pocket textbooks, conversational manuals, and spelling treatises. One strategy of coping with the stunning linguistic diversity consisted of first collecting data on as many languages as possible and then tracing elements that were common to all or to certain groups of languages. Language comparison was not limited to historical and genealogical endeavors, as scholars started also to compare a number of languages in terms of their alleged vices and qualities. Another way of dealing with the flood of linguistic data consisted of focusing on what the different languages had in common, which led to the development of general grammars, of which the 17th-century Port-Royal grammar is the most well-known. During the Enlightenment, the nature of language and its cognitive merits or vices also became a central theme in philosophical debates in which major thinkers were actively engaged.
The differentiation of English into separate varieties in the regions of Britain and Ireland has a long history. This is connected with the separate but related identities of England, Wales, Scotland, and Ireland. In this chapter the main linguistic traits of the regions are described and discussed within the framework of language variation and change, an approach to linguistic differentiation that attempts to identify patterns of speaker social behavior and trajectories along which varieties develop.
The section on England is subdivided into rural and urban forms of English, the former associated with the broad regions of the North, the Midlands, East Anglia, the Southeast and South, and the West Country. For urban varieties, English in the cities of London, Norwich, Milton Keynes, Bristol, Liverpool, and Newcastle upon Tyne is discussed in the light of the available data and existing scholarship. English in the Celtic regions of Britain and Ireland is examined in dedicated sections on Scotland, Wales, and Ireland. Finally, the focus turns to varieties of English found on the smaller islands around Britain, i.e., English on the Orkney and Shetland islands, the Isle of Man, and the Channel Islands.
John E. Joseph
Ferdinand de Saussure (1857–1913), the founding figure of modern linguistics, made his mark on the field with a book he published a month after his 21st birthday, in which he proposed a radical rethinking of the original system of vowels in Proto-Indo-European. A year later, he submitted his doctoral thesis on a morpho-syntactic topic, the genitive absolute in Sanskrit, to the University of Leipzig. He went to Paris intending to do a second, French doctorate, but instead he was given responsibility for courses on Gothic and Old High German at the École Pratique des Hautes Études, and for managing the publications of the Société de Linguistique de Paris. He abandoned more than one large publication project of his own during the decade he spent in Paris. In 1891 he returned to his native Geneva, where the University created a chair in Sanskrit and the history and comparison of languages for him. He produced some significant work on Lithuanian during this period, connected to his early book on the Indo-European vowel system, and yielding Saussure’s Law, concerning the placement of stress in Lithuanian. He undertook writing projects about the general nature of language, but again abandoned them. In 1907, 1908–1909, and 1910–1911, he gave three courses in general linguistics at the University of Geneva, in which he developed an approach to languages as systems of signs, each sign consisting of a signifier (sound pattern) and a signified (concept), both of them mental rather than physical in nature, and conjoined arbitrarily and inseparably. The socially shared language system, or langue, makes possible the production and comprehension of parole, utterances, by individual speakers and hearers. Each signifier and signified is a value generated by its difference from all the other signifiers or signifieds with which it coexists on an associative (or paradigmatic) axis, and affected as well by its syntagmatic axis.
Shortly after Saussure’s death at 55, two of his colleagues, Charles Bally and Albert Sechehaye, gathered together students’ notes from the three courses, as well as manuscript notes by Saussure, and from them constructed the Cours de linguistique générale, published in 1916. Over the course of the next several decades, this book became the basis for the structuralist approach, initially within linguistics, and later adapted to other fields. Saussure left behind a large quantity of manuscript material that has gradually been published over the last few decades, and continues to be published, shedding new light on his thought.
The German sinologist and general linguist Georg von der Gabelentz (1840–1893) occupies an interesting place at the intersection of several streams of linguistic scholarship at the end of the 19th century. As Professor of East Asian languages at the University of Leipzig from 1878 to 1889 and then Professor for Sinology and General Linguistics at the University of Berlin from 1889 until his death, Gabelentz was present at some of the main centers of linguistics at the time. He was, however, generally critical of mainstream historical-comparative linguistics as propagated by the neogrammarians, and instead emphasized approaches to language inspired by a line of researchers including Wilhelm von Humboldt (1767–1835), H. Steinthal (1823–1899), and his own father, Hans Conon von der Gabelentz (1807–1874).
Today Gabelentz is chiefly remembered for several theoretical and methodological innovations which continue to play a role in linguistics. Most significant among these are his contributions to cross-linguistic syntactic comparison and typology, grammar-writing, and grammaticalization. His earliest linguistic work emphasized the importance of syntax as a core part of grammar and sought to establish a framework for the cross-linguistic description of word order, as had already been attempted for morphology by other scholars. The importance he attached to syntax was motivated by his engagement with Classical Chinese, a language almost devoid of morphology and highly reliant on syntax. In describing this language in his 1881 Chinesische Grammatik, Gabelentz elaborated and implemented the complementary “analytic” and “synthetic” systems of grammar, an approach to grammar-writing that continues to serve as a point of reference up to the present day. In his summary of contemporary thought on the nature of grammatical change in language, he became one of the first linguists to formulate the principles of grammaticalization in essentially the form that this phenomenon is studied today, although he did not use the current term. One key term of modern linguistics that he did employ, however, is “typology,” a term that he in fact coined. Gabelentz’s typology was a development on various contemporary strands of thought, including his own comparative syntax, and is widely acknowledged as a direct precursor of the present-day field.
Gabelentz is a significant transitional figure from the 19th to the 20th century. On the one hand, his work seems very modern. Beyond his contributions to grammaticalization avant la lettre and his christening of typology, his conception of language prefigures the structuralist revolution of the early 20th century in important respects. On the other hand, he continues to entertain several preoccupations of the 19th century—in particular the judgment of the relative value of different languages—which were progressively banished from linguistics in the first decades of the 20th century.
The term “philosophy of language” is intrinsically paradoxical: it denominates the main philosophical current of the 20th century but is devoid of any univocal definition. While the emergence of this current was based on the idea that philosophical questions were only language problems that could be elucidated through a logico-linguistic analysis, the interest in this approach gave rise to philosophical theories that, although some of them share points of convergence, developed very different philosophical conceptions. The only constant in all these theories is the recognition that this current of thought originated in the work of Gottlob Frege (1848–1925), thus marking what was to be called “the linguistic turn.” Despite the theoretical diversity within the philosophy of language, the history of this current can nevertheless be traced in four stages:
The first one began in 1892 with Frege’s paper “Über Sinn und Bedeutung” and aimed to clarify language by using the rules of logic. The Fregean principle underpinning this program was that we must banish psychological considerations from linguistic analysis in order to avoid associating the meaning of words with mental pictures or states. The work of Frege, Bertrand Russell (1872–1970), G. E. Moore (1873–1958), the Wittgenstein of the Tractatus Logico-Philosophicus (1921), Rudolf Carnap (1891–1970), and Willard Van Orman Quine (1908–2000) is representative of this period. From this logicist point of view, the questions raised mainly concerned syntax and semantics, since the goal was to define a formalism able to represent the structure of propositions and to explain how language can describe the world by mirroring it. The problem specific to this period was therefore the function of representing the world by language, thus placing at the heart of the philosophical debate the notions of reference, meaning, and truth.
The second phase of the philosophy of language was adumbrated in the 1930s with the courses given by Wittgenstein (1889–1951) in Cambridge (The Blue and Brown Books), but it did not really take off until 1950–1960 with the work of Peter Strawson (1919–2006), the later Wittgenstein (Philosophical Investigations, 1953), John Austin (1911–1960), and John Searle (1932–). In spite of the very different approaches developed by these theorists, the two main ideas that characterized this period were: one, that only the examination of natural (also called “ordinary”) language can give access to an understanding of how language functions, and two, that the specificity of this language resides in its ability to perform actions. It was therefore no longer a question of analyzing language in logical terms, but rather of considering it in itself, by examining the meaning of statements as they are used in given contexts. In this perspective, the pivotal concepts explored by philosophers became those of (situated) meaning, felicity conditions, use, and context.
The beginning of the 1970s initiated the third phase of this movement by orienting research toward two quite distinct directions. The first, resulting from the work on proper names, natural-kind words, and indexicals undertaken by the logician philosophers Saul Kripke (1940–), David Lewis (1941–2001), Hilary Putnam (1926–2016), and David Kaplan (1933–), brought credibility to the semantics of possible worlds. The second, conducted by Paul Grice (1913–1988) on human communicational rationality, harked back to the psychologism dismissed by Frege and conceived of the functioning of language as highly dependent on a theory of mind. The focus was then put on the inferences that the different protagonists in a linguistic exchange construct from the recognition of hidden intentions in the discourse of others. In this perspective, the concepts of implicitness, relevance, and cognitive efficiency became central and required involving a greater number of contextual parameters to account for them. In the wake of this research, many theorists turned to the philosophy of mind as evidenced in the late 1980s by the work on relevance by Dan Sperber (1942–) and Deirdre Wilson (1941–).
The contemporary period, marked by the thinking of Robert Brandom (1950–) and Charles Travis (1943–), is characterized by an orientation toward radical contextualism and a return of inferentialism that draws strongly on Frege. Within these theoretical frameworks, the notions of truth and reference no longer fall within the field of semantics but rather of pragmatics. The emphasis is placed on the commitment that speakers make when they speak, as well as on their responsibility with respect to their utterances.
Silvio Moreira de Sousa, Johannes Mücke, and Philipp Krämer
As an institutionalized subfield of academic research, Creole studies (or Creolistics) emerged in the second half of the 20th century on the basis of pioneering works in the last decades of the 19th century and first half of the 20th century. Yet its research traditions—just like the Creole languages themselves—are much older and are deeply intertwined with the history of European colonialism, slavery, and Christian missionary activities all around the globe. Throughout the history of research, creolists focused on the emergence of Creole languages and their grammatical structures—often in comparison to European colonial languages. In connection with the observations in grammar and history, creolists discussed theoretical matters such as the role of language acquisition in creolization, the status of Creoles among the other languages in the world, and the social conditions in which they are or were spoken. These discussions molded the way in which the acquired knowledge was transmitted to the following generations of creolists.
The grammatization of European vernacular languages began in the Late Middle Ages and Renaissance and continued up until the end of the 18th century. Through this process, grammars were written for the vernaculars and, as a result, the vernaculars were able to establish themselves in important areas of communication. Vernacular grammars largely followed the example of those written for Latin, using Latin descriptive categories without fully adapting them to the vernaculars. In accord with the Greco-Latin tradition, the grammars typically contain sections on orthography, prosody, morphology, and syntax, with the most space devoted to the treatment of word classes in the section on “etymology.” The earliest grammars of vernaculars had two main goals: on the one hand, making the languages described accessible to non-native speakers, and on the other, supporting the learning of Latin grammar by teaching the grammar of speakers’ native languages. Initially, it was considered unnecessary to engage with the grammar of native languages for their own sake, since they were thought to be acquired spontaneously. Only gradually did a need for normative grammars develop which sought to codify languages. This development relied on an awareness of the value of vernaculars that attributed a certain degree of perfection to them. Grammars of indigenous languages in colonized areas were based on those of European languages and today offer information about the early state of those languages, and are indeed sometimes the only sources for now extinct languages. Grammars of vernaculars came into being in the contrasting contexts of general grammar and the grammars of individual languages, between grammar as science and as art and between description and standardization. 
In the standardization of languages, the guiding principle could either be that of anomaly, which took a particular variety of a language as the basis of the description, or that of analogy, which permitted interventions into a language aimed at making it more uniform.
Ans van Kemenade
The status of English in the early 21st century makes it hard to imagine that the language started out as an assortment of North Sea Germanic dialects spoken in parts of England only by immigrants from the continent. Itself soon under threat, first from the language(s) spoken by Viking invaders, then from French as spoken by the Norman conquerors, English continued to thrive as an essentially West-Germanic language that did, however, undergo some profound changes resulting from contact with Scandinavian and French. A further decisive period of change is the late Middle Ages, which started a tremendous societal scale-up that triggered pervasive multilingualism. These repeated layers of contact between different populations, first locally, then nationally, followed by standardization and 18th-century codification, metamorphosed English into a language closely related to, yet quite distinct from, its closest relatives Dutch and German in nearly all language domains, not least in word order, grammar, and pronunciation.
Ever since the fundamental studies carried out by the great German Romanist Max Leopold Wagner (1880–1962), the acknowledged founder of scientific research on Sardinian, the lexicon has been, and still is, one of the most investigated and best-known areas of the Sardinian language.
Several substrate components stand out in the Sardinian lexicon around a fundamental layer which has a clear Latin lexical background. The so-called Paleo-Sardinian layer is particularly intriguing. This is a conventional label for the linguistic varieties spoken in the prehistoric and protohistoric ages in Sardinia. Indeed, the relatively large number of words (toponyms in particular) which can be traced back to this substrate clearly distinguishes the Sardinian lexicon within the panorama of the Romance languages. As for the other Pre-Latin substrata, the Phoenician-Punic presence mainly (although not exclusively) affected southern and western Sardinia, where we find the highest concentration of Phoenician-Punic loanwords.
On the other hand, recent studies have shown that the Latinization of Sardinia was more complex than once thought. In particular, the alleged archaic nature of some features of Sardinian has been questioned.
Moreover, research carried out in recent decades has underlined the importance of the Greek Byzantine superstrate, which has actually left far more evident lexical traces than previously thought. Finally, from the late Middle Ages onward, the contributions from the early Italian, Catalan, and Spanish superstrates, as well as from modern and contemporary Italian, have substantially reshaped the modern-day profile of the Sardinian lexicon. In these cases too, more recent research has shown a deeper impact of these components on the Sardinian lexicon, especially as regards the influence of Italian.
Irit Meir and Oksana Tkachman
Iconicity is a relationship of resemblance or similarity between the two aspects of a sign: its form and its meaning. An iconic sign is one whose form resembles its meaning in some way. The opposite of iconicity is arbitrariness. In an arbitrary sign, the association between form and meaning is based solely on convention; there is nothing in the form of the sign that resembles aspects of its meaning. The Hindu-Arabic numerals 1, 2, 3 are arbitrary, because their current form does not correspond to any aspect of their meaning. In contrast, the Roman numerals I, II, III are iconic, because the number of occurrences of the sign I correlates with the quantity that the numerals represent. Because iconicity has to do with the properties of signs in general and not only those of linguistic signs, it plays an important role in the field of semiotics—the study of signs and signaling. However, language is the most pervasive symbolic communicative system used by humans, and the notion of iconicity plays an important role in characterizing the linguistic sign and linguistic systems. Iconicity is also central to the study of literary uses of language, such as prose and poetry.
There are various types of iconicity, since the form of a sign may resemble aspects of its meaning in several ways: it may create a mental image of the concept (imagic iconicity), or its structure and the arrangement of its elements may resemble the structural relationship between components of the concept represented (diagrammatic iconicity). An example of the first type is the word cuckoo, whose sounds resemble the call of the bird, or a sign such as RABBIT in Israeli Sign Language, whose form—the hands representing the rabbit's long ears—resembles a visual property of that animal. An example of diagrammatic iconicity is vēnī, vīdī, vīcī, where the order of clauses in a discourse is understood as reflecting the sequence of events in the world.
Iconicity is found on all linguistic levels: phonology, morphology, syntax, semantics, and discourse. It is found both in spoken languages and in sign languages. However, sign languages, because of the visual-gestural modality through which they are transmitted, are much richer in iconic devices, and therefore offer a rich array of topics and perspectives for investigating iconicity, and the interaction between iconicity and language structure.
During the period from the fall of the Roman empire in the late 5th century to the beginning of the European Renaissance in the 14th century, the development of linguistic thought in Europe was characterized by the enthusiastic study of grammatical works by Classical and Late Antique authors, as well as by the adaptation of these works to suit a Christian framework. The discipline of grammatica, viewed as the cornerstone of the ideal liberal arts education and as a key to the wider realm of textual culture, was understood to encompass both the systematic principles for speaking and writing correctly and the science of interpreting the poets and other writers. The writings of Donatus and Priscian were among the most popular and well-known works of the grammatical curriculum, and were the subject of numerous commentaries throughout the medieval period. Although Latin persisted as the predominant medium of grammatical discourse, there is also evidence from as early as the 8th century for the enthusiastic study of vernacular languages and for the composition of vernacular-medium grammars, including sources pertaining to Anglo-Saxon, Irish, Old Norse, and Welsh. The study of language in the later medieval period is marked by experimentation with the form and layout of grammatical texts, including the composition of textbooks in verse form. This period also saw a renewed interest in the application of philosophical ideas to grammar, inspired in part by the availability of a wider corpus of Greek sources than had previously been known to western European scholars, such as Aristotle’s Physics, Metaphysics, Ethics, and De Anima.
A further consequence of the renewed interest in the logical and metaphysical works of Aristotle during the later Middle Ages is the composition of so-called ‘speculative grammars’ written by scholars commonly referred to as the ‘Modistae’, in which the grammatical description of Latin formulated by Priscian and Donatus was integrated with the system of scholastic philosophy that was at its height from the beginning of the 13th to the middle of the 14th century.
Traditional Chinese linguistics grew out of two distinct interests in language: the philosophical reflection on things and their names, and the practical concern for literacy education and the correct understanding of classical works. The former is most typically found in the teachings of such pre-Qin masters as Confucius, Mozi, and Gongsun Long, who lived between the 6th and 3rd centuries BCE.
The picture just presented, in which Chinese philosophy and philology are combined to form a seemingly autonomous tradition, is complicated, however, by the fact that the Indic linguistic tradition started to influence the Chinese tradition in the 2nd century CE.
Chinese, with its linguistic tradition, had a profound impact on ancient East Asia. Traditional studies of Japanese, Tangut, and other languages show significant Chinese influence, not least in the invention of the earliest writing systems for these languages. Moreover, many scholars from Japan and Korea took an active part in the study of Chinese itself, so that the Chinese linguistic tradition would itself be incomplete without the materials and findings these non-Chinese scholars contributed. On the other hand, some of these scholars, most notably Motoori Norinaga and Fujitani Nariakira in Japan, were able to free themselves from the character-centered Chinese approach and to develop rather original linguistic theories.
Indian linguistic thought begins around the 8th–6th centuries BCE.
The greater part of documented thought is related to Sanskrit (Ancient Indo-Aryan). Very early, the oral transmission of sacred texts—the Vedas, composed in Vedic Sanskrit—made it necessary to develop techniques based on a subtle analysis of language. The Vedas also—but presumably later—gave birth to bodies of knowledge dealing with language, which are traditionally called Vedāṅgas: phonetics (śikṣā), metrics (chandas), grammar (vyākaraṇa), and semantic explanation (nirvacana, nirukta). Later on, Vedic exegesis (mīmāṃsā), new dialectics (navya-nyāya), lexicography, and poetics (alaṃkāra) also contributed to linguistic thought.
Though languages other than Sanskrit were described in premodern India, the grammatical description of Sanskrit—given in Sanskrit—dominated and influenced them more or less strongly. Sanskrit grammar (vyākaraṇa) has a long history marked by several major steps (Padapāṭha versions of Vedic texts, Aṣṭādhyāyī of Pāṇini, Mahābhāṣya of Patañjali, Bhartṛhari’s works, Siddhāntakaumudī of Bhaṭṭoji Dīkṣita, Nāgeśa’s works), and the main topics it addresses (minimal meaning-bearer units, classes of words, relation between word and meaning/referent, the primary meaning/referent of nouns) are still central issues for contemporary linguistics.
Missionary dictionaries are printed books or manuscripts compiled by missionaries in which words are listed systematically, followed by words with the same meaning in another language. These dictionaries were mainly written as tools for language teaching and learning in a missionary-colonial setting, although quite a few also have a more encyclopedic character, containing invaluable information on non-Western cultures from all continents. In this article, several types of dictionaries are analyzed: bilingual-monodirectional, bilingual-bidirectional, and multilingual. Most examples are taken from an illustrative selected corpus of missionary dictionaries describing non-Western languages during the colonial period, with particular focus on the function of these dictionaries in a missionary context, their users, macrostructure, organizational principles, and the typology of the microstructure and markedness in lemmatization.
Missionary grammars are printed books or manuscripts compiled by missionaries in which a particular language is described. These grammars were mainly written as pedagogical tools for language teaching and learning in a missionary-colonial setting, although quite a few also have a more normative character. Missionary grammars usually contain an opening section, a prologue, in which the author sets out the objectives of the work. The first part is usually a short introduction to phonology and orthography, followed by the largest section, which is devoted to morphology, arranged according to the traditional division of the parts of speech. The final section is sometimes devoted to syntax, but the topics included can vary considerably. Sometimes word lists are appended, containing body parts, measures, counting, manners of speaking, or rhetorical figures. In some grammars the data presented are mainly based on an oral corpus, whereas in others high registers from prestigious texts are used, illustrating the eloquence or elegance of the language under study. These grammars are modeled on the traditional Greco-Latin framework and often contain invaluable information regarding language typology, semantics, and pragmatics. In the New World, Asia, and elsewhere, missionaries had to find an adequate methodology to describe typological features they had never encountered before. They adapted European models to new linguistic realities and created original works that deserve attention within the discipline of the history of linguistics alongside contemporary pedagogical works written in Europe. This article concentrates on sources written in Spanish, Portuguese, and Latin during the colonial period, since these sources outnumber missionary grammars produced in other languages.
Computational models of human sentence comprehension help researchers reason about how grammar might actually be used in the understanding process. Taking a cognitivist approach, this article relates computational psycholinguistics to neighboring fields (such as linguistics), surveys important precedents, and catalogs open problems.