1-20 of 29 Results for: History of Linguistics

Article

Margaret Thomas

American structuralism is a label attached to a heterogeneous but distinctive style of language scholarship practiced in the United States, the heyday of which extended from around 1920 until the late 1950s. There is certainly diversity in the interests and intellectual stances of American structuralists. Nevertheless, some minimum common denominators stand out. American structuralists valued synchronic linguistic analysis, independent of—but not to the exclusion of—study of a language’s development over time; they looked for, and tried to articulate, systematic patterns in language data, attending in particular to the sound properties of language and to morphophonology; they identified their work as part of a science of language, rather than as philology or as a facet of literary studies, anthropology, or the study of particular languages. Some American structuralists tried to establish the identity or difference of linguistic units by studying their distribution with respect to other units, rather than by relying on identity or difference of meaning. Some (but not all) American structuralists avoided cross-linguistic generalizations, perceiving them as a threat to the hard-won notion of the integrity of individual languages; some (but not all) avoided attributing patterns they discovered in particular languages to cultural or psychological proclivities of speakers. A considerable amount of American structuralist research focused on indigenous languages of the Americas. One outstanding shared achievement of the group was the institutionalization of linguistics as an autonomous discipline in the United States, materialized by the founding of the Linguistic Society of America in 1924. This composite picture of American structuralists needs to be balanced by recognition of their diversity. One important distinction is between the goals and orientations of foundational figures: Franz Boas (1858–1942), Edward Sapir (1884–1939), and Leonard Bloomfield (1887–1949). The influence of Boas, Sapir, and Bloomfield was strongly felt by the next generation of language scholars, who went on to appropriate, expand, modify, or otherwise retouch their ideas to produce what is called post-Bloomfieldian linguistics. Post-Bloomfieldian linguistics displays its own internal diversity, but still has enough coherence to put into relief the work of other language scholars who were close contemporaries to the post-Bloomfieldians, but who in various ways and for various reasons departed from them. American structuralism has at least this much heterogeneity. This article illustrates the character of American structuralism in the first half of the 20th century. Analysis of a corpus of presidential addresses presented to the Linguistic Society of America by key American structuralists grounds the discussion, and provides a microcosm within which to observe some of its most salient features: both the shared preoccupations of American structuralists and evidence of the contributions of individual scholars to a significant collaborative project in the history of linguistics.

Article

David Fertig

Analogy is traditionally regarded as one of the three main factors responsible for language change, along with sound change and borrowing. Whereas sound change is understood to be phonetically motivated and blind to structural patterns and semantic and functional relationships, analogy is licensed precisely by those patterns and relationships. In the Neogrammarian tradition, analogical change is regarded, at least largely, as a by-product of the normal operation (acquisition, representation, and use) of the mental grammar. Historical linguists commonly use proportional equations of the form A : B = C : X to represent analogical innovations, where A, B, and C are (sets of) word forms known to the innovator, who solves for X by discerning a formal relationship between A and B and then deductively arriving at a form that is related to C in the same way that B is related to A. Along with the core type of analogical change captured by proportional equations, most historical linguists include a number of other phenomena under the analogy umbrella. Some of these, such as paradigm leveling—the reduction or elimination of stem alternations in paradigms—are arguably largely proportional, but others such as contamination and folk etymology seem to have less to do with the normal operation of the mental grammar and instead involve some kind of interference among the mental representations of phonetically or semantically similar forms. The Neogrammarian approach to analogical change has been criticized and challenged on a variety of grounds, and a number of important scholars use the term “analogy” in a rather different sense, to refer to the role that phonological and/or semantic similarity plays in the influence that forms exert on each other.
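As a concrete instance of the proportional format (an editorial illustration, not drawn from the abstract itself): the American English past tense dove, beside older dived, is standardly analyzed as an analogical innovation modeled on strong verbs of the drive class:

$$ \textit{drive} : \textit{drove} = \textit{dive} : X \quad\Rightarrow\quad X = \textit{dove} $$

The innovator discerns the formal relationship between drive and drove and deduces the form that stands in the same relationship to dive.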

Article

Alan Reed Libert

Artificial languages—languages which have been consciously designed—have been created for more than 900 years, although the number of them has increased considerably in recent decades, and by the early 21st century the total figure was probably in the thousands. There have been several goals behind their creation; the traditional one (which applies to some of the best-known artificial languages, including Esperanto) is to make international communication easier. Some other well-known artificial languages, such as Klingon, have been designed in connection with works of fiction. Still others are simply personal projects. A traditional way of classifying artificial languages involves the extent to which they make use of material from natural languages. Those artificial languages which are created mainly by taking material from one or more natural languages are called a posteriori languages (which again include well-known languages such as Esperanto), while those which do not use natural languages as sources are a priori languages (although many a posteriori languages have a limited amount of a priori material, and some a priori languages have a small number of a posteriori components). Between these two extremes are the mixed languages, which have large amounts of both a priori and a posteriori material. Artificial languages can also be classified typologically (as natural languages are) and by how and how much they have been used. Many linguists seem to be biased against research on artificial languages, although some major linguists of the past have been interested in them.

Article

Franz Rainer

Blocking can be defined as the non-occurrence of some linguistic form, whose existence could be expected on general grounds, due to the existence of a rival form. *Oxes, for example, is blocked by oxen, *stealer by thief. Although blocking is closely associated with morphology, in reality the competing “forms” can be not only morphemes or words but also syntactic units. In German, for example, the compound Rotwein ‘red wine’ blocks the phrasal unit *roter Wein (in the relevant sense), just as the phrasal unit rote Rübe ‘beetroot; lit. red beet’ blocks the compound *Rotrübe. In these examples, one crucial factor determining blocking is synonymy; speakers apparently have a deep-rooted presumption against synonyms. Whether homonymy can also lead to a similar avoidance strategy is still controversial. But even if homonymy blocking exists, it certainly is much less systematic than synonymy blocking. In all the examples mentioned above, it is a word stored in the mental lexicon that blocks a rival formation. However, besides such cases of lexical blocking, one can observe blocking among productive patterns. Dutch has three suffixes for deriving agent nouns from verbal bases, -er, -der, and -aar. Of these three suffixes, the first one is the default choice, while -der and -aar are chosen in very specific phonological environments: as Geert Booij describes in The Morphology of Dutch (2002), “the suffix -aar occurs after stems ending in a coronal sonorant consonant preceded by schwa, and -der occurs after stems ending in /r/” (p. 122). Contrary to lexical blocking, the effect of this kind of pattern blocking does not depend on words stored in the mental lexicon and their token frequency but on abstract features (in the case at hand, phonological features). Blocking was first recognized by the Indian grammarian Pāṇini in the 5th or 4th century BC, when he stated that of two competing rules, the more restricted one had precedence. In the 1960s, this insight was revived by generative grammarians under the name “Elsewhere Principle,” which is still used in several grammatical theories (Distributed Morphology and Paradigm Function Morphology, among others). Alternatively, other theories, which go back to the German linguist Hermann Paul, have tackled the phenomenon on the basis of the mental lexicon. The great advantage of this latter approach is that it can account, in a natural way, for the crucial role played by frequency. Frequency is also crucial in the most promising account of how blocking can be learned, so-called statistical pre-emption.
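Booij’s description has the shape of a default plus more specific environments, which is precisely the configuration the Elsewhere Principle regulates. The following Python sketch is purely illustrative (the transcriptions, the schwa symbol “@”, and the function name are assumptions of this example, not Booij’s notation) and encodes only the two conditioning environments quoted above:

```python
# A minimal sketch of Elsewhere-style suffix selection for Dutch agent
# nouns, loosely after Booij (2002). Stems are broad phonemic strings in
# which "@" stands for schwa; the rules and transcriptions are
# illustrative simplifications, not a full analysis of Dutch phonology.

CORONAL_SONORANTS = {"n", "l", "r"}

def agent_suffix(stem: str) -> str:
    """Return -er, -der, or -aar: the most specific applicable rule wins."""
    # Specific rule 1: stem ends in schwa + coronal sonorant.
    if len(stem) >= 2 and stem[-2] == "@" and stem[-1] in CORONAL_SONORANTS:
        return "-aar"   # e.g. wandel- 'walk' -> wandelaar
    # Specific rule 2: stem ends in /r/.
    if stem.endswith("r"):
        return "-der"   # e.g. huur- 'rent' -> huurder
    # Elsewhere: the default suffix.
    return "-er"        # e.g. bak- 'bake' -> bakker

print(agent_suffix("wand@l"), agent_suffix("hur"), agent_suffix("bak"))
# -aar -der -er
```

Checking the schwa case before the /r/ case matters for stems such as hamster- ‘hoard’, which ends in /r/ preceded by schwa and, on the common analysis, takes -aar (hamsteraar).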

Article

The Early Modern interest taken in language was intense and versatile. In this period, language education gradually ceased to center solely on Latin. The linguistic scope widened considerably, partly as a result of scholarly curiosity, although religious and missionary zeal, commercial considerations, and political motives were also of decisive significance. Statesmen discovered the political power of standardized vernaculars in the typically Early Modern process of state formation. The widening of the linguistic horizon was, first and foremost, reflected in a steadily increasing production of grammars and dictionaries, along with pocket textbooks, conversational manuals, and spelling treatises. One strategy of coping with the stunning linguistic diversity consisted of first collecting data on as many languages as possible and then tracing elements that were common to all or to certain groups of languages. Language comparison was not limited to historical and genealogical endeavors, as scholars also started to compare a number of languages in terms of their alleged vices and qualities. Another way of dealing with the flood of linguistic data consisted of focusing on what the different languages had in common, which led to the development of general grammars, of which the 17th-century Port-Royal grammar is the best known. During the Enlightenment, the nature of language and its cognitive merits or vices also became a central theme in philosophical debates in which major thinkers were actively engaged.

Article

The differentiation of English into separate varieties in the regions of Britain and Ireland has a long history. This is connected with the separate but related identities of England, Wales, Scotland, and Ireland. In this article the main linguistic traits of the regions are described and discussed within the framework of language variation and change, an approach to linguistic differentiation that attempts to identify patterns of speaker social behavior and trajectories along which varieties develop. The section on England is subdivided into rural and urban forms of English, the former associated with the broad regions of the North, the Midlands, East Anglia, the Southeast and South, and the West Country. For urban varieties, English in the cities of London, Norwich, Milton Keynes, Bristol, Liverpool, and Newcastle upon Tyne is discussed in the light of the available data and existing scholarship. English in the Celtic regions of Britain and Ireland is examined in dedicated sections on Scotland, Wales, and Ireland. Finally, the focus turns to varieties of English found on the smaller islands around Britain: the Orkney and Shetland Islands, the Isle of Man, and the Channel Islands.

Article

Markku Filppula and Juhani Klemola

Few European languages have undergone changes as radical in the course of their histories as English did in the medieval period. The earliest documented variety of the language, Old English (c. 450 to 1100 CE), was a synthetic language, typologically similar to modern German, with its three genders, relatively free word order, rich case system, and verbal morphology. By the beginning of the Middle English period (c. 1100 to 1500), changes that had begun a few centuries earlier in the Old English period had resulted in a remarkable typological shift from a synthetic language to an analytic language with fixed word order, very few inflections, and a heavy reliance on function words. System-internal pressures had a role to play in these changes, but arguably they were primarily due to intensive contacts with other languages, including Celtic languages, (British) Latin, Scandinavian languages, and, a little later, French. As a result, English came to diverge from its Germanic sister languages, losing or reducing such Proto-Germanic features as grammatical gender; most inflections on nouns, adjectives, pronouns, and verbs; verb-second syntax; and certain types of reflexive marking. Among the external influences, long contacts with speakers of especially Brittonic Celtic languages (i.e., Welsh, Cornish, and Cumbrian) can be considered to have been of particular importance. Following the arrival of the Angles, Saxons, and Jutes from around 450 CE onward, there began an intensive and large-scale process of language shift on the part of the indigenous Celtic- and British Latin-speaking population of Britain. Received wisdom in contact linguistics holds that in such circumstances—when the contact is intensive and the shifting population large enough—the acquired language (in this case English) undergoes moderate to heavy restructuring of its grammatical system, leading generally to simplification of its morphosyntax. In the history of English, this process was also greatly reinforced by the Viking invasions, which started in the late 8th century CE and brought a large Scandinavian-speaking population to Britain. The resulting contacts between the Anglo-Saxons and the Vikings also contributed to the reduction in complexity of Old English morphosyntax. In addition, the Scandinavian settlements of the Danelaw area left their permanent mark in place-names and dialect vocabulary, especially in the eastern and northern parts of the country. In contrast to syntactic influences, which are typical of conditions of language shift, contacts that are less intensive and involve extensive bilingualism generally lead to lexical borrowing. This was the situation following the Norman Conquest of England in 1066 CE. It led to an influx of French loanwords into English, most of which have persisted in use up to the present day. It has been estimated that almost one third of the present-day English vocabulary is of French origin. By comparison, there is far less evidence of French influence on “core” English syntax. The earliest loanwords were introduced by the French-speaking new nobility and pertained to administration, law, military terminology, and religion. Cultural prestige was the prime motivation for the later medieval borrowings.

Article

John E. Joseph

Ferdinand de Saussure (1857–1913), the founding figure of modern linguistics, made his mark on the field with a book he published a month after his 21st birthday, in which he proposed a radical rethinking of the original system of vowels in Proto-Indo-European. A year later, he submitted his doctoral thesis on a morpho-syntactic topic, the genitive absolute in Sanskrit, to the University of Leipzig. He went to Paris intending to do a second, French doctorate, but instead he was given responsibility for courses on Gothic and Old High German at the École Pratique des Hautes Études, and for managing the publications of the Société de Linguistique de Paris. He abandoned more than one large publication project of his own during the decade he spent in Paris. In 1891 he returned to his native Geneva, where the University created a chair in Sanskrit and the history and comparison of languages for him. He produced some significant work on Lithuanian during this period, connected to his early book on the Indo-European vowel system, and yielding Saussure’s Law, concerning the placement of stress in Lithuanian. He undertook writing projects about the general nature of language, but again abandoned them. In 1907, 1908–1909, and 1910–1911, he gave three courses in general linguistics at the University of Geneva, in which he developed an approach to languages as systems of signs, each sign consisting of a signifier (sound pattern) and a signified (concept), both of them mental rather than physical in nature, and conjoined arbitrarily and inseparably. The socially shared language system, or langue, makes possible the production and comprehension of parole, utterances, by individual speakers and hearers. Each signifier and signified is a value generated by its difference from all the other signifiers or signifieds with which it coexists on an associative (or paradigmatic) axis, and affected as well by its syntagmatic axis. Shortly after Saussure’s death at 55, two of his colleagues, Charles Bally and Albert Sechehaye, gathered together students’ notes from the three courses, as well as manuscript notes by Saussure, and from them constructed the Cours de linguistique générale, published in 1916. Over the course of the next several decades, this book became the basis for the structuralist approach, initially within linguistics, and later adapted to other fields. Saussure left behind a large quantity of manuscript material that has gradually been published over the last few decades, and continues to be published, shedding new light on his thought.

Article

James McElvenny

The German sinologist and general linguist Georg von der Gabelentz (1840–1893) occupies an interesting place at the intersection of several streams of linguistic scholarship at the end of the 19th century. As Professor of East Asian languages at the University of Leipzig from 1878 to 1889 and then Professor for Sinology and General Linguistics at the University of Berlin from 1889 until his death, Gabelentz was present at some of the main centers of linguistics at the time. He was, however, generally critical of mainstream historical-comparative linguistics as propagated by the neogrammarians, and instead emphasized approaches to language inspired by a line of researchers including Wilhelm von Humboldt (1767–1835), H. Steinthal (1823–1899), and his own father, Hans Conon von der Gabelentz (1807–1874). Today Gabelentz is chiefly remembered for several theoretical and methodological innovations which continue to play a role in linguistics. Most significant among these are his contributions to cross-linguistic syntactic comparison and typology, grammar-writing, and grammaticalization. His earliest linguistic work emphasized the importance of syntax as a core part of grammar and sought to establish a framework for the cross-linguistic description of word order, as had already been attempted for morphology by other scholars. The importance he attached to syntax was motivated by his engagement with Classical Chinese, a language almost devoid of morphology and highly reliant on syntax. In describing this language in his 1881 Chinesische Grammatik, Gabelentz elaborated and implemented the complementary “analytic” and “synthetic” systems of grammar, an approach to grammar-writing that continues to serve as a point of reference up to the present day. In his summary of contemporary thought on the nature of grammatical change in language, he became one of the first linguists to formulate the principles of grammaticalization in essentially the form in which this phenomenon is studied today, although he did not use the current term. One key term of modern linguistics that he did employ, however, is “typology,” a term that he in fact coined. Gabelentz’s typology was a development of various contemporary strands of thought, including his own comparative syntax, and is widely acknowledged as a direct precursor of the present-day field. Gabelentz is a significant transitional figure from the 19th to the 20th century. On the one hand, his work seems very modern. Beyond his contributions to grammaticalization avant la lettre and his christening of typology, his conception of language prefigures the structuralist revolution of the early 20th century in important respects. On the other hand, he continues to entertain several preoccupations of the 19th century—in particular the judgment of the relative value of different languages—which were progressively banished from linguistics in the first decades of the 20th century.

Article

Béatrice Godart-Wendling

The term “philosophy of language” is intrinsically paradoxical: it denominates the main philosophical current of the 20th century but is devoid of any univocal definition. While the emergence of this current was based on the idea that philosophical questions were only language problems that could be elucidated through a logico-linguistic analysis, the interest in this approach gave rise to philosophical theories that, although some of them have points of convergence, developed very different philosophical conceptions. The only constant in all these theories is the recognition that this current of thought originated in the work of Gottlob Frege (1848–1925), thus marking what was to be called “the linguistic turn.” Despite the theoretical diversity within the philosophy of language, the history of this current can however be traced in four stages: The first one began in 1892 with Frege’s paper “Über Sinn und Bedeutung” and aimed to clarify language by using the rules of logic. The Fregean principle underpinning this program was that we must banish psychological considerations from linguistic analysis in order to avoid associating the meaning of words with mental pictures or states. The work of Frege, Bertrand Russell (1872–1970), G. E. Moore (1873–1958), Ludwig Wittgenstein (1889–1951) in his 1921 Tractatus, Rudolf Carnap (1891–1970), and Willard Van Orman Quine (1908–2000) is representative of this period. In this logicist point of view, the questions raised mainly concerned syntax and semantics, since the goal was to define a formalism able to represent the structure of propositions and to explain how language can describe the world by mirroring it. The problem specific to this period was therefore the function of representing the world by language, thus placing at the heart of the philosophical debate the notions of reference, meaning, and truth. The second phase of the philosophy of language was adumbrated in the 1930s with the courses given by Wittgenstein in Cambridge (The Blue and Brown Books), but it did not really take off until 1950–1960 with the work of Peter Strawson (1919–2006), the later Wittgenstein (Philosophical Investigations, 1953), John Austin (1911–1960), and John Searle (1932–). In spite of the very different approaches developed by these theorists, the two main ideas that characterized this period were: one, that only the examination of natural (also called “ordinary”) language can give access to an understanding of how language functions, and two, that the specificity of this language resides in its ability to perform actions. It was therefore no longer a question of analyzing language in logical terms, but rather of considering it in itself, by examining the meaning of statements as they are used in given contexts. In this perspective, the pivotal concepts explored by philosophers became those of (situated) meaning, felicity conditions, use, and context. The beginning of the 1970s initiated the third phase of this movement by orienting research in two quite distinct directions. The first, resulting from the work on proper names, natural-kind words, and indexicals undertaken by the philosopher-logicians Saul Kripke (1940–), David Lewis (1941–2001), Hilary Putnam (1926–2016), and David Kaplan (1933–), brought credibility to the semantics of possible worlds. The second, conducted by Paul Grice (1913–1988) on human communicational rationality, harked back to the psychologism dismissed by Frege and conceived of the functioning of language as highly dependent on a theory of mind. The focus was then put on the inferences that the different protagonists in a linguistic exchange construct from the recognition of hidden intentions in the discourse of others. In this perspective, the concepts of implicitness, relevance, and cognitive efficiency became central and required involving a greater number of contextual parameters to account for them. In the wake of this research, many theorists turned to the philosophy of mind, as evidenced in the late 1980s by the work on relevance by Dan Sperber (1942–) and Deirdre Wilson (1941–). The contemporary period, marked by the thinking of Robert Brandom (1950–) and Charles Travis (1943–), is characterized by its orientation toward a radical contextualism and by the return of an inferentialism that draws strongly on Frege. Within these theoretical frameworks, the notions of truth and reference no longer fall within the field of semantics but rather of pragmatics. The emphasis is placed on the commitment that speakers make when they speak, as well as on their responsibility with respect to their utterances.

Article

Silvio Moreira de Sousa, Johannes Mücke, and Philipp Krämer

As an institutionalized subfield of academic research, Creole studies (or Creolistics) emerged in the second half of the 20th century on the basis of pioneering works in the last decades of the 19th century and first half of the 20th century. Yet its research traditions—just like the Creole languages themselves—are much older and are deeply intertwined with the history of European colonialism, slavery, and Christian missionary activities all around the globe. Throughout the history of research, creolists focused on the emergence of Creole languages and their grammatical structures—often in comparison to European colonial languages. In connection with the observations in grammar and history, creolists discussed theoretical matters such as the role of language acquisition in creolization, the status of Creoles among the other languages in the world, and the social conditions in which they are or were spoken. These discussions molded the way in which the acquired knowledge was transmitted to the following generations of creolists.

Article

The grammatization of European vernacular languages began in the Late Middle Ages and Renaissance and continued up until the end of the 18th century. Through this process, grammars were written for the vernaculars and, as a result, the vernaculars were able to establish themselves in important areas of communication. Vernacular grammars largely followed the example of those written for Latin, using Latin descriptive categories without fully adapting them to the vernaculars. In accord with the Greco-Latin tradition, the grammars typically contain sections on orthography, prosody, morphology, and syntax, with the most space devoted to the treatment of word classes in the section on “etymology.” The earliest grammars of vernaculars had two main goals: on the one hand, making the languages described accessible to non-native speakers, and on the other, supporting the learning of Latin grammar by teaching the grammar of speakers’ native languages. Initially, it was considered unnecessary to engage with the grammar of native languages for their own sake, since they were thought to be acquired spontaneously. Only gradually did a need develop for normative grammars that sought to codify languages. This development relied on an awareness of the value of vernaculars that attributed a certain degree of perfection to them. Grammars of indigenous languages in colonized areas were based on those of European languages; today they offer information about the early states of those languages and are indeed sometimes the only sources for now-extinct languages. Grammars of vernaculars came into being in the contrasting contexts of general grammar and the grammars of individual languages, of grammar as science and grammar as art, and of description and standardization. In the standardization of languages, the guiding principle could either be that of anomaly, which took a particular variety of a language as the basis of the description, or that of analogy, which permitted interventions into a language aimed at making it more uniform.

Article

Ans van Kemenade

The status of English in the early 21st century makes it hard to imagine that the language started out as an assortment of North Sea Germanic dialects spoken in parts of England only by immigrants from the continent. Itself soon under threat, first from the language(s) spoken by Viking invaders, then from French as spoken by the Norman conquerors, English continued to thrive as an essentially West-Germanic language that did, however, undergo some profound changes resulting from contact with Scandinavian and French. A further decisive period of change is the late Middle Ages, which started a tremendous societal scale-up that triggered pervasive multilingualism. These repeated layers of contact between different populations, first locally, then nationally, followed by standardization and 18th-century codification, metamorphosed English into a language closely related to, yet quite distinct from, its closest relatives Dutch and German in nearly all language domains, not least in word order, grammar, and pronunciation.

Article

Ever since the fundamental studies carried out by the great German Romanist Max Leopold Wagner (1880–1962), the acknowledged founder of scientific research on Sardinian, the lexicon has been, and still is, one of the most investigated and best-known areas of the Sardinian language. Several substrate components stand out in the Sardinian lexicon around a fundamental layer which has a clear Latin lexical background. The so-called Paleo-Sardinian layer is particularly intriguing. This is a conventional label for the linguistic varieties spoken in the prehistoric and protohistoric ages in Sardinia. Indeed, the relatively large amount of words (toponyms in particular) which can be traced back to this substrate clearly distinguishes the Sardinian lexicon within the panorama of the Romance languages. As for the other Pre-Latin substrata, the Phoenician-Punic presence mainly (although not exclusively) affected southern and western Sardinia, where we find the highest concentration of Phoenician-Punic loanwords. On the other hand, recent studies have shown that the Latinization of Sardinia was more complex than once thought. In particular, the alleged archaic nature of some features of Sardinian has been questioned. Moreover, research carried out in recent decades has underlined the importance of the Greek Byzantine superstrate, which has actually left far more evident lexical traces than previously thought. Finally, from the late Middle Ages onward, the contributions from the early Italian, Catalan, and Spanish superstrates, as well as from modern and contemporary Italian, have substantially reshaped the modern-day profile of the Sardinian lexicon. In these cases too, more recent research has shown a deeper impact of these components on the Sardinian lexicon, especially as regards the influence of Italian.

Article

Irit Meir and Oksana Tkachman

Iconicity is a relationship of resemblance or similarity between the two aspects of a sign: its form and its meaning. An iconic sign is one whose form resembles its meaning in some way. The opposite of iconicity is arbitrariness. In an arbitrary sign, the association between form and meaning is based solely on convention; there is nothing in the form of the sign that resembles aspects of its meaning. The Hindu-Arabic numerals 1, 2, 3 are arbitrary, because their current form does not correlate with any aspect of their meaning. In contrast, the Roman numerals I, II, III are iconic, because the number of occurrences of the sign I correlates with the quantity that the numerals represent. Because iconicity has to do with the properties of signs in general and not only those of linguistic signs, it plays an important role in the field of semiotics—the study of signs and signaling. However, language is the most pervasive symbolic communicative system used by humans, and the notion of iconicity plays an important role in characterizing the linguistic sign and linguistic systems. Iconicity is also central to the study of literary uses of language, such as prose and poetry. There are various types of iconicity, since the form of a sign may resemble aspects of its meaning in several ways: it may create a mental image of the concept (imagic iconicity), or its structure and the arrangement of its elements may resemble the structural relationship between components of the concept represented (diagrammatic iconicity). An example of the first type is the word cuckoo, whose sounds resemble the call of the bird, or a sign such as RABBIT in Israeli Sign Language, whose form—the hands representing the rabbit's long ears—resembles a visual property of that animal. An example of diagrammatic iconicity is vēnī, vīdī, vīcī, where the order of clauses in a discourse is understood as reflecting the sequence of events in the world. Iconicity is found on all linguistic levels: phonology, morphology, syntax, semantics, and discourse. It is found both in spoken languages and in sign languages. However, sign languages, because of the visual-gestural modality through which they are transmitted, are much richer in iconic devices, and therefore offer a rich array of topics and perspectives for investigating iconicity, and the interaction between iconicity and language structure.

Article

During the period from the fall of the Roman empire in the late 5th century to the beginning of the European Renaissance in the 14th century, the development of linguistic thought in Europe was characterized by the enthusiastic study of grammatical works by Classical and Late Antique authors, as well as by the adaptation of these works to suit a Christian framework. The discipline of grammatica, viewed as the cornerstone of the ideal liberal arts education and as a key to the wider realm of textual culture, was understood to encompass both the systematic principles for speaking and writing correctly and the science of interpreting the poets and other writers. The writings of Donatus and Priscian were among the most popular and well-known works of the grammatical curriculum, and were the subject of numerous commentaries throughout the medieval period. Although Latin persisted as the predominant medium of grammatical discourse, there is also evidence from as early as the 8th century for the enthusiastic study of vernacular languages and for the composition of vernacular-medium grammars, including sources pertaining to Anglo-Saxon, Irish, Old Norse, and Welsh. The study of language in the later medieval period is marked by experimentation with the form and layout of grammatical texts, including the composition of textbooks in verse form. This period also saw a renewed interest in the application of philosophical ideas to grammar, inspired in part by the availability of a wider corpus of Greek sources than had previously been known to western European scholars, such as Aristotle’s Physics, Metaphysics, Ethics, and De Anima. A further consequence of the renewed interest in the logical and metaphysical works of Aristotle during the later Middle Ages is the composition of so-called ‘speculative grammars’ written by scholars commonly referred to as the ‘Modistae’, in which the grammatical description of Latin formulated by Priscian and Donatus was integrated with the system of scholastic philosophy that was at its height from the beginning of the 13th to the middle of the 14th century.

Article

Traditional Chinese linguistics grew out of two distinct interests in language: the philosophical reflection on things and their names, and the practical concern for literacy education and the correct understanding of classical works. The former is most typically found in the teachings of such pre-Qin masters as Confucius, Mozi, and Gongsun Long, who lived between the 6th and 3rd centuries BC, the latter in the enormous number of dictionaries, textbooks, and research works which, as a reflection of the fact that most Chinese morphemes are monosyllabic, are centered around the pronunciations, written forms, and meanings of these monosyllabic morphemes, or zi (“characters”) as they are called in Chinese. Apparently, it was the latter, philological, interest that motivated the bulk of the Chinese linguistic tradition, giving rise to such important works as Shuowen Jiezi and Qieyun, and culminating in the scholarship of the Qing Dynasty (1616–1911). But at bottom, the philosophical concern never ceased to exist: The dominating idea that all things should have their rightful names just as they should occupy their rightful places in the universe, for example, was behind the compilation of Shuowen Jiezi and many other works. Further, the development of philology, or xiaoxue (“basic learning”), was strongly influenced by the study of philosophical thought, or daxue (“greater learning”), throughout its history. The picture just presented, in which Chinese philosophy and philology are combined to form a seemingly autonomous tradition, is complicated, however, by the fact that the Indic linguistic tradition started to influence the Chinese in the 2nd century AD, causing remarkable changes in the analytic techniques (especially regarding character pronunciation), findings, and course of development of language studies in China. Most crucially, scholars began to realize that syllables had internal structures and that the pronunciation of one character could be represented by two others that shared with it, respectively, the same initial and the same final. This technique, known as fanqie, laid the basis for the illustrious 7th-century rhyme dictionary Qieyun, the rhyme table Yunjing, and a great many works that followed. These works, besides providing reference for verse composition (and, consequently, for the imperial examinations held to select government officials), proved such an essential tool in the philological study of classical works that many Qing scholars, at the very height of traditional Chinese linguistics, regarded character pronunciation as central to xiaoxue and indispensable for the understanding of ancient texts. While character pronunciation received overwhelming attention, the studies of character form and meaning continued to develop, though they were frequently influenced by and sometimes combined with the study of character pronunciation, as in the analysis of the relations between Old Chinese sound categories and the phonetic components of Chinese characters, and in the application of such analysis to the exegetical investigation of classical texts. Chinese, with its linguistic tradition, had a profound impact on ancient East Asia.
Not only did traditional studies of Japanese, Tangut, and other languages show significant Chinese influence (not the least achievement of which was the invention of the earliest writing systems for these languages), but many scholars from Japan and Korea actually took an active part in the study of Chinese as well, so that the Chinese linguistic tradition would itself be incomplete without the materials and findings these non-Chinese scholars have contributed. On the other hand, some of these scholars, most notably Motoori Norinaga and Fujitani Nariakira in Japan, were able to free themselves from the character-centered Chinese routine and develop rather original linguistic theories.
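A standard textbook case makes the fanqie technique concrete: the Guangyun, the received redaction of the Qieyun tradition, glosses 東 ‘east’ as 德紅切, that is, its pronunciation combines the initial of 德 with the final of 紅. The short Python sketch below imitates that composition using modern pinyin spellings, an anachronistic convenience adopted purely for illustration (the historical system operated on Middle Chinese categories, tone class included, not on modern readings):

```python
# Toy illustration of fanqie "spelling": combine the initial (onset) of
# the first speller with the final (rhyme) of the second. Pinyin stands
# in, anachronistically, for Middle Chinese sound categories.

# Onsets are checked longest-first so digraphs match before single letters.
INITIALS = ("zh", "ch", "sh", "b", "p", "m", "f", "d", "t", "n", "l",
            "g", "k", "h", "j", "q", "x", "z", "c", "s", "r", "y", "w")

def split_syllable(syllable: str):
    """Split a toneless pinyin syllable into (initial, final)."""
    for onset in INITIALS:
        if syllable.startswith(onset):
            return onset, syllable[len(onset):]
    return "", syllable  # zero-initial syllable

def fanqie(upper: str, lower: str) -> str:
    """Initial of `upper` plus final of `lower`, as in a fanqie gloss."""
    initial, _ = split_syllable(upper)
    _, final = split_syllable(lower)
    return initial + final

print(fanqie("de", "hong"))  # 德 + 紅 -> "dong" (東)
```

The point of the sketch is only the division of labor it encodes: every syllable is treated as an initial plus a final, which is precisely the insight that the fanqie technique embodied.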

Article

André Thibault and Nicholas LoVecchio

The Romance languages have been involved in many situations of language contact. While language contact is evident at all levels, the most visible effects on the system of the recipient language concern the lexicon. The relationship between language contact and the lexicon raises some theoretical issues that are not always adequately addressed, including in etymological lexicography. First is the very notion of what constitutes “language contact.” Contrary to a somewhat dated view, language contact does not necessarily imply physical presence, contemporaneity, and orality: as far as the lexicon is concerned, contact can happen over time and space, particularly through written media. Depending on the kind of extralinguistic circumstances at stake, language contact can be induced by diverse factors, leading to different forms of borrowing. The misleading terms borrowings or loans mask the reality that these are actually adapted imitations—whether formal, semantic, or both—of a foreign model. Likewise, the common Latin or Greek origins of a huge proportion of the Romance lexicon often obscure the real history of words. As these classical languages have contributed numerous technical and scientific terms, as well as a series of “roots,” words coined in one Romance language can easily be reproduced in any other. However, simply reducing a word’s etymology to the origin of its components (classical or otherwise), ignoring intermediate stages and possibly intermediating languages in the borrowing process, is a distortion of word history. To the extent that it is useful to refer to “internationalisms,” related words in different Romance languages merit careful, often arduous research in the process of identifying the actual origin of a given coinage. From a methodological point of view, it is crucial to distinguish between the immediate lending language and the oldest stage that can be identified, with the former being more relevant in a rigorous approach to comparative historical lexicology. Concrete examples from Ibero-Romania, Gallo-Romania, Italo-Romania, and Balkan-Romania highlight the variety of different Romance loans and reflect the diverse historical factors particular to each linguistic community in which borrowing occurred.

Article

Émilie Aussant

Indian linguistic thought begins around the 8th–6th centuries BCE with the composition of Padapāṭhas (word-for-word recitation of Vedic texts where phonological rules generally are not applied). It took various forms over these 26 centuries and involved different languages (Ancient, Middle, and Modern Indo-Aryan as well as Dravidian languages). The greater part of documented thought is related to Sanskrit (Ancient Indo-Aryan). Very early, the oral transmission of sacred texts—the Vedas, composed in Vedic Sanskrit—made it necessary to develop techniques based on a subtle analysis of language. The Vedas also—but presumably later—gave birth to bodies of knowledge dealing with language, which are traditionally called Vedāṅgas: phonetics (śikṣā), metrics (chandas), grammar (vyākaraṇa), and semantic explanation (nirvacana, nirukta). Later on, Vedic exegesis (mīmāṃsā), new dialectics (navya-nyāya), lexicography, and poetics (alaṃkāra) also contributed to linguistic thought. Though languages other than Sanskrit were described in premodern India, the grammatical description of Sanskrit—given in Sanskrit—dominated and influenced them more or less strongly. Sanskrit grammar (vyākaraṇa) has a long history marked by several major steps (Padapāṭha versions of Vedic texts, Aṣṭādhyāyī of Pāṇini, Mahābhāṣya of Patañjali, Bhartṛhari’s works, Siddhāntakaumudī of Bhaṭṭoji Dīkṣita, Nāgeśa’s works), and the main topics it addresses (minimal meaning-bearer units, classes of words, relation between word and meaning/referent, the primary meaning/referent of nouns) are still central issues for contemporary linguistics.

Article

Otto Zwartjes

Missionary dictionaries are printed books or manuscripts compiled by missionaries in which words are listed systematically, followed by words which have the same meaning in another language. These dictionaries were mainly written as tools for language teaching and learning in a missionary-colonial setting, although quite a few also have a more encyclopedic character, containing invaluable information on non-Western cultures from all continents. In this article, several types of dictionaries are analyzed: bilingual-monodirectional, bilingual and bidirectional, and multilingual. Most examples are taken from an illustrative selected corpus of missionary dictionaries describing non-Western languages during the colonial period, with particular focus on the function of these dictionaries in a missionary context, their users, their macrostructure and organizational principles, and the typology of the microstructure and markedness in lemmatization.