Ans van Kemenade
The status of English in the early 21st century makes it hard to imagine that the language started out as an assortment of North Sea Germanic dialects spoken in parts of England only by immigrants from the continent. Itself soon under threat, first from the language(s) spoken by Viking invaders, then from French as spoken by the Norman conquerors, English continued to thrive as an essentially West-Germanic language that did, however, undergo some profound changes resulting from contact with Scandinavian and French. A further decisive period of change is the late Middle Ages, which set in motion a tremendous societal scale-up that triggered pervasive multilingualism. These repeated layers of contact between different populations, first locally, then nationally, followed by standardization and 18th-century codification, metamorphosed English into a language closely related to, yet quite distinct from, its closest relatives Dutch and German in nearly all language domains, not least in word order, grammar, and pronunciation.
Alan Reed Libert
Artificial languages—languages which have been consciously designed—have been created for more than 900 years, although the number of them has increased considerably in recent decades, and by the early 21st century the total figure was probably in the thousands. There have been several goals behind their creation; the traditional one (which applies to some of the best-known artificial languages, including Esperanto) is to make international communication easier. Some other well-known artificial languages, such as Klingon, have been designed in connection with works of fiction. Still others are simply personal projects.
A traditional way of classifying artificial languages involves the extent to which they make use of material from natural languages. Those artificial languages which are created mainly by taking material from one or more natural languages are called a posteriori languages (which again include well-known languages such as Esperanto), while those which do not use natural languages as sources are a priori languages (although many a posteriori languages have a limited amount of a priori material, and some a priori languages have a small number of a posteriori components). Between these two extremes are the mixed languages, which have large amounts of both a priori and a posteriori material. Artificial languages can also be classified typologically (as natural languages are) and by how and how much they have been used.
Many linguists seem to be biased against research on artificial languages, although some major linguists of the past have been interested in them.
The grammatization of European vernacular languages began in the Late Middle Ages and Renaissance and continued up until the end of the 18th century. Through this process, grammars were written for the vernaculars and, as a result, the vernaculars were able to establish themselves in important areas of communication. Vernacular grammars largely followed the example of those written for Latin, using Latin descriptive categories without fully adapting them to the vernaculars. In accord with the Greco-Latin tradition, the grammars typically contain sections on orthography, prosody, morphology, and syntax, with the most space devoted to the treatment of word classes in the section on “etymology.” The earliest grammars of vernaculars had two main goals: on the one hand, making the languages described accessible to non-native speakers, and on the other, supporting the learning of Latin grammar by teaching the grammar of speakers’ native languages. Initially, it was considered unnecessary to engage with the grammar of native languages for their own sake, since they were thought to be acquired spontaneously. Only gradually did a need for normative grammars develop which sought to codify languages. This development relied on an awareness of the value of vernaculars that attributed a certain degree of perfection to them. Grammars of indigenous languages in colonized areas were based on those of European languages and today offer information about the early state of those languages; indeed, they are sometimes the only sources for now-extinct languages. Grammars of vernaculars came into being in the contrasting contexts of general grammar and the grammars of individual languages, between grammar as science and as art, and between description and standardization.
In the standardization of languages, the guiding principle could either be that of anomaly, which took a particular variety of a language as the basis of the description, or that of analogy, which permitted interventions into a language aimed at making it more uniform.
Irit Meir and Oksana Tkachman
Iconicity is a relationship of resemblance or similarity between the two aspects of a sign: its form and its meaning. An iconic sign is one whose form resembles its meaning in some way. The opposite of iconicity is arbitrariness. In an arbitrary sign, the association between form and meaning is based solely on convention; there is nothing in the form of the sign that resembles aspects of its meaning. The Hindu-Arabic numerals 1, 2, 3 are arbitrary, because their current form does not correlate with any aspect of their meaning. In contrast, the Roman numerals I, II, III are iconic, because the number of occurrences of the sign I correlates with the quantity that the numerals represent. Because iconicity has to do with the properties of signs in general and not only those of linguistic signs, it plays an important role in the field of semiotics—the study of signs and signaling. However, language is the most pervasive symbolic communicative system used by humans, and the notion of iconicity plays an important role in characterizing the linguistic sign and linguistic systems. Iconicity is also central to the study of literary uses of language, such as prose and poetry.
There are various types of iconicity: the form of a sign may resemble aspects of its meaning in several ways: it may create a mental image of the concept (imagic iconicity), or its structure and the arrangement of its elements may resemble the structural relationship between components of the concept represented (diagrammatic iconicity). An example of the first type is the word cuckoo, whose sounds resemble the call of the bird, or a sign such as RABBIT in Israeli Sign Language, whose form—the hands representing the rabbit's long ears—resembles a visual property of that animal. An example of diagrammatic iconicity is vēnī, vīdī, vīcī, where the order of clauses in a discourse is understood as reflecting the sequence of events in the world.
Iconicity is found on all linguistic levels: phonology, morphology, syntax, semantics, and discourse. It is found both in spoken languages and in sign languages. However, sign languages, because of the visual-gestural modality through which they are transmitted, are much richer in iconic devices, and therefore offer a rich array of topics and perspectives for investigating iconicity, and the interaction between iconicity and language structure.
Missionary dictionaries are printed books or manuscripts compiled by missionaries in which words are listed systematically, each followed by words which have the same meaning in another language. These dictionaries were mainly written as tools for language teaching and learning in a missionary-colonial setting, although quite a few dictionaries also have a more encyclopedic character, containing invaluable information on non-Western cultures from all continents. In this article, several types of dictionaries are analyzed: bilingual and monodirectional, bilingual and bidirectional, and multilingual. Most examples are taken from an illustrative selected corpus of missionary dictionaries describing non-Western languages during the colonial period, with particular focus on the function of these dictionaries in a missionary context, their users, macrostructure, organizational principles, and the typology of the microstructure and markedness in lemmatization.
Missionary grammars are printed books or manuscripts compiled by missionaries in which a particular language is described. These grammars were mainly written as pedagogical tools for language teaching and learning in a missionary-colonial setting, although quite a few grammars also have a more normative character. Missionary grammars usually contain an opening section, a prologue, in which the author sets out the objectives of the work. The first part is usually a short introduction to phonology and orthography, followed by the largest section, which is devoted to morphology, arranged according to the traditional division of the parts of speech. The final section is sometimes devoted to syntax, but the topics included can vary considerably. Sometimes word lists are appended, containing body parts, measures, counting, manners of speaking, or rhetorical figures. In some grammars the data presented are based mainly on an oral corpus, whereas in others high registers from prestigious texts are used in which the eloquence or elegance of the language under study is illustrated. These grammars are modeled on the traditional Greco-Latin framework and often contain invaluable information regarding language typology, semantics, and pragmatics. In the New World, Asia, and elsewhere, missionaries had to find an adequate methodology to describe typological features they had never seen before. They adapted European models to new linguistic realities and created original works which deserve our attention within the discipline of the history of linguistics alongside contemporary pedagogical works written in Europe. This article concentrates on sources written in Spanish, Portuguese, and Latin during the colonial period, since these sources outnumber the production of missionary grammars in other languages.
Indian linguistic thought begins around the 8th–6th centuries BCE.
The greater part of documented thought is related to Sanskrit (Ancient Indo-Aryan). Very early, the oral transmission of sacred texts—the Vedas, composed in Vedic Sanskrit—made it necessary to develop techniques based on a subtle analysis of language. The Vedas also—but presumably later—gave birth to bodies of knowledge dealing with language, which are traditionally called Vedāṅgas: phonetics (śikṣā), metrics (chandas), grammar (vyākaraṇa), and semantic explanation (nirvacana, nirukta). Later on, Vedic exegesis (mīmāṃsā), new dialectics (navya-nyāya), lexicography, and poetics (alaṃkāra) also contributed to linguistic thought.
Though languages other than Sanskrit were described in premodern India, the grammatical description of Sanskrit—given in Sanskrit—dominated and influenced them more or less strongly. Sanskrit grammar (vyākaraṇa) has a long history marked by several major steps (Padapāṭha versions of Vedic texts, Aṣṭādhyāyī of Pāṇini, Mahābhāṣya of Patañjali, Bhartṛhari’s works, Siddhāntakaumudī of Bhaṭṭoji Dīkṣita, Nāgeśa’s works), and the main topics it addresses (minimal meaning-bearer units, classes of words, relation between word and meaning/referent, the primary meaning/referent of nouns) are still central issues for contemporary linguistics.
Computational models of human sentence comprehension help researchers reason about how grammar might actually be used in the understanding process. Taking a cognitivist approach, this article relates computational psycholinguistics to neighboring fields (such as linguistics), surveys important precedents, and catalogs open problems.
Howard Lasnik and Terje Lohndal
Noam Avram Chomsky is one of the central figures of modern linguistics. He was born in Philadelphia, Pennsylvania, on December 7, 1928. In 1945, Chomsky enrolled at the University of Pennsylvania, where he met Zellig Harris (1909–1992), a leading Structuralist, through their shared political interests. His first encounter with Harris’s work came when he proofread Harris’s book Methods in Structural Linguistics, published in 1951 but already completed in 1947. Chomsky grew dissatisfied with Structuralism and started to develop his own major idea that syntax and phonology are in part matters of abstract representations. This was soon combined with a psychobiological view of language as a unique part of the mind/brain.
Chomsky spent 1951–1955 as a Junior Fellow of the Harvard Society of Fellows, after which he joined the faculty at MIT under the sponsorship of Morris Halle. He was promoted to full professor of Foreign Languages and Linguistics in 1961, appointed Ferrari Ward Professor of Linguistics in 1966, and Institute Professor in 1976, retiring in 2002. Chomsky is still remarkably active, publishing, teaching, and lecturing across the world.
In 1967, both the University of Chicago and the University of London awarded him honorary degrees, and since then he has been the recipient of scores of honors and awards. In 1988, he was awarded the Kyoto Prize in Basic Sciences, created in 1984 in order to recognize work in areas not included among the Nobel Prizes. These honors are all a testimony to Chomsky’s influence and impact in linguistics and cognitive science more generally over the past 60 years. His contributions have of course also been heavily criticized, but nevertheless remain crucial to investigations of language.
Chomsky’s work has always centered on the same basic questions and assumptions, especially that human language is an inherent property of the human mind. The technical part of his research has been continuously revised and updated. In the 1960s, phrase structure grammars were developed into what is known as the Standard Theory, which transformed into the Extended Standard Theory and X-bar theory in the 1970s. A major transition occurred at the end of the 1970s, when the Principles and Parameters Theory emerged. This theory provides a new understanding of the human language faculty, focusing on the invariant principles common to all human languages and the points of variation known as parameters. Its recent variant, the Minimalist Program, pushes the approach even further in asking why grammars are structured the way they are.
Traditional Chinese linguistics grew out of two distinct interests in language: the philosophical reflection on things and their names, and the practical concern for literacy education and the correct understanding of classical works. The former is most typically found in the teachings of such pre-Qin masters as Confucius, Mozi, and Gongsun Long, who lived between the 6th and 3rd centuries BCE.
The picture just presented, in which Chinese philosophy and philology are combined to form a seemingly autonomous tradition, is complicated, however, by the fact that the Indic linguistic tradition started to influence the Chinese in the 2nd century CE.
Chinese, with its linguistic tradition, had a profound impact in ancient East Asia. Not only did traditional studies of Japanese, Tangut, and other languages show significant Chinese influence, not least in the invention of the earliest writing systems for these languages, but many scholars from Japan and Korea also took an active part in the study of Chinese itself, so that the Chinese linguistic tradition would be incomplete without the materials and findings these non-Chinese scholars contributed. On the other hand, some of these scholars, most notably Motoori Norinaga and Fujitani Nariakira in Japan, were able to free themselves from the character-centered Chinese routine and develop rather original linguistic theories.