Alan Reed Libert
Artificial languages—languages which have been consciously designed—have been created for more than 900 years, although the number of them has increased considerably in recent decades, and by the early 21st century the total was probably in the thousands. There have been several goals behind their creation; the traditional one (which applies to some of the best-known artificial languages, including Esperanto) is to make international communication easier. Some other well-known artificial languages, such as Klingon, have been designed in connection with works of fiction. Still others are simply personal projects.
A traditional way of classifying artificial languages involves the extent to which they make use of material from natural languages. Those artificial languages which are created mainly by taking material from one or more natural languages are called a posteriori languages (which again include well-known languages such as Esperanto), while those which do not use natural languages as sources are a priori languages (although many a posteriori languages have a limited amount of a priori material, and some a priori languages have a small number of a posteriori components). Between these two extremes are the mixed languages, which have large amounts of both a priori and a posteriori material. Artificial languages can also be classified typologically (as natural languages are) and by how and how much they have been used.
Many linguists seem to be biased against research on artificial languages, although some major linguists of the past have been interested in them.
Ans van Kemenade
The status of English in the early 21st century makes it hard to imagine that the language started out as an assortment of North Sea Germanic dialects spoken only in parts of England by immigrants from the continent. Itself soon under threat, first from the language(s) spoken by Viking invaders, then from French as spoken by the Norman conquerors, English continued to thrive as an essentially West-Germanic language that did, however, undergo some profound changes resulting from contact with Scandinavian and French. A further decisive period of change was the late Middle Ages, which saw a tremendous societal scaling-up that triggered pervasive multilingualism. These repeated layers of contact between different populations, first locally, then nationally, followed by standardization and 18th-century codification, metamorphosed English into a language closely related to, yet quite distinct from, its closest relatives Dutch and German in nearly all language domains, not least in word order, grammar, and pronunciation.
Traditional Chinese linguistics grew out of two distinct interests in language: the philosophical reflection on things and their names, and the practical concern for literacy education and the correct understanding of classical works. The former is most typically found in the teachings of such pre-Qin masters as Confucius, Mozi, and Gongsun Long, who lived between the 6th and 3rd centuries BCE.
The picture just presented, in which Chinese philosophy and philology are combined to form a seemingly autonomous tradition, is complicated, however, by the fact that the Indic linguistic tradition started to influence the Chinese tradition in the 2nd century CE.
Chinese, with its linguistic tradition, had a profound impact in ancient East Asia. Not only do traditional studies of Japanese, Tangut, and other languages show significant Chinese influence, not least in the invention of the earliest writing systems for those languages, but many scholars from Japan and Korea also took an active part in the study of Chinese itself, so that the Chinese linguistic tradition would be incomplete without the materials and findings these non-Chinese scholars contributed. On the other hand, some of these scholars, most notably Motoori Norinaga and Fujitani Nariakira in Japan, were able to free themselves from the character-centered Chinese approach and develop rather original linguistic theories.