1–10 of 29 Results for: History of Linguistics

Article

Markku Filppula and Juhani Klemola

Few European languages have in the course of their histories undergone changes as radical as those English went through in the medieval period. The earliest documented variety of the language, Old English (c. 450 to 1100 CE), was a synthetic language, typologically similar to modern German, with its three genders, relatively free word order, rich case system, and verbal morphology. By the beginning of the Middle English period (c. 1100 to 1500), changes that had begun a few centuries earlier in the Old English period had resulted in a remarkable typological shift from a synthetic language to an analytic language with fixed word order, very few inflections, and a heavy reliance on function words. System-internal pressures had a role to play in these changes, but arguably they were primarily due to intensive contacts with other languages, including Celtic languages, (British) Latin, Scandinavian languages, and, a little later, French. As a result, English came to diverge from its Germanic sister languages, losing or reducing such Proto-Germanic features as grammatical gender; most inflections on nouns, adjectives, pronouns, and verbs; verb-second syntax; and certain types of reflexive marking. Among the external influences, long contacts especially with speakers of the Brittonic Celtic languages (i.e., Welsh, Cornish, and Cumbric) can be considered to have been of particular importance. Following the arrival of the Angles, Saxons, and Jutes from around 450 CE onward, there began an intensive and large-scale process of language shift on the part of the indigenous Celtic- and British Latin-speaking population of Britain. It is generally accepted in contact linguistics that in such circumstances—when the contact is intensive and the shifting population large enough—the acquired language (in this case English) undergoes moderate to heavy restructuring of its grammatical system, generally leading to simplification of its morphosyntax. In the history of English, this process was further reinforced by the Viking invasions, which started in the late 8th century CE and brought a large Scandinavian-speaking population to Britain. The resulting contacts between the Anglo-Saxons and the Vikings also contributed to reducing the complexity of Old English morphosyntax. In addition, the Scandinavian settlements of the Danelaw area left a permanent mark in place-names and dialect vocabulary, especially in the eastern and northern parts of the country. In contrast to syntactic influences, which are typical of conditions of language shift, contacts that are less intensive and involve extensive bilingualism generally lead to lexical borrowing. This was the situation following the Norman Conquest of England in 1066, which led to an influx of French loanwords into English, most of which have persisted in use up to the present day. It has been estimated that almost one third of present-day English vocabulary is of French origin. By comparison, there is far less evidence of French influence on “core” English syntax. The earliest loanwords were superimposed by the new French-speaking nobility and pertained to administration, law, military terminology, and religion. Cultural prestige was the prime motivation for the later medieval borrowings.

Article

André Thibault and Nicholas LoVecchio

The Romance languages have been involved in many situations of language contact. While language contact is evident at all levels, the most visible effects on the system of the recipient language concern the lexicon. The relationship between language contact and the lexicon raises some theoretical issues that are not always adequately addressed, including in etymological lexicography. First is the very notion of what constitutes “language contact.” Contrary to a somewhat dated view, language contact does not necessarily imply physical presence, contemporaneity, and orality: as far as the lexicon is concerned, contact can happen over time and space, particularly through written media. Depending on the kind of extralinguistic circumstances at stake, language contact can be induced by diverse factors, leading to different forms of borrowing. The misleading terms “borrowings” or “loans” mask the reality that these are actually adapted imitations—whether formal, semantic, or both—of a foreign model. Likewise, the common Latin or Greek origins of a huge proportion of the Romance lexicon often obscure the real history of words. As these classical languages have contributed numerous technical and scientific terms, as well as a series of “roots,” words coined in one Romance language can easily be reproduced in any other. However, simply reducing a word’s etymology to the origin of its components (classical or otherwise), ignoring intermediate stages and possibly intermediary languages in the borrowing process, is a distortion of word history. To the extent that it is useful to refer to “internationalisms,” related words in different Romance languages merit careful, often arduous research in the process of identifying the actual origin of a given coinage. From a methodological point of view, it is crucial to distinguish between the immediate lending language and the oldest stage that can be identified, with the former being more relevant in a rigorous approach to comparative historical lexicology. Concrete examples from Ibero-Romania, Gallo-Romania, Italo-Romania, and Balkan-Romania highlight the variety of Romance loans and reflect the diverse historical factors particular to each linguistic community in which borrowing occurred.

Article

Béatrice Godart-Wendling

The term “philosophy of language” is intrinsically paradoxical: it denominates the main philosophical current of the 20th century but is devoid of any univocal definition. While the emergence of this current was based on the idea that philosophical questions were only language problems that could be elucidated through logico-linguistic analysis, interest in this approach gave rise to philosophical theories that, although some of them have points of convergence, developed very different philosophical conceptions. The only constant across all these theories is the recognition that this current of thought originated in the work of Gottlob Frege (1848–1925), thus marking what was to be called “the linguistic turn.” Despite the theoretical diversity within the philosophy of language, the history of this current can nevertheless be traced through four stages. The first began in 1892 with Frege’s paper “Über Sinn und Bedeutung” and aimed to clarify language by using the rules of logic. The Fregean principle underpinning this program was that psychological considerations must be banished from linguistic analysis in order to avoid associating the meaning of words with mental pictures or states. The work of Frege, Bertrand Russell (1872–1970), G. E. Moore (1873–1958), Ludwig Wittgenstein (in the Tractatus Logico-Philosophicus, 1921), Rudolf Carnap (1891–1970), and Willard Van Orman Quine (1908–2000) is representative of this period. From this logicist point of view, the questions raised mainly concerned syntax and semantics, since the goal was to define a formalism able to represent the structure of propositions and to explain how language can describe the world by mirroring it. The problem specific to this period was therefore how language represents the world, which placed the notions of reference, meaning, and truth at the heart of the philosophical debate. The second phase of the philosophy of language was adumbrated in the 1930s with the courses given by Wittgenstein (1889–1951) in Cambridge (The Blue and Brown Books), but it did not really take off until the 1950s and 1960s with the work of Peter Strawson (1919–2006), Wittgenstein (now in the Philosophical Investigations, 1953), John Austin (1911–1960), and John Searle (1932–). In spite of the very different approaches developed by these theorists, two main ideas characterized this period: first, that only the examination of natural (also called “ordinary”) language can give access to an understanding of how language functions, and second, that the specificity of this language resides in its ability to perform actions. It was therefore no longer a question of analyzing language in logical terms, but rather of considering it in itself, by examining the meaning of statements as they are used in given contexts. In this perspective, the pivotal concepts explored by philosophers became those of (situated) meaning, felicity conditions, use, and context. The beginning of the 1970s initiated the third phase of this movement by orienting research in two quite distinct directions. The first, resulting from the work on proper names, natural-kind words, and indexicals undertaken by the logician-philosophers Saul Kripke (1940–), David Lewis (1941–2001), Hilary Putnam (1926–2016), and David Kaplan (1933–), brought credibility to the semantics of possible worlds. The second, conducted by Paul Grice (1913–1988) on human communicational rationality, harked back to the psychologism dismissed by Frege and conceived of the functioning of language as highly dependent on a theory of mind. The focus then shifted to the inferences that the protagonists in a linguistic exchange construct from recognizing hidden intentions in the discourse of others. In this perspective, the concepts of implicitness, relevance, and cognitive efficiency became central and required invoking a greater number of contextual parameters to account for them. In the wake of this research, many theorists turned to the philosophy of mind, as evidenced in the late 1980s by the work on relevance by Dan Sperber (1942–) and Deirdre Wilson (1941–). The contemporary period, marked by the thinking of Robert Brandom (1950–) and Charles Travis (1943–), is characterized by its orientation toward radical contextualism and a return to inferentialism that draws strongly on Frege. Within these theoretical frameworks, the notions of truth and reference no longer fall within the field of semantics but rather of pragmatics. The emphasis is placed on the commitment that speakers make when they speak, as well as on their responsibility with respect to their utterances.

Article

An agent noun is a derived noun whose general meaning is ‘person who does . . .’. It is thus characterized by the feature [+ Human], regardless of whether the person involved actually performs an action (e.g., French nageur ‘swimmer’, i.e., ‘a person who swims’), carries out a profession (e.g., Spanish cabrero ‘goatherd’, i.e., ‘a person who looks after goats’), adheres to a certain ideology or group (e.g., Italian femminista ‘feminist’, i.e., ‘a person who supports or follows the feminist movement’), and so on. Agent nouns are for the most part denominal (as with cabrero and femminista above) or deverbal (as with nageur above). Latin denominal agent nouns were mainly formed with -arius, though the Latin agentive suffix par excellence was -tor, which derived nouns from verbs. Latin denominal agents were also formed with -ista, a borrowing from Greek -ιστής. The reflexes of all three suffixes are widespread and highly productive in the Romance languages, as in the case of Portuguese/Spanish/Catalan/Occitan pescador ‘fisherman’ (-dor < -torem), French boucher ‘butcher’ (-er < -arium), and Romanian flautist (-ist < -ista). In any case, the distinction between denominal and deverbal agent nouns is not always straightforward, as demonstrated by the Romance forms connected with the Latin present participle ending -nte: whereas the majority display a verbal base (e.g., Italian cantante ‘singer’ ← cantare ‘to sing’), some do not (e.g., Italian bracciante ‘hired hand’ ← braccio ‘arm’) and can therefore be regarded as denominal derivations. A minor group of agent nouns is made up of deadjectival derivations, often conveying a pejorative meaning; such is the case with Italian elegantone ‘person of overblown elegance’ (← elegante ‘elegant’) and French richard ‘very rich person’ (← riche ‘rich’).

Article

Several attempts have been made to classify the Romance languages. The subgroups thus created can be posited as intermediate entities in diachrony between a mother language and its daughter languages. This diachronic perspective can be structured using a rigid model, such as that of the family tree, or more flexible ones. In general, this perspective yields a bipartite division between Western Romance languages (Ibero-Romance, Gallo-Romance, Alpine- and Cisalpine-Romance) and Eastern Romance languages (Italian and Romanian), or a tripartite split between Sardinian, Romanian, and the other languages. The subgroups can, however, also be considered synchronic groupings based on the analysis of characteristics internal to the varieties. Naturally, the groupings change depending on which features are used and which theoretical model is adopted. Still, this type of approach signals the individuality of French and Romanian with respect to the Romània continua, or contrasts northern and southern Romània, highlighting, on the one hand, the features shared by Gallo-Romance and Gallo-Italian and, on the other, those common to Ibero-Romance, southern Italian, and Sardinian. The task of classifying the Romance languages raises thorny issues, such as distinguishing between synchrony and diachrony, language and dialect, and monothetic and polythetic classification. Moreover, ideological and political matters often complicate the question of classification. Many problems remain as yet unresolved, and they will probably prove unresolvable.

Article

David Fertig

Analogy is traditionally regarded as one of the three main factors responsible for language change, along with sound change and borrowing. Whereas sound change is understood to be phonetically motivated and blind to structural patterns and semantic and functional relationships, analogy is licensed precisely by those patterns and relationships. In the Neogrammarian tradition, analogical change is regarded, at least largely, as a by-product of the normal operation (acquisition, representation, and use) of the mental grammar. Historical linguists commonly use proportional equations of the form A : B = C : X to represent analogical innovations, where A, B, and C are (sets of) word forms known to the innovator, who solves for X by discerning a formal relationship between A and B and then deductively arriving at a form that is related to C in the same way that B is related to A. Along with the core type of analogical change captured by proportional equations, most historical linguists include a number of other phenomena under the analogy umbrella. Some of these, such as paradigm leveling—the reduction or elimination of stem alternations in paradigms—are arguably largely proportional, but others, such as contamination and folk etymology, seem to have less to do with the normal operation of the mental grammar and instead involve some kind of interference among the mental representations of phonetically or semantically similar forms. The Neogrammarian approach to analogical change has been criticized and challenged on a variety of grounds, and a number of important scholars use the term “analogy” in a rather different sense, to refer to the role that phonological and/or semantic similarity plays in the influence that forms exert on each other.
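As a worked illustration of the proportional schema, the following instance (a standard textbook case, added here for concreteness rather than taken from the abstract itself) shows how the innovative English past tense dove can be derived on the model of drive/drove:

\[
\textit{drive} : \textit{drove} \;=\; \textit{dive} : X \quad\Longrightarrow\quad X = \textit{dove}
\]

A speaker who knows the pair drive/drove and the base form dive discerns the formal relationship between drive and drove and deduces a form that stands to dive in the same relation, yielding dove, which in many varieties of English competes with the older past tense dived.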

Article

Morphology, understood as the internal structure of words, has always figured prominently in linguistic typology; indeed, the advent of typology is often associated with the morphological classification of languages into “fusional,” “agglutinating,” and “isolating” proposed by the linguists and philosophers of the early 19th century. Since then, however, typology has shifted its interests away from classifying languages into idealized “types” and toward mapping the individual parameters of cross-linguistic diversity, looking for correlations between them, and pursuing syntactically and semantically centered inquiries. Since the second half of the 20th century, morphology has been viewed as just one possible type of expression of meaning or syntactic function, often too idiosyncratic to yield any interesting cross-linguistic, let alone universal, generalizations. Such notions as “flexive” or “agglutinating” have proven to be ill-defined and in need of revision in terms of more primitive, logically independent, and empirically uncorrelated parameters. Moreover, well-founded doubts have been cast on such basic notions as “word,” “affix,” and the like, which have notoriously resisted adequate cross-linguistically applicable definitions, and the same has been the fate of still-popular concepts like “inflection” and “derivation.” On the other hand, most theoretically oriented work on morphology, concerned with both individual languages and cross-linguistic comparison, has largely abandoned the traditional morpheme-based approaches of the American structuralists of the first half of the 20th century, shifting its attention to paradigmatic relations between morphologically relevant units, which may themselves be larger than traditional words. These developments call for a reassessment of the basic notions and analytic approaches of morphological typology. Instead of sticking to crude and possibly misleading notions such as “word” or “derivation,” it is necessary to carefully define more primitive and empirically better-grounded notions and parameters of cross-linguistic variation in the domains of both syntagmatics and paradigmatics, to plot the space of possibilities defined by these parameters, and to seek possible correlations between them, as well as explanations of these correlations or of their absence.

Article

The morpheme was the central notion in morphological theorizing in the 20th century. It has a very intuitive appeal as the indivisible and invariant unit of form and meaning, a minimal linguistic sign. Ideally, that would be all there is to build words and sentences from. But this ideal does not appear to be entirely adequate: at least on a perhaps superficial understanding of form as a string of phonemes, and of meaning as concepts and morphosyntactic feature sets, the form side and the meaning side of words are often not structured isomorphically. Different analytical reactions are possible to the empirical challenges resulting from the various kinds of non-isomorphism between form and meaning. One prominent option is to reject the morpheme and to recognize conceptually larger units, such as the word or the lexeme and its paradigm, as the operands of morphological theory. This contrasts with various theoretical options that maintain the morpheme, terminologically or at least conceptually, at some level. One such option is to maintain the morpheme as a minimal unit of form, relaxing the tension imposed by the meaning requirement. Another option is to maintain it as a minimal morphosyntactic unit, relaxing the requirements on the form side. The latter (and to a lesser extent also the former) has been understood in a variety of profoundly different ways: association of one morpheme with several form variants, association of a morpheme with non-self-sufficient phonological units, or association of a morpheme with a formal process distinct from affixation. Variants of all of these possibilities have been entertained and have established distinct schools of thought. The overall architecture of the grammar, in particular the way that morphology integrates with syntax and phonology, has become a driving force in the debate. If there are morpheme-sized units, are they pre-syntactic or post-syntactic units? Is the association between meaning and phonological information pre-syntactic or post-syntactic? Do morpheme-sized pieces have a specific status in the syntax? Invoking some of the main issues involved, this article draws a profile of the debate, following the term “morpheme” on a broadly chronological path from the late 19th century to the 21st.

Article

Ever since the fundamental studies carried out by the great German Romanist Max Leopold Wagner (b. 1880–d. 1962), the acknowledged founder of scientific research on Sardinian, the lexicon has been, and remains, one of the most investigated and best-known areas of the Sardinian language. Several substrate components stand out in the Sardinian lexicon, layered around a fundamental stock with a clear Latin lexical background. The so-called Paleo-Sardinian layer is particularly intriguing; this is a conventional label for the linguistic varieties spoken in Sardinia in the prehistoric and protohistoric ages. Indeed, the relatively large number of words (toponyms in particular) that can be traced back to this substrate clearly distinguishes the Sardinian lexicon within the panorama of the Romance languages. As for the other Pre-Latin substrata, the Phoenician-Punic presence mainly (although not exclusively) affected southern and western Sardinia, where we find the highest concentration of Phoenician-Punic loanwords. On the other hand, recent studies have shown that the Latinization of Sardinia was more complex than once thought. In particular, the alleged archaic nature of some features of Sardinian has been questioned. Moreover, research carried out in recent decades has underlined the importance of the Byzantine Greek superstrate, which has in fact left far more evident lexical traces than previously thought. Finally, from the late Middle Ages onward, the contributions of the early Italian, Catalan, and Spanish superstrates, as well as of modern and contemporary Italian, have substantially reshaped the present-day profile of the Sardinian lexicon. In these cases too, more recent research has shown a deeper impact of these components on the Sardinian lexicon, especially as regards the influence of Italian.

Article

Early Modern interest in language was intense and many-sided. In this period, language education gradually ceased to center solely on Latin. The linguistic scope widened considerably, partly as a result of scholarly curiosity, although religious and missionary zeal, commercial considerations, and political motives were also of decisive significance. Statesmen discovered the political power of standardized vernaculars in the typically Early Modern process of state formation. The widening of the linguistic horizon was, first and foremost, reflected in a steadily increasing production of grammars and dictionaries, along with pocket textbooks, conversational manuals, and spelling treatises. One strategy for coping with the stunning linguistic diversity consisted of first collecting data on as many languages as possible and then tracing elements that were common to all languages or to certain groups of them. Language comparison was not limited to historical and genealogical endeavors, as scholars also began to compare a number of languages in terms of their alleged vices and qualities. Another way of dealing with the flood of linguistic data consisted of focusing on what the different languages had in common, which led to the development of general grammars, of which the 17th-century Port-Royal grammar is the best known. During the Enlightenment, the nature of language and its cognitive merits or vices also became a central theme in philosophical debates in which major thinkers were actively engaged.