This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Linguistics. Please check back later for the full article.
Despite its apparent formal simplicity, defining conversion as a word-formation technique is by no means a simple matter, even with respect to a single language, let alone languages representing different typological groups or subgroups. The traditional claim that conversion is a derivationally unmarked word-class-changing operation involving formally identical (homonymous) lexical items seems largely justifiable as far as English is concerned, where this operation is exclusively word/lexeme-based (cf. to swap > (a) swap, clear > to clear). However, while this same claim is also true for Hungarian, a Finno-Ugric language (cf. este
Determining the linguistic nature of conversion and its place among other types of word formation is not a simple matter either, and, paradoxically, this is especially so for the most extensively studied case, English conversion. The reasons lie largely in the fact that practically every element of the traditional definition suggested in the previous paragraph has been called into question, giving rise to a diversity of interpretations of conversion not only in English but also in a cross-linguistic perspective. Thus, if conversion is viewed as a kind of derivation, being derivationally unmarked can be taken to mean either the presence of a zero formative or, alternatively, the lack of any overt derivational marking on the converted item (consider, for instance, the English, Hungarian, German, and Old English examples above). Regardless of their long-debated justifiability, these assumptions respectively suggest that conversion should after all be treated either as a kind of derivation, namely zero derivation, or as a self-contained word-formation process distinct from derivation (affixation). In addition, being derivationally unmarked is also viewed in the corresponding literature as the absence of derivation altogether, the suggestion being that during conversion it is only the change in the inflectional paradigm that can signal word-class shift. On this view, so the argument goes, conversion should be seen as an inflectional rather than a derivational process.
The notion of word class itself, and the uncertainties characterizing its understanding, present further challenges to morphologists dealing with conversion. Concretely, it is a widely shared view that only the unmarked change of the entire word class can be recognized as conversion (see the examples above). However, some scholars insist that the change of a subclass or subcategory also qualifies as conversion, albeit partial or non-prototypical (cf. to run
Finally, treatments of conversion that focus on underlying semantic or conceptual motivations further add to the diversity of views of conversion. These treatments draw on the fact that there is a strong semantic link between the input and the output in the sense that normally the meaning of the latter is semantically derived (predictable) from that of the former. It is argued that this semantic link between the pair words of conversion is based on various types of conceptual, predominantly metonymic shifts whereby extralinguistic entities such as actions, instruments, properties, natural kinds, etc., undergo cognitive reanalyses (cf. instrument as action, property as action, action as actor/place) driven by the communicative needs of interlocutors. Consequently, along with the interpretations mentioned in the previous paragraphs, conversion can also be considered a word-formation process motivated by different types of conceptual shifts between formally identical input and output items.
The term coordination refers to the juxtaposition of two or more conjuncts often linked by a conjunction such as and or or. The conjuncts (e.g., our friend and your teacher in Our friend and your teacher sent greetings) may be words or phrases of any type. Conjuncts are a defining property of coordination, while the presence or absence of a conjunction depends on the specifics of the particular language. As a general phenomenon, coordination differs from subordination in that the conjuncts are typically symmetric in many ways: they often belong to like syntactic categories, and if nominal, each carries the same case. Additionally, if there is extraction, this must typically be out of all conjuncts in parallel, a phenomenon known as Across-the-Board extraction. Extraction of a single conjunct, or out of a single conjunct, is prohibited by the Coordinate Structure Constraint. Despite this overall symmetry, coordination does sometimes behave in an asymmetric fashion. Under certain circumstances, the conjuncts may be of unlike categories or extraction may occur out of one conjunct, but not another, thus yielding apparent violations of the Coordinate Structure Constraint. In addition, case and agreement show a wide range of complex and sometimes asymmetric behavior cross-linguistically. This tension between the symmetric and asymmetric properties of coordination is one of the reasons that coordination has remained an interesting analytical puzzle for many decades.
Within the general area of coordination, a number of specific sentence types have generated much interest. One is Gapping, in which two sentences are conjoined, but material (often the verb) is missing from the middle of the second conjunct, as in Mary ate beans and John _ potatoes. Another is Right Node Raising, in which shared material from the right edge of sentential conjuncts is placed in the right periphery of the entire sentence, as in The chefs prepared __ and the customers ate __ [a very elaborately constructed dessert]. Finally, some languages have a phenomenon known as comitative coordination, in which a verb has two arguments, one morphologically plural and the other comitative (e.g., with the preposition with), but the plural argument may be understood as singular. English does not have this phenomenon, but if it did, a sentence like We went to the movies with John could be understood as John and I went to the movies.
Marcel den Dikken and Teresa O’Neill
Copular sentences (sentences of the form A is B) have been prominent on the research agenda for linguists and philosophers of language since classical antiquity, and continue to be shrouded in considerable controversy. Central questions in the linguistic literature on copulas and copular sentences are (a) whether predicational, specificational, identificational, and equative copular sentences have a common underlying source; and, if so, (b) how the various surface types of copular sentences are derived from that underlier; (c) whether there is a typology of copulas; and (d) whether copulas are meaningful or meaningless.
The debate surrounding the postulation of multiple copular sentence types relies on criteria related to both meaning and form. Analyses based on meaning tend to focus on the question of whether or not one of the terms is a predicate of the other, whether or not the copula contributes meaning, and the information-structural properties of the construction. Analyses based on form focus on the flexibility of the linear ordering of the two terms of the construction, the surface distribution of the copular element, the restrictions imposed on the extraction of the two terms, the case and agreement properties of the construction, the omissibility of the copula or one of the two terms, and the connectivity effects exhibited by the construction.
Morphosyntactic variation in the domain of copular elements is an area of research with fruitful intersections between typological and generative approaches. A variety of criteria are presented in the literature to justify the postulation of multiple copulas or underlying representations for copular sentences. Another prolific body of research concerns the semantics of copular sentences. In the assessment of scholarship on copulas and copular sentences, the article critiques the ‘multiple copulas’ approach and examines ways in which the surface variety of copular sentence types can be accounted for in a ‘single copula’ analysis. The analysis of copular constructions continues to have far-reaching consequences in the context of linguistic theory construction, particularly the question of how a predicate combines with its subject in syntactic structure.
Corpus Phonology is an approach to phonology that places corpora at the center of phonological research. Some practitioners of corpus phonology see corpora as the only object of investigation; others use corpora alongside other available techniques (for instance, intuitions, psycholinguistic and neurolinguistic experimentation, laboratory phonology, the study of the acquisition of phonology or of language pathology, etc.). Whatever version of corpus phonology one advocates, corpora have become part and parcel of the modern research environment, and their construction and exploitation have been modified by the multidisciplinary advances made within various fields. Indeed, for the study of spoken usage, the term ‘corpus’ should nowadays only be applied to bodies of data meeting certain technical requirements, even though corpora of spoken usage are by no means new, dating back to the birth of recording techniques. It is therefore essential to understand what criteria must be met by a modern corpus (quality of recordings, diversity of speech situations, ethical guidelines, time-alignment with transcriptions and annotations, etc.) and what tools are available to researchers. Once these requirements are met, the way is open to varying and possibly conflicting uses of spoken corpora by phonological practitioners. A traditional stance in theoretical phonology sees the data as a degenerate version of a more abstract underlying system, but more and more researchers within various frameworks (e.g., usage-based approaches, exemplar models, stochastic Optimality Theory, sociophonetics) are constructing models that tightly bind phonological competence to language use, rely heavily on quantitative information, and attempt to account for intra-speaker and inter-speaker variation. This renders corpora essential to phonological research and not a mere adjunct to the phonological description of the languages of the world.
Creole languages have a curious status in linguistics, and at the same time they often have very low prestige in the societies in which they are spoken. These two facts may be related, in part because they circle around notions such as “derived from” or “simplified” instead of “original.” Rather than simply taking the notion of “creole” as a given and trying to account for its properties and origin, this essay tries to explore the ways scholars have dealt with creoles. This involves, in particular, trying to see whether we can define “creoles” as a meaningful class of languages. There is a canonical list of languages that most specialists would not hesitate to call creoles, but the boundaries of the list and the criteria for being listed are vague. It also becomes difficult to distinguish sharply between pidgins and creoles, and likewise the boundaries between some languages claimed to be creoles and their lexifiers are rather vague.
Several possible criteria to distinguish creoles will be discussed. Simply defining them as languages of which we know the point of birth may be a necessary, but not sufficient, criterion. Displacement is likewise an important criterion, necessary but not sufficient. Mixture is often characteristic of creoles but, it is argued, not crucial. Essential in any case is substantial restructuring of some lexifier language, which may take the form of morphosyntactic simplification, but it is dangerous to assume that simplification always has the same outcome. The combination of these criteria—time of genesis, displacement, mixture, restructuring—contributes to the status of a language as creole, but “creole” is far from a unified notion. There turn out to be several types of creoles, along with a range of creole-like languages, which differ in the way these criteria combine in each case.
The proposal is thus made here to stop viewing creoles as a separate class and instead to treat them as special cases of a general phenomenon: the way languages emerge and are used determines, to a considerable extent, their properties. This calls for a new, socially informed typology of languages, one encompassing all kinds of different language types, including pidgins and creoles.
Cyclicity in syntax constitutes a property of derivations in which syntactic operations apply bottom-up in the production of ever larger constituents. The formulation of a principle of grammar that guarantees cyclicity depends on whether structure is built top-down with phrase structure rules or bottom-up with a transformation Merge. Considerations of minimal and efficient computation motivate the latter, as well as the formulation of the cyclic principle as a No Tampering Condition on structure-building operations (Section 3.3) without any reference to special cyclic domains in which operations apply (as in the formulation of the Strict Cycle Condition (Section 2) and its predecessors (Section 1)) or any reference to extending a phrase marker (the Extension Condition (Section 3)). Ultimately, the empirical effects of a No Tampering Condition on structure building, which conform to strict cyclicity, follow from the formulation of the Merge operation as strictly binary. This leaves as open questions whether displacement (movement) must involve covert intermediate steps (successive cyclic movement) and whether derivations of the two separate interface representations (Phonetic Form and Logical Form) occur in parallel as a single cycle.
William F. Hanks
Deictic expressions, like English ‘this’, ‘that’, ‘here’, and ‘there’, occur in all known human languages. They are typically used to individuate objects in the immediate context in which they are uttered, by pointing at them so as to direct attention to them. The object, or demonstratum, is singled out as a focus, and a successful act of deictic reference is one that results in the Speaker (Spr) and Addressee (Adr) attending to the same referential object. Thus:
(1) A: Oh, there’s that guy again (pointing)
    B: Oh yeah, now I see him (fixing gaze on the guy)

(2) A: I’ll have that one over there (pointing to a dessert on a tray)
    B: This? (touching pastry with tongs)
    A: Yeah, that looks great
    B: Here ya’ go (handing pastry to customer)
In an exchange like (1), A’s utterance spotlights the individual guy, directing B’s attention to him, and B’s response (both verbal and ocular) displays that he has recognized him. In (2), A’s utterance individuates one pastry among several; B’s response makes sure he is attending to the right one; A reconfirms; and B completes the exchange by presenting the pastry to the customer. If we compare the two examples, it is clear that the deictics in these exchanges can pick out or present individuals without describing them. In a similar way, “I, you, he/she, we, now, (back) then,” and their analogues are all used to pick out individuals (persons, objects, or time frames), apparently without describing them. As a corollary of this semantic paucity, individual deictics vary extremely widely in the kinds of object they may properly denote: ‘here’ can denote anything from the tip of your nose to planet Earth, and ‘this’ can denote anything from a pastry to an upcoming day (this Tuesday). In the same circumstances, ‘this’ and ‘that’ can refer appropriately to the same object, depending upon who is speaking, as in (2). How can forms that are so abstract and variable over contexts be so specific and rigid in a given context? On what parameters do deictics and deictic systems in human languages vary, and how do they relate to grammar and semantics more generally?
Dene-Yeniseian is a proposed genealogical link between the widespread North American language family Na-Dene (Athabaskan, Eyak, Tlingit) and Yeniseian in central Siberia, represented today by the critically endangered Ket and several documented extinct relatives. The Dene-Yeniseian hypothesis is an old idea, but since 2006 new evidence supporting it has been published in the form of shared morphological systems and a modest number of lexical cognates showing interlocking sound correspondences. Recent data from human genetics and folklore studies also increasingly indicate the plausibility of a prehistoric (probably Late Pleistocene) connection between populations in northwestern North America and the traditionally Yeniseian-speaking areas of south-central Siberia. At present, however, Dene-Yeniseian cannot be accepted as a proven language family. First, the purported lexical and morphological correspondences between Yeniseian and Na-Dene must be expanded and tested by further critical analysis. Second, their relationship to Old World families such as Sino-Tibetan and Caucasian, as well as to the isolate Burushaski (all earlier proposed as relatives of Yeniseian, and sometimes also of Na-Dene), must become clearer.
Željko Bošković and Troy Messick
Economy considerations have always played an important role in the generative theory of grammar. They are particularly prominent in the most recent instantiation of this approach, the Minimalist Program, which explores the possibility that Universal Grammar is an optimal way of satisfying requirements imposed on the language faculty by the external systems that interface with it, and that the language faculty is characterized by an optimal, computationally efficient design. In this respect, the operations of the computational system that produce linguistic expressions must be optimal in that they must satisfy general considerations of simplicity and efficient design. Simply put, the guiding principles here are (a) do something only if you need to and (b) if you do need to, do it in the most economical/efficient way. These considerations ban superfluous steps in derivations and superfluous symbols in representations. Under economy guidelines, movement takes place only when there is a need for it (with both syntactic and semantic considerations playing a role here), and when it does take place, it takes place in the most economical way: it is as short as possible and carries as little material as possible. Furthermore, economy is evaluated locally, on the basis of immediately available structure. The locality of syntactic dependencies is also enforced by minimal search and by limiting the number of syntactic objects and the amount of structure accessible in the derivation. This is achieved by transferring parts of syntactic structure to the interfaces during the derivation, the transferred parts not being accessible for further syntactic operations.