Computational semantics performs automatic meaning analysis of natural language. Research in computational semantics designs meaning representations and develops mechanisms for automatically assigning those representations and reasoning over them. Computational semantics is not a single monolithic task but consists of many subtasks, including word sense disambiguation, multi-word expression analysis, semantic role labeling, the construction of sentence semantic structure, coreference resolution, and the automatic induction of semantic information from data.
The development of manually constructed resources has been vastly important in driving the field forward. Examples include WordNet, PropBank, FrameNet, VerbNet, and TimeBank. These resources specify the linguistic structures to be targeted in automatic analysis, and they provide high-quality human-generated data that can be used to train machine learning systems. Supervised machine learning based on manually constructed resources is a widely used technique.
A second core strand has been the induction of lexical knowledge from text data. For example, words can be represented through the contexts in which they appear (called distributional vectors or embeddings), such that semantically similar words have similar representations. Or semantic relations between words can be inferred from patterns of words that link them. Wide-coverage semantic analysis always needs more data, both lexical knowledge and world knowledge, and automatic induction at least alleviates the problem.
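The distributional idea can be made concrete with a small sketch: represent each word by counts of the words that co-occur with it in a context window, then compare words by cosine similarity. The toy corpus and window size are illustrative assumptions, not data from any actual system; real embeddings are induced from very large text collections.

```python
from collections import Counter
from math import sqrt

# Toy corpus; in practice distributional vectors are induced from large corpora.
corpus = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the cat ate the fish",
    "the dog ate the bone",
]

def context_vectors(sentences, window=2):
    """Represent each word by counts of words co-occurring within `window`."""
    vectors = {}
    for sentence in sentences:
        tokens = sentence.split()
        for i, word in enumerate(tokens):
            vec = vectors.setdefault(word, Counter())
            for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
                if j != i:
                    vec[tokens[j]] += 1
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[w] * v[w] for w in u if w in v)
    norm = sqrt(sum(c * c for c in u.values())) * sqrt(sum(c * c for c in v.values()))
    return dot / norm if norm else 0.0

vecs = context_vectors(corpus)
# 'cat' and 'dog' share contexts ('chased', 'ate') and so come out more
# similar to each other than either is to 'fish' or 'mouse'.
```

Even on four sentences, the semantically similar words 'cat' and 'dog' receive nearly identical vectors because they occur in the same verbal contexts, which is the intuition behind the distributional approach.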
Compositionality is a third core theme: the systematic construction of structural meaning representations of larger expressions from the meaning representations of their parts. The representations typically use logics of varying expressivity, which makes them well suited to performing automatic inferences with theorem provers.
Manual specification and automatic acquisition of knowledge are closely intertwined. Manually created resources are automatically extended or merged. The automatic induction of semantic information is guided and constrained by manually specified information, which is much more reliable. And for restricted domains, the construction of logical representations is learned from data.
It is at the intersection of manual specification and machine learning that some of the current larger questions of computational semantics are located. For instance, should we build general-purpose semantic representations, or is lexical knowledge simply too domain-specific, and would we be better off learning task-specific representations every time? When performing inference, is it more beneficial to have the solid ground of a human-generated ontology, or is it better to reason directly with text snippets for more fine-grained and gradual inference? Do we obtain a better and deeper semantic analysis as we use better and deeper manually specified linguistic knowledge, or is the future in powerful learning paradigms that learn to carry out an entire task from natural language input and output alone, without pre-specified linguistic knowledge?
The Word and Paradigm approach to morphology associates lexemes with tables of surface forms for different morphosyntactic property sets. Researchers express their realizational theories, which show how to derive these surface forms, using formalisms such as Network Morphology and Paradigm Function Morphology. The tables of surface forms also lend themselves to a study of the implicative theories, which infer the realizations in some cells of the inflectional system from the realizations of other cells.
There is an art to building realizational theories. First, the theories should be correct, that is, they should generate the right surface forms. Second, they should be elegant, which is much harder to capture, but includes the desiderata of simplicity and expressiveness. Without software to test a realizational theory, it is easy to sacrifice correctness for elegance. Therefore, software that takes a realizational theory and generates surface forms is an essential part of any theorist’s toolbox.
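The kind of software described here can be sketched in miniature: realization rules map a stem plus a morphosyntactic property set to a surface form, and a checker compares the generated forms against attested ones. The toy Latin first-declension fragment and the rule format are illustrative assumptions, not a rendering of Network Morphology or Paradigm Function Morphology themselves.

```python
# A minimal sketch of testing a realizational fragment: each rule realizes
# one cell of the paradigm. Toy Latin first-declension data for illustration.
RULES = {
    ("nom", "sg"): lambda stem: stem + "a",
    ("acc", "sg"): lambda stem: stem + "am",
    ("nom", "pl"): lambda stem: stem + "ae",
    ("acc", "pl"): lambda stem: stem + "as",
}

def realize(stem, properties):
    """Apply the realization rule for a morphosyntactic property set."""
    return RULES[properties](stem)

def check_theory(stem, expected):
    """Test the fragment for correctness: does it generate the attested forms?"""
    return {props: realize(stem, props) == form for props, form in expected.items()}

attested = {("nom", "sg"): "puella", ("acc", "sg"): "puellam",
            ("nom", "pl"): "puellae", ("acc", "pl"): "puellas"}
results = check_theory("puell", attested)
# Every cell should check out for this fragment; a failed cell would flag
# a place where elegance has been bought at the price of correctness.
```

A checker of this shape makes the correctness criterion mechanical, leaving the theorist free to argue about elegance.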
Discovering implicative rules that connect the cells in an inflectional system is often quite difficult. Some rules are immediately apparent, but others can be subtle. Software that automatically analyzes an entire table of surface forms for many lexemes can help automate the discovery process.
Researchers can use Web-based computerized tools to test their realizational theories and to discover implicative rules.
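The discovery process for implicative rules can likewise be sketched: given a table of endings for many lexemes across paradigm cells, search for exceptionless implications of the form "ending X in cell A implies ending Y in cell B." The table below is hypothetical data for illustration; real tools work over full surface forms and much larger lexicons.

```python
from itertools import permutations

# Hypothetical table: endings for several lexemes across three paradigm cells.
table = {
    "lex1": {"nom_sg": "a",  "gen_sg": "ae", "nom_pl": "ae"},
    "lex2": {"nom_sg": "a",  "gen_sg": "ae", "nom_pl": "ae"},
    "lex3": {"nom_sg": "us", "gen_sg": "i",  "nom_pl": "i"},
    "lex4": {"nom_sg": "us", "gen_sg": "i",  "nom_pl": "i"},
}

def implicative_rules(table):
    """Find exceptionless rules: ending X in cell A implies ending Y in cell B."""
    cells = next(iter(table.values())).keys()
    rules = []
    for a, b in permutations(cells, 2):
        mapping = {}
        consistent = True
        for forms in table.values():
            # setdefault records the first B-ending seen for this A-ending;
            # any later mismatch means the implication has an exception.
            if mapping.setdefault(forms[a], forms[b]) != forms[b]:
                consistent = False
                break
        if consistent:
            rules.append((a, b, mapping))
    return rules

rules = implicative_rules(table)
# For this table every ordered pair of cells yields an exceptionless rule,
# e.g. nom_sg in -a implies gen_sg in -ae.
```

Scaled up, a search of this kind surfaces the subtle implications that are hard to spot by eye in a large table of surface forms.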
Connectionism is an important theoretical framework for the study of human cognition and behavior. Also known as Parallel Distributed Processing (PDP) or Artificial Neural Networks (ANN), connectionism advocates that learning, representation, and processing of information in the mind are parallel, distributed, and interactive in nature. It argues for the emergence of human cognition as the outcome of large networks of interactive processing units operating simultaneously. Inspired by findings from neural science and artificial intelligence, connectionism is a powerful computational tool, and it has had a profound impact on many areas of research, including linguistics. Since the beginning of connectionism, many connectionist models have been developed to account for a wide range of important linguistic phenomena observed in monolingual research, such as speech perception, speech production, semantic representation, and early lexical development in children. Recently, the application of connectionism to bilingual research has also gathered momentum. Connectionist models are often precise in the specification of modeling parameters and flexible in the manipulation of relevant variables to address theoretical questions; they can therefore provide significant advantages in testing mechanisms underlying language processes.
Construction Morphology is a theory of word structure in which the complex words of a language are analyzed as constructions, that is, systematic pairings of form and meaning. These pairings are analyzed within a Tripartite Parallel Architecture conception of grammar. This presupposes a word-based approach to the analysis of morphological structure and a strong dependence on paradigmatic relations between words. The lexicon contains both words and the constructional schemas they are instantiations of. Words and schemas are organized in a hierarchical network, with intermediate layers of subschemas. These schemas have a motivating function with respect to existing complex words and specify how new complex words can be formed.
The consequence of this view of morphology is that there is no sharp boundary between lexicon and grammar. In addition, the use of morphological patterns may also depend on specific syntactic constructions (construction-dependent morphology).
This theory of lexical relatedness also provides insight into language change such as the use of obsolete case markers as markers of specific constructions, the change of words into affixes, and the debonding of word constituents into independent words. Studies of language acquisition and word processing confirm this view of the lexicon and the nature of lexical knowledge.
Construction Morphology is also well equipped for dealing with inflection and the relationships between the cells of inflectional paradigms, because it can express how morphological schemas are related paradigmatically.
Daniel Currie Hall
The fundamental idea underlying the use of distinctive features in phonology is the proposition that the same phonetic properties that distinguish one phoneme from another also play a crucial role in accounting for phonological patterns. Phonological rules and constraints apply to natural classes of segments, expressed in terms of features, and involve mechanisms, such as spreading or agreement, that copy distinctive features from one segment to another.
Contrastive specification builds on this by taking seriously the idea that phonological features are distinctive features. Many phonological patterns appear to be sensitive only to properties that crucially distinguish one phoneme from another, ignoring the same properties when they are redundant or predictable. For example, processes of voicing assimilation in many languages apply only to the class of obstruents, where voicing distinguishes phonemic pairs such as /t/ and /d/, and ignore sonorant consonants and vowels, which are predictably voiced. In theories of contrastive specification, features that do not serve to mark phonemic contrasts (such as [+voice] on sonorants) are omitted from underlying representations. Their phonological inertness thus follows straightforwardly from the fact that they are not present in the phonological system at the point at which the pattern applies, though the redundant features may subsequently be filled in either before or during phonetic implementation.
In order to implement a theory of contrastive specification, it is necessary to have a means of determining which features are contrastive (and should thus be specified) and which ones are redundant (and should thus be omitted). A traditional and intuitive method involves looking for minimal pairs of phonemes: if [±voice] is the only property that can distinguish /t/ from /d/, then it must be specified on them. This approach, however, often identifies too few contrastive features to distinguish the phonemes of an inventory, particularly when the phonetic space is sparsely populated. For example, in the common three-vowel inventory /i a u/, there is more than one property that could distinguish any two vowels: /i/ differs from /a/ in both place (front versus back or central) and height (high versus low), /a/ from /u/ in both height and rounding, and /u/ from /i/ in both rounding and place.
Because pairwise comparison cannot identify any features as contrastive in such cases, much recent work in contrastive specification is instead based on a hierarchical sequencing of features, with specifications assigned by dividing the full inventory into successively smaller subsets. For example, if the inventory /i a u/ is first divided according to height, then /a/ is fully distinguished from the other two vowels by virtue of being low, and the second feature, either place or rounding, is contrastive only on the high vowels. Unlike pairwise comparison, this approach produces specifications that fully distinguish the members of the underlying inventory, while at the same time allowing for the possibility of cross-linguistic variation in the specifications assigned to similar inventories.
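The successive-division procedure described here lends itself to a small sketch: apply the features in a fixed order, and specify a feature only on a subinventory that it actually splits. The feature names, values, and the /i a u/ inventory follow the example in the text; the code itself is an illustrative assumption about how such an algorithm might be implemented, not a rendering of any particular published formalization.

```python
# Contrastive specification by successive division: features are applied in a
# fixed hierarchy, and a feature is specified only where it splits the set.
# Full (redundant) phonetic values for the toy inventory /i a u/:
PHONETICS = {
    "i": {"low": "-", "round": "-"},
    "a": {"low": "+", "round": "-"},
    "u": {"low": "-", "round": "+"},
}

def divide(inventory, hierarchy):
    """Assign contrastive specifications by dividing the inventory into
    successively smaller subsets; vacuous splits assign nothing."""
    specs = {seg: {} for seg in inventory}

    def split(segs, features):
        if len(segs) <= 1 or not features:
            return
        feat, rest = features[0], features[1:]
        groups = {}
        for seg in segs:
            groups.setdefault(PHONETICS[seg][feat], []).append(seg)
        if len(groups) > 1:  # the feature is contrastive on this subset
            for subset in groups.values():
                for seg in subset:
                    specs[seg][feat] = PHONETICS[seg][feat]
                split(subset, rest)
        else:
            split(segs, rest)  # vacuous split: move on to the next feature

    split(list(inventory), hierarchy)
    return specs

specs = divide("iau", ["low", "round"])
# With [low] ordered first, /a/ is specified only as [+low], and [round]
# is contrastive only on the high vowels /i/ and /u/.
```

Reordering the hierarchy (e.g., applying [round] before [low]) yields different specifications for the same inventory, which is how this approach models cross-linguistic variation.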
Conversational implicatures (i) are implied by the speaker in making an utterance; (ii) are part of the content of the utterance, but (iii) do not contribute to direct (or explicit) utterance content; and (iv) are not encoded by the linguistic meaning of what has been uttered. In (1), Amelia asserts that she is on a diet, and implicates something different: that she is not having cake.
(1) Benjamin: Are you having some of this chocolate cake?
    Amelia: I’m on a diet.
Conversational implicatures are a subset of the implications of an utterance: namely those that are part of utterance content. Within the class of conversational implicatures, there are distinctions between particularized and generalized implicatures; implicated premises and implicated conclusions; and weak and strong implicatures.
An obvious question is how implicatures are possible: how can a speaker intentionally imply something that is not part of the linguistic meaning of the phrase she utters, and how can her addressee recover that utterance content? Working out what has been implicated is not a matter of deduction, but of inference to the best explanation. What is to be explained is why the speaker has uttered the words that she did, in the way and in the circumstances that she did.
Grice proposed that rational talk exchanges are cooperative and are therefore governed by a Cooperative Principle (CP) and conversational maxims: hearers can reasonably assume that rational speakers will attempt to cooperate and that rational cooperative speakers will try to make their contribution truthful, informative, relevant and clear, inter alia, and these expectations therefore guide the interpretation of utterances. On his view, since addressees can infer implicatures, speakers can take advantage of their ability, conveying implicatures by exploiting the maxims.
Grice’s theory aimed to show how implicatures could in principle arise. In contrast, work in linguistic pragmatics has attempted to model their actual derivation. Given the need for a cognitively tractable decision procedure, both the neo-Gricean school and work on communication in relevance theory propose a system with fewer principles than Grice’s. Neo-Gricean work attempts to reduce Grice’s array of maxims to just two (Horn) or three (Levinson), while Sperber and Wilson’s relevance theory rejects maxims and the CP and proposes that pragmatic inference hinges on a single communicative principle of relevance.
Conversational implicatures typically have a number of interesting properties, including calculability, cancelability, nondetachability, and indeterminacy. These properties can be used to investigate whether a putative implicature is correctly identified as such, although none of them provides a fail-safe test. A further test, embedding, has also been prominent in work on implicatures.
A number of phenomena that Grice treated as implicatures would now be treated by many as pragmatic enrichment contributing to the proposition expressed. But Grice’s postulation of implicatures was a crucial advance, both for its theoretical unification of apparently diverse types of utterance content and for the attention it drew to pragmatic inference and the division of labor between linguistic semantics and pragmatics in theorizing about verbal communication.
Conversation analysis is an approach to the study of social interaction and talk-in-interaction that, although rooted in the sociological study of everyday life, has exerted significant influence across the humanities and social sciences including linguistics. Drawing on recordings (both audio and video) of naturalistic interaction (unscripted, non-elicited, etc.), conversation analysts attempt to describe the stable practices and underlying normative organizations of interaction by moving back and forth between the close study of singular instances and the analysis of patterns exhibited across collections of cases. Four important domains of research within conversation analysis are turn-taking, repair, action formation and ascription, and action sequencing.
Despite its apparent formal simplicity, to define conversion as a word-formation technique is by no means a simple matter, even in respect of one language, let alone languages representing different typological groups or subgroups. The traditional claim that conversion is a derivationally unmarked word-class changing operation involving formally identical (homonymous) lexical items seems largely justifiable so far as English is concerned where this operation is exclusively word/lexeme-based (cf. to swap > (a) swap, clear > to clear). However, while this same claim is also true for Hungarian, a Finno-Ugric language (cf. este
To determine the linguistic nature of conversion and its place among other types of word formation is not a simple matter either, and, paradoxically, it is especially so in the case of the most extensively studied English conversion. The reasons for this to a great extent lie in the fact that practically each element of the traditional definition suggested in the previous paragraph has been called into question, giving rise to a diversity of interpretations of conversion not only in English, but also in a cross-linguistic perspective. Thus, if conversion is viewed as a kind of derivation, the assumptions can be made that being derivationally unmarked means either the presence of a zero formative, or, alternatively, the lack of any overt derivational marking on the converted item (consider for instance the English, Hungarian, German, and Old English examples above). Regardless of their long-debated justifiability, what these assumptions respectively suggest is that conversion after all should be treated either as a kind of derivation, namely zero derivation, or as a self-contained word-formation process different from derivation (affixation). In addition, being derivationally unmarked is also viewed in the corresponding literature as the absence of derivation altogether; the suggestion is then made that during conversion it is in effect only the change in the inflectional paradigm that signals word-class shift. Because of this, so the argument goes, conversion should be seen as an inflectional and not as a derivational process.
The notion of word class itself and the uncertainties characterizing its understanding present further challenges to morphologists dealing with conversion. Concretely, it is a widely shared view that only the unmarked change of the entire word class can be recognized as conversion (see the examples above). However, there are opinions that insist that the change of a subclass or subcategory also qualifies as conversion, albeit partial or non-prototypical (cf. to run
Finally, treatments of conversion that focus on underlying semantic or conceptual motivations further add to the diversity of views of conversion. These treatments draw on the fact that there is a strong semantic link between the input and the output in the sense that normally the meaning of the latter is semantically derived (predictable) from that of the former. It is argued that this semantic link between the pair words of conversion is based on various types of conceptual, predominantly metonymic shifts whereby extralinguistic entities such as actions, instruments, properties, natural kinds, etc., undergo cognitive reanalyses (cf. instrument as action, property as action, action as actor/place) driven by the communicative needs of interlocutors. Consequently, along with the interpretations mentioned in the previous paragraphs, conversion can also be considered a word-formation process motivated by different types of conceptual shifts between formally identical input and output items.
Compounds are generally divided into those that involve a dependency (subordinate and attributive) relation of one constituent upon the other and those in which the constituents are coordinated, though there is much controversy about where the exact borders lie. This article offers an overview of compounds of the second type, for which the term ‘coordinative’ is adopted as the most general and neutral of the many terms that have been proposed in the literature. It attempts to provide a definition on the basis of structural and semantic criteria, describes the major features of coordinative compounds, and discusses crucial issues that play a significant role in their formation and meaning, such as headedness, the order of constituents, and compositionality. Since languages vary with respect to the frequency and types of coordinative compounds, and it remains unclear how these constructions are distributed and used cross-linguistically, the article offers a classification with extensive exemplification from genetically and typologically diverse languages.
The term coordination refers to the juxtaposition of two or more conjuncts often linked by a conjunction such as and or or. The conjuncts (e.g., our friend and your teacher in Our friend and your teacher sent greetings) may be words or phrases of any type. They are a defining property of coordination, while the presence or absence of a conjunction depends on the specifics of the particular language. As a general phenomenon, coordination differs from subordination in that the conjuncts are typically symmetric in many ways: they often belong to like syntactic categories, and if nominal, each carries the same case. Additionally, if there is extraction, this must typically be out of all conjuncts in parallel, a phenomenon known as Across-the-Board extraction. Extraction of a single conjunct, or out of a single conjunct, is prohibited by the Coordinate Structure Constraint. Despite this overall symmetry, coordination does sometimes behave in an asymmetric fashion. Under certain circumstances, the conjuncts may be of unlike categories or extraction may occur out of one conjunct, but not another, thus yielding apparent violations of the Coordinate Structure Constraint. In addition, case and agreement show a wide range of complex and sometimes asymmetric behavior cross-linguistically. This tension between the symmetric and asymmetric properties of coordination is one of the reasons that coordination has remained an interesting analytical puzzle for many decades.
Within the general area of coordination, a number of specific sentence types have generated much interest. One is Gapping, in which two sentences are conjoined, but material (often the verb) is missing from the middle of the second conjunct, as in Mary ate beans and John _ potatoes. Another is Right Node Raising, in which shared material from the right edge of sentential conjuncts is placed in the right periphery of the entire sentence, as in The chefs prepared __ and the customers ate __ [a very elaborately constructed dessert]. Finally, some languages have a phenomenon known as comitative coordination, in which a verb has two arguments, one morphologically plural and the other comitative (e.g., with the preposition with), but the plural argument may be understood as singular. English does not have this phenomenon, but if it did, a sentence like We went to the movies with John could be understood as John and I went to the movies.
Marcel den Dikken and Teresa O’Neill
Copular sentences (sentences of the form A is B) have been prominent on the research agenda for linguists and philosophers of language since classical antiquity, and continue to be shrouded in considerable controversy. Central questions in the linguistic literature on copulas and copular sentences are (a) whether predicational, specificational, identificational, and equative copular sentences have a common underlying source; and, if so, (b) how the various surface types of copular sentences are derived from that underlier; (c) whether there is a typology of copulas; and (d) whether copulas are meaningful or meaningless.
The debate surrounding the postulation of multiple copular sentence types relies on criteria related to both meaning and form. Analyses based on meaning tend to focus on the question of whether or not one of the terms is a predicate of the other, whether or not the copula contributes meaning, and the information-structural properties of the construction. Analyses based on form focus on the flexibility of the linear ordering of the two terms of the construction, the surface distribution of the copular element, the restrictions imposed on the extraction of the two terms, the case and agreement properties of the construction, the omissibility of the copula or one of the two terms, and the connectivity effects exhibited by the construction.
Morphosyntactic variation in the domain of copular elements is an area of research with fruitful intersections between typological and generative approaches. A variety of criteria are presented in the literature to justify the postulation of multiple copulas or underlying representations for copular sentences. Another prolific body of research concerns the semantics of copular sentences. In the assessment of scholarship on copulas and copular sentences, the article critiques the ‘multiple copulas’ approach and examines ways in which the surface variety of copular sentence types can be accounted for in a ‘single copula’ analysis. The analysis of copular constructions continues to have far-reaching consequences in the context of linguistic theory construction, particularly the question of how a predicate combines with its subject in syntactic structure.
Corpus Phonology is an approach to phonology that places corpora at the center of phonological research. Some practitioners of corpus phonology see corpora as the only object of investigation; others use corpora alongside other available techniques (for instance, intuitions, psycholinguistic and neurolinguistic experimentation, laboratory phonology, the study of the acquisition of phonology or of language pathology, etc.). Whatever version of corpus phonology one advocates, corpora have become part and parcel of the modern research environment, and their construction and exploitation have been modified by the multidisciplinary advances made within various fields. Indeed, for the study of spoken usage, the term ‘corpus’ should nowadays only be applied to bodies of data meeting certain technical requirements, even though corpora of spoken usage are by no means new, their origins coinciding with the birth of recording techniques. It is therefore essential to understand what criteria must be met by a modern corpus (quality of recordings, diversity of speech situations, ethical guidelines, time-alignment with transcriptions and annotations, etc.) and what tools are available to researchers. Once these requirements are met, the way is open to varying and possibly conflicting uses of spoken corpora by phonological practitioners. A traditional stance in theoretical phonology sees the data as a degenerate version of a more abstract underlying system, but more and more researchers within various frameworks (e.g., usage-based approaches, exemplar models, stochastic Optimality Theory, sociophonetics) are constructing models that tightly bind phonological competence to language use, rely heavily on quantitative information, and attempt to account for intra-speaker and inter-speaker variation. This renders corpora essential to phonological research and not a mere adjunct to the phonological description of the languages of the world.
Creole languages have a curious status in linguistics, and at the same time they often have very low prestige in the societies in which they are spoken. These two facts may be related, in part because they circle around notions such as “derived from” or “simplified” instead of “original.” Rather than simply taking the notion of “creole” as a given and trying to account for its properties and origin, this essay tries to explore the ways scholars have dealt with creoles. This involves, in particular, trying to see whether we can define “creoles” as a meaningful class of languages. There is a canonical list of languages that most specialists would not hesitate to call creoles, but the boundaries of the list and the criteria for being listed are vague. It also becomes difficult to distinguish sharply between pidgins and creoles, and likewise the boundaries between some languages claimed to be creoles and their lexifiers are rather vague.
Several possible criteria to distinguish creoles will be discussed. Simply defining them as languages of which we know the point of birth may be a necessary, but not sufficient, criterion. Displacement is also an important criterion, necessary but not sufficient. Mixture is often characteristic of creoles, but not crucial, it is argued. Essential in any case is substantial restructuring of some lexifier language, which may take the form of morphosyntactic simplification, but it is dangerous to assume that simplification always has the same outcome. The combination of these criteria—time of genesis, displacement, mixture, restructuring—contributes to the status of a language as creole, but “creole” is far from a unified notion. There turn out to be several types of creoles, and then a whole bunch of creole-like languages, and they differ in the way these criteria are combined with respect to them.
Thus the proposal is made here to stop looking at creoles as a separate class, but take them as special cases of the general phenomenon that the way languages emerge and are used to a considerable extent determines their properties. This calls for a new, socially informed typology of languages, which will involve all kinds of different types of languages, including pidgins and creoles.
Cyclicity in syntax constitutes a property of derivations in which syntactic operations apply bottom-up in the production of ever larger constituents. The formulation of a principle of grammar that guarantees cyclicity depends on whether structure is built top-down with phrase structure rules or bottom-up with a transformation Merge. Considerations of minimal and efficient computation motivate the latter, as well as the formulation of the cyclic principle as a No Tampering Condition on structure-building operations (Section 3.3) without any reference to special cyclic domains in which operations apply (as in the formulation of the Strict Cycle Condition (Section 2) and its predecessors (Section 1)) or any reference to extending a phrase marker (the Extension Condition (Section 3)). Ultimately, the empirical effects of a No Tampering Condition on structure building, which conform to strict cyclicity, follow from the formulation of the Merge operation as strictly binary. This leaves as open questions whether displacement (movement) must involve covert intermediate steps (successive cyclic movement) and whether derivations of the two separate interface representations (Phonetic Form and Logical Form) occur in parallel as a single cycle.
Morphological defectiveness refers to situations where one or more paradigmatic forms of a lexeme are not realized, without plausible syntactic, semantic, or phonological causes. The phenomenon tends to be associated with low-frequency lexemes and loanwords. Typically, defectiveness is gradient, lexeme-specific, and sensitive to the internal structure of paradigms.
The existence of defectiveness is a challenge to acquisition models and morphological theories where there are elsewhere operations to materialize items. For this reason, defectiveness has become a rich field of research in recent years, with distinct approaches that view it as an item-specific idiosyncrasy, as an epiphenomenal result of rule competition, or as a normal morphological alternation within a paradigmatic space.
William F. Hanks
Deictic expressions, like English ‘this, that, here, and there’, occur in all known human languages. They are typically used to individuate objects in the immediate context in which they are uttered, by pointing at them so as to direct attention to them. The object, or demonstratum, is singled out as a focus, and a successful act of deictic reference is one that results in the Speaker (Spr) and Addressee (Adr) attending to the same referential object. Thus,
(1) A: Oh, there’s that guy again (pointing)
    B: Oh yeah, now I see him (fixing gaze on the guy)
(2) A: I’ll have that one over there (pointing to a dessert on a tray)
    B: This? (touching pastry with tongs)
    A: yeah, that looks great
    B: Here ya’ go (handing pastry to customer)
In an exchange like (1), A’s utterance spotlights the individual guy, directing B’s attention to him, and B’s response (both verbal and ocular) displays that he has recognized him. In (2) A’s utterance individuates one pastry among several, B’s response makes sure he’s attending to the right one, A reconfirms and B completes by presenting the pastry to him. If we compare the two examples, it is clear that the underscored deictics can pick out or present individuals without describing them. In a similar way, “I, you, he/she, we, now, (back) then,” and their analogues are all used to pick out individuals (persons, objects, or time frames), apparently without describing them. As a corollary of this semantic paucity, individual deictics vary extremely widely in the kinds of object they may properly denote: ‘here’ can denote anything from the tip of your nose to planet Earth, and ‘this’ can denote anything from a pastry to an upcoming day (this Tuesday). Under the same circumstance, ‘this’ and ‘that’ can refer appropriately to the same object, depending upon who is speaking, as in (2). How can forms that are so abstract and variable over contexts be so specific and rigid in a given context? On what parameters do deictics and deictic systems in human languages vary, and how do they relate to grammar and semantics more generally?
Dene-Yeniseian is a proposed genealogical link between the widespread North American language family Na-Dene (Athabaskan, Eyak, Tlingit) and Yeniseian in central Siberia, represented today by the critically endangered Ket and several documented extinct relatives. The Dene-Yeniseian hypothesis is an old idea, but since 2006 new evidence supporting it has been published in the form of shared morphological systems and a modest number of lexical cognates showing interlocking sound correspondences. Recent data from human genetics and folklore studies also increasingly indicate the plausibility of a prehistoric (probably Late Pleistocene) connection between populations in northwestern North America and the traditionally Yeniseian-speaking areas of south-central Siberia. At present, however, Dene-Yeniseian cannot be accepted as a proven language family. Acceptance will require that the purported lexical and morphological correspondences between Yeniseian and Na-Dene be expanded and tested by further critical analysis, and that their relationship to Old World families such as Sino-Tibetan and Caucasian, as well as to the isolate Burushaski (all earlier proposed as relatives of Yeniseian, and sometimes also of Na-Dene), become clearer.
Denominal verbs are verbs formed from nouns by means of various word-formation processes such as derivation, conversion, or less common mechanisms like reduplication, change of pitch, or root and pattern. Because their well-formedness is determined by morphosyntactic, phonological, and semantic constraints, they have been analyzed from a variety of lexicalist and non-lexicalist perspectives, including Optimality Theory, Lexical Semantics, Cognitive Grammar, Onomasiology, and Neo-Construction Grammar. Independently of their structural shape, denominal verbs have in common that they denote events in which the referents of their base nouns (e.g., computer in the case of computerize) participate in a non-arbitrary way. While traditional labels like ‘ornative’, ‘privative’, ‘locative’, ‘instrumental’ and the like allow for a preliminary classification of denominal verbs, a more formal description has to account for at least three basic aspects, namely (1) competition among functionally similar word-formation patterns, (2) the polysemy of affixes, which precludes a neat one-to-one relation between derivatives displaying a particular affix and a particular semantic class, and (3) the relevance of generic knowledge and contextual information for the interpretation of (innovative) denominal verbs.
Željko Bošković and Troy Messick
Economy considerations have always played an important role in the generative theory of grammar. They are particularly prominent in the most recent instantiation of this approach, the Minimalist Program, which explores the possibility that Universal Grammar is an optimal way of satisfying requirements imposed on the language faculty by the external systems that interface with it, the language faculty itself being characterized by optimal, computationally efficient design. In this respect, the operations of the computational system that produce linguistic expressions must be optimal in that they must satisfy general considerations of simplicity and efficient design. Simply put, the guiding principles here are (a) do something only if you need to, and (b) if you do need to, do it in the most economical/efficient way. These considerations ban superfluous steps in derivations and superfluous symbols in representations. Under economy guidelines, movement takes place only when there is a need for it (with both syntactic and semantic considerations playing a role here), and when it does take place, it takes place in the most economical way: it is as short as possible and carries as little material as possible. Furthermore, economy is evaluated locally, on the basis of immediately available structure. The locality of syntactic dependencies is also enforced by minimal search and by limiting the number of syntactic objects and the amount of structure accessible in the derivation. This is achieved by transferring parts of syntactic structure to the interfaces during the derivation, the transferred parts not being accessible for further syntactic operations.
Derivational morphology is a type of word formation that creates new lexemes, either by changing syntactic category or by adding substantial new meaning (or both) to a free or bound base. Derivation may be contrasted with inflection on the one hand and with compounding on the other. The distinctions between derivation and inflection and between derivation and compounding, however, are not always clear-cut. New words may be derived by a variety of formal means, including affixation, reduplication, internal modification of various sorts, subtraction, and conversion. Affixation is best attested cross-linguistically, especially prefixation and suffixation. Reduplication is also widely found, while internal changes such as ablaut and root-and-pattern derivation are less common. Derived words may fit into a number of semantic categories. For nouns, event and result, personal and participant, and collective and abstract nouns are frequent. For verbs, causative and applicative categories are well attested, as are relational and qualitative derivations for adjectives. Languages frequently also have ways of deriving negatives, relational words, and evaluatives. Most languages have derivation of some sort, although there are languages that rely more heavily on compounding than on derivation to build their lexical stock. A number of topics have dominated the theoretical literature on derivation, including productivity (the extent to which new words can be created with a given affix or morphological process), the principles that determine the ordering of affixes, and the place of derivational morphology with respect to other components of the grammar. The study of derivation has also been important in a number of psycholinguistic debates concerning the perception and production of language.