Article

Meanings of Constructions  

Laura A. Michaelis

Meanings are assembled in various ways in a construction-based grammar, and this array can be represented as a continuum of idiomaticity, a gradient of lexical fixity. Constructional meanings are the meanings to be discovered at every point along the idiomaticity continuum. At the leftmost, or ‘fixed,’ extreme of this continuum are frozen idioms, like the salt of the earth and in the know. The set of frozen idioms includes those with idiosyncratic syntactic properties, like the fixed expression by and large (an exceptional pattern of coordination in which a preposition and adjective are conjoined). Other frozen idioms, like the unexceptionable modified noun red herring, feature syntax found elsewhere. At the rightmost, or ‘open,’ end of this continuum are fully productive patterns, including the rule that licenses the string Kim blinked, known as the Subject-Predicate construction. Between these two poles are (a) lexically fixed idiomatic expressions, verb-headed and otherwise, with regular inflection, such as chew/chews/chewed the fat; (b) flexible expressions with invariant lexical fillers, including phrasal idioms like spill the beans and the Correlative Conditional, such as the more, the merrier; and (c) specialized syntactic patterns without lexical fillers, like the Conjunctive Conditional (e.g., One more remark like that and you’re out of here). Construction Grammar represents this range of expressions in a uniform way: whether phrasal or lexical, all are modeled as feature structures that specify phonological and morphological structure, meaning, use conditions, and relevant syntactic information (including syntactic category and combinatoric potential).
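
As an illustration of this uniform representation, the following is a minimal sketch of how a construction, whether frozen idiom or productive pattern, might be modeled as a feature structure. The attribute names (phon, syn, meaning, use, fixed) are illustrative assumptions, not the feature geometry of any particular constructional framework.

```python
from dataclasses import dataclass

@dataclass
class Construction:
    """A toy feature structure: one uniform type for idioms and rules alike."""
    phon: str             # phonological/orthographic form, or a schematic pattern
    syn: dict             # syntactic category and combinatoric potential
    meaning: str          # semantic content
    use: str = ""         # conditions of use (register, pragmatics)
    fixed: bool = False   # True for frozen idioms at the 'fixed' pole

# A frozen idiom with idiosyncratic syntax (no open slots):
by_and_large = Construction(
    phon="by and large",
    syn={"cat": "Adv", "combinatorics": None},
    meaning="on the whole",
    fixed=True,
)

# A fully productive pattern at the 'open' pole:
subject_predicate = Construction(
    phon="<NP> <VP>",
    syn={"cat": "S", "combinatorics": "NP + finite VP"},
    meaning="predication of the VP denotation over the NP referent",
)
```

Expressions between the two poles would differ only in how much of the phon value is lexically fixed, which is what makes the single representational format attractive.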

Article

Natural Language Ontology  

Friederike Moltmann

Natural language ontology is a branch of both metaphysics and linguistic semantics. Its aim is to uncover the ontological categories, notions, and structures that are implicit in the use of natural language, that is, the ontology that a speaker accepts when using a language. Natural language ontology is part of “descriptive metaphysics,” to use Strawson’s term, or “naive metaphysics,” to use Fine’s term, that is, the metaphysics of appearances as opposed to foundational metaphysics, whose interest is in what there really is. What sorts of entities natural language involves is closely linked to compositional semantics, namely to what the contribution of occurrences of expressions in a sentence is taken to be. Most importantly, entities play a role as semantic values of referential terms, but also as implicit arguments of predicates and as parameters of evaluation. Natural language appears to involve a particularly rich ontology of abstract, minor, derivative, and merely intentional objects, an ontology many philosophers are not willing to accept. At the same time, a serious investigation of the linguistic facts often reveals that natural language does not in fact involve the sort of ontology that philosophers had assumed it does. Natural language ontology is concerned not only with the categories of entities that natural language commits itself to, but also with various metaphysical notions, for example the part-whole relation, causation, material constitution, notions of existence, plurality and unity, and the mass-count distinction. An important question for natural language ontology is what linguistic data it should take into account. Looking at the sorts of data that researchers who practice natural language ontology have in fact taken into account makes clear that it is only presuppositions, not assertions, that reflect the ontology implicit in natural language. The ontology of language may be distinctive in that it may in part be driven specifically by language or by its use in discourse. Examples are pleonastic entities, discourse referents conceived of as entities of a sort, and an information-based notion of part structure involved in the semantics of plurals and mass nouns. Finally, there is the question of the universality of the ontology of natural language. Arguably, the same sort of reasoning that has been applied to argue for the universality of (generative) syntax should apply, in a suitable sense, to the ontology of natural language.

Article

Number in Language  

Paolo Acquaviva

Number is the category through which languages express information about the individuality, numerosity, and part structure of what we speak about. As a linguistic category it has a morphological, a morphosyntactic, and a semantic dimension, which are variously interrelated across language systems. Number marking can apply to a more or less restricted part of the lexicon of a language; it is most likely to appear on personal pronouns and human/animate nouns, and least likely on inanimate nouns. In the core contrast, number allows languages to refer to ‘many’ through the description of ‘one’; the sets referred to consist of tokens of the same type, but also of similar types, or of elements pragmatically associated with one named individual. In other cases, number opposes a reading of ‘one’ to a reading of ‘not one,’ which includes masses; when the ‘one’ reading is morphologically derived from the ‘not one,’ it is called a singulative. It is rare for a language to have no linguistic number at all, since a ‘one–many’ opposition is typically implied at least in pronouns, where the category of person singles out the speaker as ‘one.’ Beyond pronouns, number is typically a property of nouns and/or determiners, although it can appear on other word classes by agreement. Verbs can also express part-structural properties of events, but this ‘verbal number’ is not isomorphic to nominal number marking. Many languages allow a variable proportion of their nominals to appear in a ‘general’ form, which expresses no number information. The main values of number-marked elements are singular and plural; dual and a much rarer trial also exist. Many languages also distinguish forms interpreted as paucals or as greater plurals, for small and usually cohesive groups and for generically large ones, respectively. A broad range of exponence patterns can express these contrasts, depending on the morphological profile of a language, from word inflections to freestanding or clitic forms; certain choices of classifiers also express readings that can be described as ‘plural,’ at least in certain interpretations. Classifiers can co-occur with other plurality markers, but not when these are obligatory as expressions of an inflectional paradigm, although this is debated, partly because the notion of classifier itself subsumes distinct phenomena. Many languages, especially those with classifiers, encode number not as an inflectional category but through word-formation operations that express readings associated with plurality, including large size. Current research on number covers all of its morphological, morphosyntactic, and semantic dimensions, and in particular their interrelations, as part of the study of natural language typology and of the formal analysis of nominal phrases. The grammatical and semantic functions of number and plurality are particularly prominent in formal semantics and in syntactic theory.

Article

Scope Marking at the Syntax-Semantics Interface  

Veneeta Dayal and Deepak Alok

Natural language allows questioning into embedded clauses. One strategy for doing so involves structures like the following: [CP-1 wh-i [TP DP V [CP-2 … t-i …]]], where a wh-phrase that thematically belongs to the embedded clause appears in the matrix scope position. A possible answer to such a question must specify values for the fronted wh-phrase. This is the extraction strategy seen in languages like English. An alternative strategy involves a structure in which there is a distinct wh-phrase in the matrix clause. It is manifested in two types of structures. One is a close analog of extraction, except for the extra wh-phrase: [CP-1 wh-i [TP DP V [CP-2 wh-j [TP … t-j …]]]]. The other simply juxtaposes two questions, rather than syntactically subordinating the second one: [CP-3 [CP-1 wh-i [TP …]] [CP-2 wh-j [TP …]]]. In both versions of the second strategy, the wh-phrase in CP-1 is invariant, typically corresponding to the wh-phrase used to question propositional arguments. There is no restriction on the type or number of wh-phrases in CP-2. Possible answers must specify values for all the wh-phrases in CP-2. This strategy is variously known as scope marking, partial wh-movement, or expletive wh questions. Both strategies can occur in the same language. German, for example, instantiates all three possibilities: extraction as well as both subordinated and sequential scope marking. The scope marking strategy is also manifested in wh-in-situ languages. Scope marking has been the subject of 30 years of research, and much is now known about its syntactic and semantic properties. Its pragmatic properties, however, are relatively under-studied. The acquisition of scope marking, in relation to extraction, is another area of ongoing research. One of the reasons scope marking has intrigued linguists is that it seems to defy central tenets about the nature of wh scope taking. For example, it presents an apparent mismatch between the number of wh expressions in the question and the number of expressions whose values are specified in the answer. It poses a challenge for our understanding of how syntactic structure feeds semantic interpretation and how alternative strategies with similar functions relate to each other.

Article

Semantic Change  

Elizabeth Closs Traugott

Traditional approaches to semantic change typically focus on outcomes of meaning change and list types of change such as metaphoric and metonymic extension, broadening and narrowing, and the development of positive and negative meanings. Examples are usually considered out of context, and are lexical members of nominal and adjectival word classes. However, language is a communicative activity that is highly dependent on context, whether that of the ongoing discourse or of social and ideological changes. Much recent work on semantic change has focused, not on results of change, but on pragmatic enabling factors for change in the flow of speech. Attention has been paid to the contributions of cognitive processes, such as analogical thinking, production of cues as to how a message is to be interpreted, and perception or interpretation of meaning, especially in grammaticalization. Mechanisms of change such as metaphorization, metonymization, and subjectification have been among topics of special interest and debate. The work has been enabled by the fine-grained approach to contextual data that electronic corpora allow.

Article

Textual Inference  

Annie Zaenen

Hearers and readers make inferences on the basis of what they hear or read. These inferences are partly determined by the linguistic form that the writer or speaker chooses to give to her utterance. The inferences can be about the state of the world that the speaker or writer wants the hearer or reader to take to be pertinent, or they can be about the attitude of the speaker or writer vis-à-vis this state of affairs. The attention here goes to inferences of the first type. Research in semantics and pragmatics has isolated a number of linguistic phenomena that make specific contributions to the process of inference. Broadly, one can distinguish entailments of asserted material, presuppositions (e.g., those triggered by factive constructions), and invited inferences (especially scalar implicatures). While we make these inferences all the time, they have been studied only piecemeal in theoretical linguistics. When attempts are made to build natural language understanding systems, the need for a more systematic and comprehensive approach to the problem becomes apparent. Some of the approaches developed in Natural Language Processing are based on linguistic insights, whereas others use methods that do not require (full) semantic analysis. In this article, I give an overview of the main linguistic issues and of a variety of computational approaches, especially those stimulated by the Recognizing Textual Entailment (RTE) challenges first proposed in 2004.
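
To make one of these inference types concrete, here is a toy, rule-based sketch of the presupposition triggered by factive complement-taking verbs, in contrast with non-factives. The verb lists and the naive pattern matching are illustrative assumptions; real RTE systems rely on far richer syntactic and semantic analysis.

```python
import re

FACTIVE_VERBS = {"knows", "regrets", "realizes"}      # presuppose their complement
NON_FACTIVE_VERBS = {"believes", "claims", "thinks"}  # carry no such commitment

def presupposed_complement(sentence: str) -> str | None:
    """Return the complement clause if the matrix verb presupposes it, else None."""
    match = re.search(r"\b(\w+) that (.+)$", sentence)
    if match and match.group(1) in FACTIVE_VERBS:
        return match.group(2).rstrip(".")
    return None

print(presupposed_complement("Kim knows that the meeting was cancelled."))
# -> "the meeting was cancelled"  (hearer may infer this holds)
print(presupposed_complement("Kim believes that the meeting was cancelled."))
# -> None  (no such inference is licensed)
```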

Article

Theme  

Eva Hajičová

In the linguistic literature, the term theme has several interpretations, one of which relates to discourse analysis and two others to sentence structure. In a more general (or global) sense, one may speak about the theme or topic (or topics) of a text (or discourse), that is, analyze relations going beyond the sentence boundary and try to identify some characteristic subject(s) for the text (discourse) as a whole. This analysis belongs mostly to the domain of information retrieval and only partially takes linguistically based considerations into account. The main linguistically based usage of the term theme concerns relations within the sentence. Theme is understood to be one of the (syntactico-)semantic relations and is used as the label of one of the arguments of the verb; the whole network of these relations is called thematic relations or roles (or, in the terminology of Chomskyan generative theory, theta roles and theta grids). Alternatively, from the point of view of the communicative function of language as reflected in the information structure of the sentence, the theme (or topic) of a sentence is distinguished from the rest of it (the rheme, or focus, as the case may be), and attention is paid to the semantic consequences of the dichotomy (especially in relation to presuppositions and negation) and to its realization (morphological, syntactic, prosodic) in the surface shape of the sentence. In some approaches to morphosyntactic analysis, the term theme is also used to refer to the part of the word to which inflections are added, typically composed of the root and an added vowel.

Article

Connectionism in Linguistic Theory  

Xiaowei Zhao

Connectionism is an important theoretical framework for the study of human cognition and behavior. Also known as Parallel Distributed Processing (PDP) or Artificial Neural Networks (ANN), connectionism holds that the learning, representation, and processing of information in the mind are parallel, distributed, and interactive in nature. It argues that human cognition emerges as the outcome of large networks of interactive processing units operating simultaneously. Inspired by findings from neuroscience and artificial intelligence, connectionism is a powerful computational tool, and it has had a profound impact on many areas of research, including linguistics. Since the beginnings of connectionism, many connectionist models have been developed to account for a wide range of important linguistic phenomena observed in monolingual research, such as speech perception, speech production, semantic representation, and early lexical development in children. Recently, the application of connectionism to bilingual research has also gathered momentum. Connectionist models are often precise in the specification of modeling parameters and flexible in the manipulation of relevant variables; they can therefore provide significant advantages in testing the mechanisms underlying language processes.
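
As a generic illustration of these ideas (not a reconstruction of any specific model from the linguistic literature), the following sketch trains a tiny two-layer network of simple units on the XOR pattern, showing distributed representation and error-driven learning in miniature.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # targets (XOR)

W1 = rng.normal(size=(2, 4))  # input -> hidden weights
W2 = rng.normal(size=(4, 1))  # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):                      # simple error-driven learning
    h = sigmoid(X @ W1)                      # hidden activations, computed in parallel
    out = sigmoid(h @ W2)                    # output activation
    grad_out = (out - y) * out * (1 - out)   # error signal at the output
    grad_h = (grad_out @ W2.T) * h * (1 - h) # error propagated back to hidden units
    W2 -= 0.5 * h.T @ grad_out               # adjust connection strengths
    W1 -= 0.5 * X.T @ grad_h

print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))  # typically close to [0, 1, 1, 0]
```

The point of the sketch is that the network's "knowledge" of XOR resides nowhere in particular: it is distributed over the connection weights and emerges from the joint activity of all the units.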

Article

Lexical Semantics  

Dirk Geeraerts

Lexical semantics is the study of word meaning. Descriptively speaking, the main topics studied within lexical semantics involve either the internal semantic structure of words or the semantic relations that occur within the vocabulary. Within the first set, major phenomena include polysemy (in contrast with vagueness), metonymy, metaphor, and prototypicality. Within the second set, dominant topics include lexical fields, lexical relations, conceptual metaphor and metonymy, and frames. Theoretically speaking, the main approaches that have succeeded each other in the history of lexical semantics are prestructuralist historical semantics, structuralist semantics, and cognitive semantics. These frameworks differ as to whether they take a system-oriented or a usage-oriented approach to word-meaning research but, at the same time, in the historical development of the discipline, each has contributed significantly to the descriptive and conceptual apparatus of lexical semantics.

Article

Generative Grammar  

Knut Tarald Taraldsen

This article presents different types of generative grammar that can be used as models of natural languages, focusing on a small subset of all the systems that have been devised. The central idea behind generative grammar may be rendered in the words of Richard Montague: “I reject the contention that an important theoretical difference exists between formal and natural languages” (“Universal Grammar,” Theoria, 36 [1970], 373–398).
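
To make the central idea concrete, here is a minimal sketch of one familiar type of generative device: a toy context-free grammar, stated as rewrite rules, together with a procedure that generates sentences from it. The rules and vocabulary are illustrative assumptions, not a grammar discussed in the article.

```python
import random

# Rewrite rules: each nonterminal maps to its possible expansions.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["Kim"], ["Lee"], ["the", "N"]],
    "N":  [["cat"], ["linguist"]],
    "VP": [["blinked"], ["saw", "NP"], ["thought", "that", "S"]],
}

def generate(symbol: str = "S") -> list[str]:
    """Rewrite a symbol until only terminal words remain."""
    if symbol not in GRAMMAR:          # terminal: a word of the language
        return [symbol]
    expansion = random.choice(GRAMMAR[symbol])
    return [word for part in expansion for word in generate(part)]

random.seed(3)
for _ in range(3):
    print(" ".join(generate()))
# e.g. "the linguist thought that Kim blinked"
```

Because the rule for VP can reintroduce S ("thought that S"), the finite rule set licenses an unbounded set of strings, which is exactly the property that makes such formal systems plausible models of natural languages.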