
Article

The Compositional Semantics of Modification  

Sebastian Bücking

Modification is a combinatorial semantic operation between a modifier and a modifiee. Take, for example, vegetarian soup: the attributive adjective vegetarian modifies the nominal modifiee soup and thus constrains the range of potential referents of the complex expression to soups that are vegetarian. Similarly, in Ben is preparing a soup in the camper, the adverbial in the camper modifies the preparation by locating it. Notably, modifiers can have fairly drastic effects; in fake stove, the attribute fake has the effect that the complex expression singles out objects that seem to be stoves, but are not. Intuitively, modifiers contribute additional information that is not explicitly called for by the target the modifier relates to. Speaking in terms of logic, this roughly says that modification is an endotypical operation; that is, it does not change the arity, or logical type, of the modified target constituent. Speaking in terms of syntax, this predicts that modifiers are typically adjuncts and thus do not change the syntactic distribution of their respective target; therefore, modifiers can be easily iterated (see, for instance, spicy vegetarian soup or Ben prepared a soup in the camper yesterday). This initial characterization sets modification apart from other combinatorial operations such as argument satisfaction and quantification: combining a soup with prepare satisfies an argument slot of the verbal head and thus reduces its arity (see, for instance, *prepare a soup a quiche). Quantification, as, for example, in the combination of the quantifier every with the noun soup, maps a nominal property onto a quantifying expression with a different distribution (see, for instance, *a every soup). Their comparatively loose connection to their hosts renders modifiers a flexible, though certainly not random, means of combinatorial meaning constitution. The foundational question is how to work their being endotypical into a full-fledged compositional analysis. On the one hand, modifiers can be considered endotypical functors by virtue of their lexical endowment; for instance, vegetarian would be born a higher-order function from predicates to predicates. On the other hand, modification can be considered a rule-based operation; for instance, vegetarian would denote a simple predicate from entities to truth-values that receives its modifying endotypical function only by virtue of a separate modification rule. In order to assess this and related controversies empirically, research on modification pays particular attention to interface questions such as the following: how do structural conditions and the modifying function conspire in establishing complex interpretations? What roles do ontological information and fine-grained conceptual knowledge play in the course of concept combination?
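
The contrast between these two analyses can be made concrete in standard type-driven notation. The following sketch follows common textbook practice for intersective modification (types and a Predicate Modification rule in the style of Heim and Kratzer), not the article's own formalization:

```latex
% Option A: the modifier is lexically an endotypical functor,
% a higher-order function from predicates to predicates.
\[
[\![\text{vegetarian}]\!] = \lambda P_{\langle e,t\rangle}\,\lambda x_{e}.\,
  P(x)\wedge \mathrm{vegetarian}(x)
  \qquad \text{type } \langle\langle e,t\rangle,\langle e,t\rangle\rangle
\]

% Option B: the modifier is a simple predicate; a separate composition
% rule supplies the endotypical modifying function.
\[
[\![\text{vegetarian}]\!] = \lambda x_{e}.\,\mathrm{vegetarian}(x)
  \qquad \text{type } \langle e,t\rangle
\]
\[
\text{Predicate Modification: } [\![\alpha\,\beta]\!] =
  \lambda x_{e}.\,[\![\alpha]\!](x)\wedge[\![\beta]\!](x)
  \quad \text{for } [\![\alpha]\!],[\![\beta]\!] \text{ of type } \langle e,t\rangle
\]

% Either route yields the same intersective result for vegetarian soup:
% \lambda x . soup(x) \wedge vegetarian(x).
```

Note that this intersective sketch deliberately fits vegetarian rather than fake; the failure of any such sketch for non-intersective modifiers is exactly what motivates higher-typed analyses for them.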

Article

Compound and Complex Predicates in Japanese  

Taro Kageyama

Compound and complex predicates—predicates that consist of two or more lexical items and function as the predicate of a single sentence—present an important class of linguistic objects that pertain to an enormously wide range of issues in the interactions of morphology, phonology, syntax, and semantics. Japanese makes extensive use of compounding to expand a single verb into a complex one. These compounding processes range over multiple modules of the grammatical system, thus straddling the borders between morphology, syntax, phonology, and semantics. In terms of degree of phonological integration, two types of compound predicates can be distinguished. In the first type, called tight compound predicates, two elements from the native lexical stratum are tightly fused and inflect as a whole for tense. In this group, Verb-Verb compound verbs such as arai-nagasu [wash-let.flow] ‘to wash away’ and hare-agaru [sky.be.clear-go.up] ‘for the sky to clear up entirely’ are preponderant in numbers and productivity over Noun-Verb compound verbs such as tema-doru [time-take] ‘to take a lot of time (to finish).’ The second type, called loose compound predicates, takes the form of “Noun + Predicate (Verbal Noun [VN] or Adjectival Noun [AN]),” as in post-syntactic compounds like [sinsya : koonyuu] no okyakusama ([new.car : purchase] GEN customers) ‘customer(s) who purchase(d) a new car,’ where the symbol “:” stands for a short phonological break. Remarkably, loose compounding allows combinations of a transitive VN with its agent subject (external argument), as in [Supirubaagu : seisaku] no eiga ([Spielberg : produce] GEN film) ‘a film/films that Spielberg produces/produced’—a pattern that is illegitimate in tight compounds and has in fact been considered universally impossible in the world’s languages in verbal compounding and noun incorporation. Beyond this huge variety of tight and loose compound predicates, Japanese has a further class of syntactic constructions that as a whole function as complex predicates. Typical examples are the light verb construction, where a clause headed by a VN is followed by the light verb suru ‘do,’ as in Tomodati wa sinsya o koonyuu (sae) sita [friend TOP new.car ACC purchase (even) did] ‘My friend (even) bought a new car,’ and the human physical attribute construction, as in Sensei wa aoi me o site-iru [teacher TOP blue eye ACC do-ing] ‘My teacher has blue eyes.’ In these constructions, the nominal phrases immediately preceding the verb suru are semantically characterized as indefinite and non-referential and reject syntactic operations such as movement and deletion. The semantic indefiniteness and syntactic immobility of the NPs involved are also observed with a construction composed of a human subject and the verb aru ‘be,’ as in Gakkai ni wa oozei no sankasya ga atta ‘There was a large number of participants at the conference.’ The constellation of such “word-like” properties shared by these compound and complex predicates poses challenging problems for current theories of morphology-syntax-semantics interactions with regard to such topics as lexical integrity, morphological compounding, syntactic incorporation, semantic incorporation, pseudo-incorporation, and indefinite/non-referential NPs.

Article

Compounding and Linking Elements in Germanic  

Barbara Schlücker

Compounding is a frequent and productive word-formation pattern in all Germanic languages. It is a pattern that links an overtly simple grammatical form to a rich semantic-conceptual structure. Overall, there are rather few restrictions on the formation of compounds, and units of various word classes can serve as constituents in compounds. Both determinative and coordinative compounds exist across Germanic. Nominal compounding is the largest and most productive class in all Germanic languages, in particular noun–noun compounding, followed by adjectival compounding. Verbal compounding, on the other hand, is much more restricted, in particular in West Germanic, whereas it is more common in North Germanic. Linking elements are a typical but not necessary property of Germanic compounds. They occur mainly in noun–noun compounds. The inventory and use of linking elements differ between the West Germanic languages, on the one hand, and the North Germanic languages, on the other. Regarding the distribution and use of linking-s, however, there are many similarities between the Germanic languages. Notwithstanding these similarities, there are also many differences between the various Germanic compound patterns. These global and specific characteristics are the central subject of the article, taking into account data from German, Luxembourgish, Dutch, West Frisian, English, Afrikaans, and Yiddish (West Germanic) and from Danish, Swedish, Norwegian, Icelandic, and Faroese (North Germanic).

Article

Compounding: From Latin to Romance  

Franz Rainer

Compounding in the narrow sense of the term, that is, leaving aside so-called syntagmatic compounds like pomme de terre ‘potato’, is a process of word formation that creates new lexemes by combining more than one lexeme according to principles different from those of syntax. New lexemes created according to ordinary syntactic principles are called syntagmatic compounds by some, or, in the Romance tradition since Darmesteter, juxtapositions. In a diachronically oriented article such as this one, it is convenient to take into consideration both types of compounding, since most patterns of compounding in Romance have syntactic origins. This syntactic origin is responsible for the fact that the boundaries between compounding and syntax continue to be fuzzy in modern Romance varieties, the precise delimitation being very much theory-dependent (for a discussion based on Portuguese, cf. Rio-Torto & Ribeiro, 2009). Whether some Latin patterns of compounding might, after all, have come down to the Romance languages through the popular channel of transmission continues to be controversial. There can be no doubt, however, that most of them were doomed.

Article

Compounding in Morphology  

Pius ten Hacken

Compounding is a word formation process based on the combination of lexical elements (words or stems). In the theoretical literature, compounding is discussed controversially, and the disagreement also concerns basic issues. In the study of compounding, the questions guiding research can be grouped into four main areas, labeled here as delimitation, classification, formation, and interpretation. Depending on the perspective taken in the research, some of these may be highlighted or backgrounded. In the delimitation of compounding, one question is how important it is to be able to determine for each expression unambiguously whether it is a compound or not. Compounding borders on syntax and on affixation. In some theoretical frameworks, it is not a problem to have more typical and less typical instances, without a precise boundary between them. However, if, for instance, word formation and syntax are strictly separated and compounding is in word formation, it is crucial to draw this borderline precisely. Another question is which types of criteria should be used to distinguish compounding from other phenomena. Criteria based on form, on syntactic properties, and on meaning have been used. In all cases, it is also controversial whether such criteria should be applied crosslinguistically. In the classification of compounds, the question of how important the distinction between the classes is for the theory in which they are used arises in much the same way as the corresponding question for delimitation. A common classification uses headedness as a basis. Other criteria are based on the forms of the elements that are combined (e.g., stem vs. word) or on the semantic relationship between the components. Again, whether these criteria can and should be applied crosslinguistically is controversial. The issue of the formation rules for compounds is particularly prominent in frameworks that emphasize form-based properties of compounding. Rewrite rules for compounding have been proposed, as have generalizations over the selection of the input form (stem or word) and of linking elements, and rules for stress assignment. Compounds are generally thought of as consisting of two components, although these components may consist of more than one element themselves. For some types of compounds with three or more components, for example, copulative compounds, a nonbinary structure has been proposed. The question of interpretation can be approached from two opposite perspectives. In a semasiological perspective, the meaning of a compound emerges from the interpretation of a given form. In an onomasiological perspective, the meaning precedes the formation in the sense that a form is selected to name a particular concept. The central question in the interpretation of compounds is how to determine the relationship between the two components. The range of possible interpretations can be constrained by the rules of compounding, by the semantics of the components, and by the context of use. A much-debated question concerns the relative importance of these factors.

Article

Computational Approaches to Morphology  

Emmanuel Keuleers

Computational psycholinguistics has a long history of investigation and modeling of morphological phenomena. Several computational models have been developed to deal with the processing and production of morphologically complex forms and with the relation between linguistic morphology and psychological word representations. Historically, most of this work has focused on modeling the production of inflected word forms, leading to the development of models based on connectionist principles and other data-driven models such as Memory-Based Language Processing (MBLP), Analogical Modeling of Language (AM), and Minimal Generalization Learning (MGL). In the context of inflectional morphology, these computational approaches have played an important role in the debate between single and dual mechanism theories of cognition. Taking a different angle, computational models based on distributional semantics have been proposed to account for several phenomena in morphological processing and composition. Finally, although several computational models of reading have been developed in psycholinguistics, none of them have satisfactorily addressed the recognition and reading aloud of morphologically complex forms.
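
The memory-based idea can be sketched in a few lines: store inflected exemplars whole and produce a novel form by analogy with the most similar stored item. The toy below is only an illustration of that principle, not the MBLP (TiMBL) implementation or any published model; the six-verb memory and the shared-suffix similarity measure are invented:

```python
# Toy memory-based inflection: produce a past-tense form by reusing the
# present -> past change of the most similar stored exemplar.

MEMORY = [  # (present, past) exemplars; a real model stores thousands
    ("walk", "walked"), ("talk", "talked"), ("jump", "jumped"),
    ("sing", "sang"), ("ring", "rang"), ("sting", "stung"),
]

def shared_suffix_len(a: str, b: str) -> int:
    """Similarity: length of the longest shared word-final substring."""
    n = 0
    while n < min(len(a), len(b)) and a[-1 - n] == b[-1 - n]:
        n += 1
    return n

def common_prefix_len(a: str, b: str) -> int:
    n = 0
    while n < min(len(a), len(b)) and a[n] == b[n]:
        n += 1
    return n

def inflect(novel: str) -> str:
    """Apply the nearest exemplar's change to a novel verb."""
    present, past = max(MEMORY, key=lambda ex: shared_suffix_len(novel, ex[0]))
    p = common_prefix_len(present, past)      # where the change begins
    old_tail, new_tail = present[p:], past[p:]
    if old_tail and novel.endswith(old_tail):
        return novel[: len(novel) - len(old_tail)] + new_tail
    return novel + new_tail                   # plain suffixation fallback

print(inflect("spling"))  # -> 'splang', by analogy with sing/sang
print(inflect("blick"))   # -> 'blicked', by analogy with walk/walked
```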

Article

Computational Models of Morphological Learning  

Jordan Kodner

A computational learner needs three things: data to learn from, a class of representations to acquire, and a way to get from one to the other. Language acquisition is a very particular learning setting that can be defined in terms of the input (the child’s early linguistic experience) and the output (a grammar capable of generating a language very similar to the input). The input is infamously impoverished. As it relates to morphology, the vast majority of potential forms are never attested in the input, and those that are attested follow an extremely skewed frequency distribution. Learners nevertheless manage to acquire most details of their native morphologies after only a few years of input. That said, acquisition is neither instantaneous nor error-free. Children do make mistakes, and they do so in predictable ways which provide insights into their grammars and learning processes. The most elucidating computational model of morphology learning from the perspective of a linguist is one that learns morphology like a child does, that is, on child-like input and along a child-like developmental path. This article focuses on clarifying those aspects of morphology acquisition that should go into such an elucidating computational model. Section 1 describes the input with a focus on child-directed speech corpora and input sparsity. Section 2 discusses representations, with a focus on productivity, developmental paths, and formal learnability. Section 3 surveys the range of learning tasks that guide research in computational linguistics and NLP with a special focus on how they relate to the acquisition setting. The conclusion in Section 4 presents a summary of morphology acquisition as a learning problem with Table 4 highlighting the key takeaways of this article.
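
The sparsity point lends itself to a simple simulation: sample tokens from Zipfian (1/rank) frequency distributions over lexemes and paradigm cells and count how much of the paradigm space is ever attested. All sizes below are invented for illustration:

```python
# Sketch of input sparsity: a hypothetical language with 1,000 lexemes
# x 20 paradigm cells, sampled with Zipfian frequencies for both.
import random

random.seed(0)
N_LEXEMES, N_CELLS, N_TOKENS = 1_000, 20, 100_000

lex_weights = [1 / r for r in range(1, N_LEXEMES + 1)]
cell_weights = [1 / r for r in range(1, N_CELLS + 1)]

lexemes = random.choices(range(N_LEXEMES), weights=lex_weights, k=N_TOKENS)
cells = random.choices(range(N_CELLS), weights=cell_weights, k=N_TOKENS)
attested = set(zip(lexemes, cells))

print(f"attested: {len(attested)} of {N_LEXEMES * N_CELLS} possible forms")
# With these invented settings, roughly half of the 20,000 possible forms
# are never observed (the exact figure varies with the seed); the learner
# must generalize to fill the missing cells.
```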

Article

Computational Phonology  

Jane Chandlee and Jeffrey Heinz

Computational phonology studies the nature of the computations necessary and sufficient for characterizing phonological knowledge. As a field it is informed by the theories of computation and phonology. The computational nature of phonological knowledge is important because at a fundamental level it is about the psychological nature of memory as it pertains to phonological knowledge. Different types of phonological knowledge can be characterized as computational problems, and the solutions to these problems reveal their computational nature. In contrast to syntactic knowledge, there is clear evidence that phonological knowledge is computationally bounded to the so-called regular classes of sets and relations. These classes have multiple mathematical characterizations in terms of logic, automata, and algebra with significant implications for the nature of memory. In fact, there is evidence that phonological knowledge is bounded by particular subregular classes, with more restrictive logical, automata-theoretic, and algebraic characterizations, and thus by weaker models of memory.
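
As a concrete illustration of a restrictive subregular class, the sketch below implements a strictly 2-local (SL-2) phonotactic grammar: well-formedness is decided by scanning adjacent pairs of symbols with a fixed, tiny memory. The banned clusters are invented for the example:

```python
# A strictly 2-local grammar: a finite set of forbidden bigrams over
# word-edge-padded strings. The toy constraint bans *nb and *np.

WORD_EDGE = "#"
BANNED_BIGRAMS = {("n", "b"), ("n", "p")}  # e.g., nasal place agreement

def sl2_wellformed(word: str) -> bool:
    """Accept iff no adjacent pair of segments is a banned bigram."""
    padded = WORD_EDGE + word + WORD_EDGE
    return all((a, b) not in BANNED_BIGRAMS for a, b in zip(padded, padded[1:]))

print(sl2_wellformed("imba"))  # True
print(sl2_wellformed("inba"))  # False: contains the banned bigram *nb
# The decision needs only a two-symbol window, so the memory demand is
# constant -- the computational point behind subregular characterizations.
```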

Article

Computational Semantics  

Katrin Erk

Computational semantics performs automatic meaning analysis of natural language. Research in computational semantics designs meaning representations and develops mechanisms for automatically assigning those representations and reasoning over them. Computational semantics is not a single monolithic task but consists of many subtasks, including word sense disambiguation, multi-word expression analysis, semantic role labeling, the construction of sentence semantic structure, coreference resolution, and the automatic induction of semantic information from data. The development of manually constructed resources has been vastly important in driving the field forward. Examples include WordNet, PropBank, FrameNet, VerbNet, and TimeBank. These resources specify the linguistic structures to be targeted in automatic analysis, and they provide high-quality human-generated data that can be used to train machine learning systems. Supervised machine learning based on manually constructed resources is a widely used technique. A second core strand has been the induction of lexical knowledge from text data. For example, words can be represented through the contexts in which they appear (called distributional vectors or embeddings), such that semantically similar words have similar representations. Or semantic relations between words can be inferred from patterns of words that link them. Wide-coverage semantic analysis always needs more data, both lexical knowledge and world knowledge, and automatic induction at least alleviates the problem. Compositionality is a third core theme: the systematic construction of structural meaning representations of larger expressions from the meaning representations of their parts. The representations typically use logics of varying expressivity, which makes them well suited to performing automatic inferences with theorem provers. Manual specification and automatic acquisition of knowledge are closely intertwined. Manually created resources are automatically extended or merged. The automatic induction of semantic information is guided and constrained by manually specified information, which is much more reliable. And for restricted domains, the construction of logical representations is learned from data. It is at the intersection of manual specification and machine learning that some of the current larger questions of computational semantics are located. For instance, should we build general-purpose semantic representations, or is lexical knowledge simply too domain-specific, and would we be better off learning task-specific representations every time? When performing inference, is it more beneficial to have the solid ground of a human-generated ontology, or is it better to reason directly with text snippets for more fine-grained and gradual inference? Do we obtain a better and deeper semantic analysis as we use better and deeper manually specified linguistic knowledge, or is the future in powerful learning paradigms that learn to carry out an entire task from natural language input and output alone, without pre-specified linguistic knowledge?
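
The distributional idea mentioned above can be sketched minimally: represent each word by counts of its neighboring words and compare words by cosine similarity. The three-sentence "corpus" is invented; real systems use large corpora and dense, learned embeddings:

```python
# Count-based distributional vectors from a +/-2-word context window.
import math
from collections import Counter, defaultdict

corpus = [
    "the cat chased the mouse".split(),
    "the dog chased the cat".split(),
    "the mouse ate the cheese".split(),
]

WINDOW = 2
vectors = defaultdict(Counter)
for sent in corpus:
    for i, w in enumerate(sent):
        for j in range(max(0, i - WINDOW), min(len(sent), i + WINDOW + 1)):
            if j != i:
                vectors[w][sent[j]] += 1  # count each context word

def cosine(u: Counter, v: Counter) -> float:
    dot = sum(u[k] * v[k] for k in u)
    norm = math.sqrt(sum(x * x for x in u.values())) \
         * math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

# Words appearing in similar contexts get similar vectors:
print(cosine(vectors["cat"], vectors["dog"]))     # relatively high
print(cosine(vectors["cat"], vectors["cheese"]))  # lower
```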

Article

Computer-Based Tools for Word and Paradigm Computational Morphology  

Raphael Finkel

The Word and Paradigm approach to morphology associates lexemes with tables of surface forms for different morphosyntactic property sets. Researchers express their realizational theories, which show how to derive these surface forms, using formalisms such as Network Morphology and Paradigm Function Morphology. The tables of surface forms also lend themselves to a study of the implicative theories, which infer the realizations in some cells of the inflectional system from the realizations of other cells. There is an art to building realizational theories. First, the theories should be correct, that is, they should generate the right surface forms. Second, they should be elegant, which is much harder to capture, but includes the desiderata of simplicity and expressiveness. Without software to test a realizational theory, it is easy to sacrifice correctness for elegance. Therefore, software that takes a realizational theory and generates surface forms is an essential part of any theorist’s toolbox. Discovering implicative rules that connect the cells in an inflectional system is often quite difficult. Some rules are immediately apparent, but others can be subtle. Software that automatically analyzes an entire table of surface forms for many lexemes can help automate the discovery process. Researchers can use Web-based computerized tools to test their realizational theories and to discover implicative rules.
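
A realizational theory can be prototyped in a few lines of code, which is exactly what makes software testing of such theories feasible. The sketch below is in the spirit of a paradigm function (mapping a lexeme plus a morphosyntactic property set to a surface form) but uses none of the actual formalisms named above; the simplified Latin first-conjugation fragment is for illustration only:

```python
# Toy realizational analysis: a paradigm function built from a stem,
# a theme vowel, and one rule block of endings.

LEXEMES = {"laudare": "laud", "amare": "am"}   # lexeme -> stem
THEME = "a"                                    # first-conjugation theme vowel

ENDINGS = {  # rule block: (person, number) -> ending
    (1, "sg"): "o", (2, "sg"): "s", (3, "sg"): "t",
    (1, "pl"): "mus", (2, "pl"): "tis", (3, "pl"): "nt",
}

def paradigm_function(lexeme: str, person: int, number: str) -> str:
    stem = LEXEMES[lexeme]
    theme = "" if (person, number) == (1, "sg") else THEME  # laud-o, not *lauda-o
    return stem + theme + ENDINGS[(person, number)]

# Generate full tables so the theory can be checked against known forms:
for lexeme in LEXEMES:
    print(lexeme, [paradigm_function(lexeme, p, n)
                   for n in ("sg", "pl") for p in (1, 2, 3)])
# laudare ['laudo', 'laudas', 'laudat', 'laudamus', 'laudatis', 'laudant']
# amare ['amo', 'amas', 'amat', 'amamus', 'amatis', 'amant']
```

Tables generated this way are also the natural input for implicative-rule discovery: software can search them for cells whose realizations predict the realizations of other cells.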

Article

Conjugation Class  

Isabel Oltra-Massuet

Conjugation classes have been defined as the set of all forms of a verb that spell out all possible morphosyntactic categories of person, number, tense, aspect, mood, and/or other additional categories that the language expresses in verbs. Theme vowels instantiate conjugation classes as purely morphological markers; that is, they determine the verb’s morphophonological surface shape but not its syntactic or semantic properties. They typically split the vocabulary items of the category verb into groups that spell out morphosyntactic and morphosemantic feature specifications with the same inflectional affixes. The bond between verbs and their conjugational marking is idiosyncratic and cannot be established on semantic, syntactic, or phonological grounds, although there have been serious attempts at finding a systematic correlation. The existence of theme vowels and arbitrary conjugation classes has been taken by lexicalist theories as empirical evidence against syntactic approaches to word formation and is used as one of the main arguments for the autonomy of morphology. Conjugation classes further raise questions about the nature of basic morphological notions such as stems or paradigms, serve as a good empirical ground for theories of allomorphy and syncretism, and can be used to test psycholinguistic and neurolinguistic theories of productivity, full decomposition, and storage. Conjugations and their instantiation via theme vowels may also pose a challenge for theories of first language acquisition and for the learning of morphological categories devoid of any semantic meaning or syntactic alignment, a challenge that extends to second language acquisition as well. Thus, analyzing their nature, their representation, and their place in grammar is crucial, as the approach to these units can have profound effects on linguistic theory and the architecture of grammar.

Article

Connectionism in Linguistic Theory  

Xiaowei Zhao

Connectionism is an important theoretical framework for the study of human cognition and behavior. Also known as Parallel Distributed Processing (PDP) or Artificial Neural Networks (ANN), connectionism advocates that learning, representation, and processing of information in the mind are parallel, distributed, and interactive in nature. It argues for the emergence of human cognition as the outcome of large networks of interactive processing units operating simultaneously. Inspired by findings from neuroscience and artificial intelligence, connectionism is a powerful computational tool, and it has had a profound impact on many areas of research, including linguistics. Since the beginning of connectionism, many connectionist models have been developed to account for a wide range of important linguistic phenomena observed in monolingual research, such as speech perception, speech production, semantic representation, and early lexical development in children. Recently, the application of connectionism to bilingual research has also gathered momentum. Connectionist models are often precise in the specification of modeling parameters and flexible in the manipulation of relevant variables to address theoretical questions; they can therefore provide significant advantages in testing mechanisms underlying language processes.
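
At its core, a connectionist model is a network of simple units whose connection weights are adjusted by an error-driven learning rule. The sketch below trains a tiny two-layer network by backpropagation on the XOR mapping; it is a generic toy demonstration of parallel, distributed learning, not a replication of any published language model:

```python
# A minimal feedforward network trained by backpropagation.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1, b1 = rng.normal(0.0, 1.0, (2, 4)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(0.0, 1.0, (4, 1)), np.zeros(1)   # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):
    h = sigmoid(X @ W1 + b1)               # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1.0 - out)  # error signal at the output...
    d_h = (d_out @ W2.T) * h * (1.0 - h)   # ...propagated back to the hidden layer
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.ravel().round(2))  # approaches [0, 1, 1, 0] when training succeeds
# (whether it converges can depend on the random initialization)
```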

Article

Consonant Harmony  

Gunnar Hansson

The term consonant harmony refers to a class of systematic sound patterns, in which consonants interact in some assimilatory way even though they are not adjacent to each other in the word. Such long-distance assimilation can sometimes hold across a significant stretch of intervening vowels and consonants, such as in Samala (Ineseño Chumash) /s-am-net-in-waʃ/ → [ʃamnetiniwaʃ] “they did it to you”, where the alveolar sibilant /s‑/ of the 3.sbj prefix assimilates to the postalveolar sibilant /ʃ/ of the past suffix /‑waʃ/ across several intervening syllables that contain a variety of non-sibilant consonants. While consonant harmony most frequently involves coronal-specific contrasts, as in the Samala case, there are numerous cases of assimilation in other phonological properties, such as laryngeal features, nasality, secondary articulation, and even constriction degree. Not all cases of consonant harmony result in overt alternations, like the [s] ∼ [ʃ] alternation in the Samala 3.sbj prefix. Sometimes the harmony is merely a phonotactic restriction on the shape of morphemes (roots) within the lexicon. Consonant harmony tends to implicate only some group (natural class) of consonants that already share a number of features, and are hence relatively similar, while ignoring less similar consonants. The distance between the potentially interacting consonants can also play a role. For example, in many cases assimilation is limited to relatively short-distance ‘transvocalic’ contexts (…CVC…), though the interpretation of such locality restrictions remains a matter of debate. Consonants that do not directly participate in the harmony (as triggers or undergoers of assimilation) are typically neutral and transparent, allowing the assimilating property to be propagated across them. However, this is not universally true; in recent years several cases have come to light in which certain segments can act as blockers when they intervene between a potential trigger-target pair. The main significance of consonant harmony for linguistic theory lies in its apparently non-local character and the challenges that this poses for theories of phonological representations and processes, as well as for formal models of phonological learning. Along with other types of long-distance dependencies in segmental phonology (e.g., long-distance dissimilation and vowel harmony systems with one or more transparent vowels), sound patterns of consonant harmony have contributed to the development of many theoretical constructs, such as autosegmental (nonlinear) representations, feature geometry, underspecification, feature spreading, strict locality (vs. ‘gapped’ representations), parametrized visibility, agreement constraints, and surface correspondence relations. The formal analysis of long-distance assimilation (and dissimilation) remains a rich and vibrant area of theoretical research. The empirical base for such theoretical inquiry also continues to be expanded. On the one hand, previously undocumented cases (or new, surprising details of known cases) continue to be added to the corpus of attested consonant harmony patterns. On the other hand, artificial phonology learning experiments allow the properties of typologically rare or unattested patterns to be explored in a controlled laboratory setting.
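
The Samala pattern can be rendered schematically: the rightmost sibilant determines the anteriority of every sibilant to its left, however distant, with non-sibilant segments transparent. The sketch below uses a simplified one-character transcription and sets aside the epenthesis seen in the real surface form; it illustrates the pattern rather than analyzing Samala:

```python
# Schematic regressive sibilant harmony of the Samala type.

SIBILANTS = {"s": "anterior", "ʃ": "posterior"}
REWRITE = {"anterior": {"ʃ": "s"}, "posterior": {"s": "ʃ"}}

def sibilant_harmony(word: str) -> str:
    sibs = [c for c in word if c in SIBILANTS]
    if not sibs:
        return word
    mapping = REWRITE[SIBILANTS[sibs[-1]]]  # rightmost sibilant wins
    # non-sibilants are transparent: they pass through unchanged
    return "".join(mapping.get(c, c) for c in word)

print(sibilant_harmony("samnetiniwaʃ"))  # -> 'ʃamnetiniwaʃ'
```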

Article

Construction-Based Research in China  

Xu Yang and Randy J. Lapolla

Research on construction-based grammar in China began in the late 1990s. Since its initial stages of introduction and preliminary exploration, it has entered a stage of productive and innovative development. In the past two decades, Chinese construction grammarians have achieved a number of valuable research results. In terms of theoretical applications, they have described and explained various types of constructions, such as schematic, partly variable, and fully substantive constructions. They have also applied the constructionist approach to the teaching of Chinese as a second language, proposing some new grammar systems or teaching modes such as the construction-chunk approach (构式-语块教学法), the lexicon-construction interaction model (词汇-构式互动体系), and trinitarian grammar (三一语法). In terms of theoretical innovation, Chinese construction grammarians have put forward theories or hypotheses such as the unification of grammar and rhetoric through constructions, the concept of lexical coercion, and interactive construction grammar (互动构式语法). However, some problems have also emerged in the field of construction grammar approaches. These include a narrow understanding of the concept of construction, a limited range of research topics, and a narrow range of disciplinary perspectives and methods. To ensure the long-term development of construction-based research in China, scholars should be encouraged to make the following changes: First, they should adopt a usage-based approach using natural data, and they should keep up with advances in the study of construction networks. Second, they should broaden the scope of construction-based research and integrate it with language typology and historical linguistics. Finally, they should integrate cross-disciplinary and interdisciplinary research findings and methods. In this way, construction-based research in China can continue to flourish and make significant contributions to the study of grammar and language.

Article

Construction Morphology  

Geert Booij

Construction Morphology is a theory of word structure in which the complex words of a language are analyzed as constructions, that is, systematic pairings of form and meaning. These pairings are analyzed within a Tripartite Parallel Architecture conception of grammar. This presupposes a word-based approach to the analysis of morphological structure and a strong dependence on paradigmatic relations between words. The lexicon contains both words and the constructional schemas they are instantiations of. Words and schemas are organized in a hierarchical network, with intermediate layers of subschemas. These schemas have a motivating function with respect to existing complex words and specify how new complex words can be formed. The consequence of this view of morphology is that there is no sharp boundary between lexicon and grammar. In addition, the use of morphological patterns may also depend on specific syntactic constructions (construction-dependent morphology). This theory of lexical relatedness also provides insight into language change such as the use of obsolete case markers as markers of specific constructions, the change of words into affixes, and the debonding of word constituents into independent words. Studies of language acquisition and word processing confirm this view of the lexicon and the nature of lexical knowledge. Construction Morphology is also well equipped for dealing with inflection and the relationships between the cells of inflectional paradigms, because it can express how morphological schemas are related paradigmatically.
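
The notion of a constructional schema admits a toy rendering: a systematic pairing of a form template and a meaning template that both motivates existing complex words and specifies how new ones can be formed. The sketch below simplifies drastically (it is not Booij's notation and it ignores the hierarchical network of subschemas):

```python
# A constructional schema as a form-meaning pairing with a variable slot.
# Toy agentive schema: [[x]V er]N <-> 'one who Vs'.

SCHEMA_ER = {"form": "{v}er", "meaning": "one who {v}s"}

def instantiate(schema: dict, v: str) -> dict:
    """Fill the schema's variable slot with a base verb."""
    return {k: template.format(v=v) for k, template in schema.items()}

print(instantiate(SCHEMA_ER, "paint"))
# {'form': 'painter', 'meaning': 'one who paints'}  (existing word, motivated)
print(instantiate(SCHEMA_ER, "stream"))
# {'form': 'streamer', 'meaning': 'one who streams'}  (newer coinage, licensed)
```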

Article

Contact Between Spanish and Portuguese in South America  

Ana M. Carvalho

Spanish and Portuguese are in contact along the extensive border of Brazil and its neighboring Spanish-speaking countries. Transnational interactions in some border communities allow for ephemeral language accommodations that occur when speakers of both languages communicate during social interactions and business transactions, facilitated by the lack of border control and similarities between the languages. A different situation is found in northern Uruguay, where Spanish and Portuguese are spoken in several border towns, presenting a case of stable and prolonged bilingualism that has allowed for the emergence of language contact phenomena such as lexical borrowings, code-switching, and structural convergence to a variable extent. However, due to urbanization and the presence of monolingual dialects in the surrounding communities, Portuguese and Spanish have not converged structurally in a single mixed code in urban areas and present instead clear continuities with the monolingual counterparts.

Article

The Contact History of English  

Marcelle Cole and Stephen Laker

Contact between Early English and British Celtic, Latin, Norse, and French came about through a myriad of historical, political, and sociocultural developments: invasion, foreign governance, and the spread of Christianity, but also via peaceful coexistence, intermarriage, cultural exchange, and trade. The so-called Anglo-Saxon settlement of Britain brought speakers of an emerging insular West Germanic variety, which became known as Englisc, into contact with British Celtic and, to some extent, Latin speakers. The Northumbrian historian Bede painted a vivid picture of 8th-century multilingual Britain as an island comprising “five nations, the English, Britons, Scots, Picts and Latins, each in its own peculiar dialect, cultivating the sublime study of divine truth.” The Christianization of the Anglo-Saxons led to renewed contact with Latin, the lingua franca of Christendom. The Church became an important conduit for Latin-derived lexis related to learning and ecclesiastical ritual and organization, although it was the cultural appeal of Latin in the early modern period that explains the massive lexical contribution of Latin to English. Later periods of foreign rule and migration following Viking settlement, mainly in the 9th and 10th centuries, and the Norman Conquest of 1066 brought English into contact with Norse and Old French, respectively. Lexical borrowing from these languages involved loans reflecting foreign rule but also basic everyday words. Extensive bilingualism and second-language learning most likely promoted the rapid loss of inflection that English underwent during the medieval period. Opinions vary, however, on whether contact brought about direct structural transfer or merely reinforced internal developments already in progress. Contact left its mark most noticeably on the lexicon of English; the influx of Latin and French loan vocabulary extensively reshaped the lexicon and, with it, the derivational morphology of English and explains the heavy Romance element in present-day English.

Article

Contrastive Specification in Phonology  

Daniel Currie Hall

The fundamental idea underlying the use of distinctive features in phonology is the proposition that the same phonetic properties that distinguish one phoneme from another also play a crucial role in accounting for phonological patterns. Phonological rules and constraints apply to natural classes of segments, expressed in terms of features, and involve mechanisms, such as spreading or agreement, that copy distinctive features from one segment to another. Contrastive specification builds on this by taking seriously the idea that phonological features are distinctive features. Many phonological patterns appear to be sensitive only to properties that crucially distinguish one phoneme from another, ignoring the same properties when they are redundant or predictable. For example, processes of voicing assimilation in many languages apply only to the class of obstruents, where voicing distinguishes phonemic pairs such as /t/ and /d/, and ignore sonorant consonants and vowels, which are predictably voiced. In theories of contrastive specification, features that do not serve to mark phonemic contrasts (such as [+voice] on sonorants) are omitted from underlying representations. Their phonological inertness thus follows straightforwardly from the fact that they are not present in the phonological system at the point at which the pattern applies, though the redundant features may subsequently be filled in either before or during phonetic implementation. In order to implement a theory of contrastive specification, it is necessary to have a means of determining which features are contrastive (and should thus be specified) and which ones are redundant (and should thus be omitted). A traditional and intuitive method involves looking for minimal pairs of phonemes: if [±voice] is the only property that can distinguish /t/ from /d/, then it must be specified on them. This approach, however, often identifies too few contrastive features to distinguish the phonemes of an inventory, particularly when the phonetic space is sparsely populated. For example, in the common three-vowel inventory /i a u/, there is more than one property that could distinguish any two vowels: /i/ differs from /a/ in both place (front versus back or central) and height (high versus low), /a/ from /u/ in both height and rounding, and /u/ from /i/ in both rounding and place. Because pairwise comparison cannot identify any features as contrastive in such cases, much recent work in contrastive specification is instead based on a hierarchical sequencing of features, with specifications assigned by dividing the full inventory into successively smaller subsets. For example, if the inventory /i a u/ is first divided according to height, then /a/ is fully distinguished from the other two vowels by virtue of being low, and the second feature, either place or rounding, is contrastive only on the high vowels. Unlike pairwise comparison, this approach produces specifications that fully distinguish the members of the underlying inventory, while at the same time allowing for the possibility of cross-linguistic variation in the specifications assigned to similar inventories.
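
The hierarchical procedure described above can be stated as a small recursive algorithm: walk down an ordered feature list, record a feature only where it actually divides the current subinventory, and recurse on the resulting subsets. The sketch below uses standard textbook feature values for /i a u/, not any particular analysis from the article:

```python
# Contrastive specification by successive division over a feature hierarchy.

FULL_SPECS = {  # phoneme -> full (partly redundant) feature values
    "i": {"low": "-", "round": "-"},
    "a": {"low": "+", "round": "-"},
    "u": {"low": "-", "round": "+"},
}

def divide(inventory, hierarchy, assigned=None):
    assigned = assigned if assigned is not None else {p: {} for p in inventory}
    if len(inventory) <= 1 or not hierarchy:
        return assigned                      # nothing left to contrast
    feature, rest = hierarchy[0], hierarchy[1:]
    groups = {}
    for p in inventory:
        groups.setdefault(FULL_SPECS[p][feature], []).append(p)
    if len(groups) > 1:                      # feature is contrastive here
        for value, members in groups.items():
            for p in members:
                assigned[p][feature] = value
            divide(members, rest, assigned)
    else:                                    # redundant here: try the next one
        divide(inventory, rest, assigned)
    return assigned

print(divide(["i", "a", "u"], ["low", "round"]))
# {'i': {'low': '-', 'round': '-'}, 'a': {'low': '+'},
#  'u': {'low': '-', 'round': '+'}}  -- /a/ needs no rounding specification
```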

Article

Conversational Implicature  

Nicholas Allott

Conversational implicatures (i) are implied by the speaker in making an utterance; (ii) are part of the content of the utterance, but (iii) do not contribute to direct (or explicit) utterance content; and (iv) are not encoded by the linguistic meaning of what has been uttered. In (1), Amelia asserts that she is on a diet, and implicates something different: that she is not having cake.

(1) Benjamin: Are you having some of this chocolate cake?
    Amelia: I’m on a diet.

Conversational implicatures are a subset of the implications of an utterance: namely those that are part of utterance content. Within the class of conversational implicatures, there are distinctions between particularized and generalized implicatures; implicated premises and implicated conclusions; and weak and strong implicatures. An obvious question is how implicatures are possible: how can a speaker intentionally imply something that is not part of the linguistic meaning of the phrase she utters, and how can her addressee recover that utterance content? Working out what has been implicated is not a matter of deduction, but of inference to the best explanation. What is to be explained is why the speaker has uttered the words that she did, in the way and in the circumstances that she did. Grice proposed that rational talk exchanges are cooperative and are therefore governed by a Cooperative Principle (CP) and conversational maxims: hearers can reasonably assume that rational speakers will attempt to cooperate and that rational cooperative speakers will try to make their contribution truthful, informative, relevant and clear, inter alia, and these expectations therefore guide the interpretation of utterances. On his view, since addressees can infer implicatures, speakers can take advantage of their ability, conveying implicatures by exploiting the maxims. Grice’s theory aimed to show how implicatures could in principle arise. In contrast, work in linguistic pragmatics has attempted to model their actual derivation. Given the need for a cognitively tractable decision procedure, both the neo-Gricean school and work on communication in relevance theory propose a system with fewer principles than Grice’s. Neo-Gricean work attempts to reduce Grice’s array of maxims to just two (Horn) or three (Levinson), while Sperber and Wilson’s relevance theory rejects maxims and the CP and proposes that pragmatic inference hinges on a single communicative principle of relevance. Conversational implicatures typically have a number of interesting properties, including calculability, cancelability, nondetachability, and indeterminacy. These properties can be used to investigate whether a putative implicature is correctly identified as such, although none of them provides a fail-safe test. A further test, embedding, has also been prominent in work on implicatures. A number of phenomena that Grice treated as implicatures would now be treated by many as pragmatic enrichment contributing to the proposition expressed. But Grice’s postulation of implicatures was a crucial advance, both for its theoretical unification of apparently diverse types of utterance content and for the attention it drew to pragmatic inference and the division of labor between linguistic semantics and pragmatics in theorizing about verbal communication.

Article

Conversation Analysis  

Jack Sidnell

Conversation analysis is an approach to the study of social interaction and talk-in-interaction that, although rooted in the sociological study of everyday life, has exerted significant influence across the humanities and social sciences, including linguistics. Drawing on recordings (both audio and video) of naturalistic interaction (unscripted, non-elicited, etc.), conversation analysts attempt to describe the stable practices and underlying normative organizations of interaction by moving back and forth between the close study of singular instances and the analysis of patterns exhibited across collections of cases. Four important domains of research within conversation analysis are turn-taking, repair, action formation and ascription, and action sequencing.