Article
Signed Languages in Co-Existence With Germanic Languages: A Typological Perspective
Myriam Vermeerbergen and Elisabeth Engberg-Pedersen
Human natural languages come in two forms: spoken languages and signed languages, which are the visual-gestural languages used mainly by Deaf communities. Modern signed language linguistics only began around 1960. Studies have shown that signed languages share similarities with spoken languages at all levels of linguistic description, but that modality—whether vocal-auditory or visual-gestural—plays a role in some of the differences between spoken and signed languages. For example, signed languages show a more simultaneous organization than spoken languages, and iconicity and the use of space play a more important role. The study of signed languages is therefore an important addition to our knowledge of human language in general. Based on the research already carried out, it seems that different signed languages are structurally more similar to each other than different spoken languages are. The striking similarities between signed languages have been attributed to several factors, including the affordances of the visual-gestural modality. However, more recent research has also shown differences between signed languages. Some of these may be due to independent diachronic changes in individual signed languages, others to influences from spoken languages. Indeed, for most signed languages there is intensive contact with at least one, and sometimes several, spoken languages, which undoubtedly influence the signed languages, especially at the lexical level. However, this influence, whether lexical or grammatical, has so far been explored only to a limited extent. It is particularly interesting to examine the extent to which unrelated signed languages are similar and different, and whether contact with the surrounding spoken languages plays a role in this.
Danish Sign Language and Flemish Sign Language are two signed languages that are not related. By contrast, Danish and Dutch both belong to the Germanic language family, Danish as a North Germanic language, Dutch as a West Germanic language. Some of the features shared by the two signed languages can be explained as modality dependent: they both use spatial morphology to express agreement and complex verbs of motion and location, and both use nonmanual features, that is, facial expression, gaze direction, and head movement, to express, for instance, topicalization and clause boundaries. Other shared features may not be explained as modality dependent in any straightforward way; this is the case with their preference for sentence-final repetition of pronouns and verbs. Moreover, the two signed languages share features that distinguish them from most Germanic languages: they lack a clear subject category and prototypical passive constructions, and they do not have V2-organization with the finite verb in the second position of declarative clauses. Much more research, especially research based on large annotated corpora, is needed to clarify why unrelated signed languages share many grammatical features and how the surrounding spoken languages influence signed languages.
Article
Arthur Abramson
Philip Rubin
Arthur Seymour Abramson (1925–2017) was an American linguist who was prominent in the international experimental phonetics research community. He was best known for his pioneering work, with Leigh Lisker, on voice onset time (VOT), and for his many years spent studying tone and voice quality in languages such as Thai. Born and raised in Jersey City, New Jersey, Abramson served several years in the Army during World War II. Upon his return to civilian life, he attended Columbia University (BA, 1950; PhD, 1960). There he met Franklin Cooper, an adjunct who taught acoustic phonetics while also working for Haskins Laboratories. Abramson started working on a part-time basis at Haskins and remained affiliated with the institution until his death. For his doctoral dissertation (1962), he studied the vowels and tones of the Thai language, which would sit at the heart of his research and travels for the rest of his life. He would expand his investigations to include various languages and dialects, such as Pattani Malay and the Kuai dialect of Suai, a Mon-Khmer language. Abramson began his collaboration with University of Pennsylvania linguist Leigh Lisker at Haskins Laboratories in the 1960s. Using their VOT technique, a sensitive measure of the articulatory timing between an occlusion in the vocal tract and the beginning of phonation (characterized by the onset of vibration of the vocal folds), they studied the voicing distinctions of various languages. Their long-standing collaboration continued until Lisker’s death in 2006. Abramson and colleagues often made innovative use of state-of-the-art tools and technologies in their work, including transillumination of the larynx in running speech, X-ray movies of speakers of several languages and dialects, electroglottography, and articulatory speech synthesis.
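The VOT measure itself lends itself to a simple operational definition: the signed interval between the release of the occlusion and the onset of voicing. The short Python sketch below illustrates that definition with hypothetical timestamps and illustrative category labels; none of the numbers or boundaries come from Abramson and Lisker’s studies, and real boundaries vary across languages.

# Illustrative sketch only (not code from Abramson and Lisker's publications):
# voice onset time (VOT) as the signed interval between the release of an
# oral occlusion and the onset of vocal-fold vibration. All timestamps and
# category boundaries below are hypothetical.

def voice_onset_time_ms(release_s: float, voicing_onset_s: float) -> float:
    """VOT in milliseconds: negative = voicing lead, positive = voicing lag."""
    return (voicing_onset_s - release_s) * 1000.0

def coarse_vot_label(vot_ms: float) -> str:
    """Rough descriptive labels; actual boundaries differ across languages."""
    if vot_ms < 0:
        return "voicing lead (prevoiced)"
    if vot_ms <= 30:
        return "short lag"
    return "long lag"

if __name__ == "__main__":
    # Hypothetical measurements for one stop token, in seconds.
    release_s, voicing_onset_s = 1.250, 1.315
    vot = voice_onset_time_ms(release_s, voicing_onset_s)
    print(f"VOT = {vot:.0f} ms ({coarse_vot_label(vot)})")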
Abramson’s career was also notable for the academic and scientific service roles that he assumed, including membership on the council of the International Phonetic Association (IPA) and service as a coordinator of the effort to revise the International Phonetic Alphabet at the IPA’s 1989 Kiel Convention. He was also editor of the journal Language and Speech, and took on leadership roles at the Linguistic Society of America and the Acoustical Society of America. He was the founding Chair of the Linguistics Department at the University of Connecticut, which became a hotbed for research in experimental phonetics in the 1970s and 1980s because of its many affiliations with Haskins Laboratories. He also served for many years as a board member at Haskins and as Secretary of both the Board and the Haskins Corporation, where he was a friend and mentor to many.
Article
Neurolinguistic Research on the Romance Languages
Valentina Bambini and Paolo Canal
Neurolinguistics is devoted to the study of the language-brain relationship, using the methodologies of neuropsychology and cognitive neuroscience to investigate how linguistic categories are grounded in the brain. Although the brain infrastructure for language is invariant across cultures, neural networks might operate differently depending on language-specific features. In this respect, neurolinguistic research on the Romance languages, mostly French, Italian, and Spanish, has proved key to advancing the field, especially with reference to how the neural infrastructure for language operates in systems more richly inflected than English.
Among the most popular domains of investigation are agreement patterns, where studies on Spanish and Italian showed that agreement across features and domains (e.g., number or gender agreement) engages partially different neural substrates. Also, studies measuring the electrophysiological response suggested that agreement processing is a composite mechanism involving different temporal steps. Another domain is the noun-verb distinction, where studies on the Romance languages indicated that the brain is more sensitive to the greater morphosyntactic engagement of verbs compared with nouns rather than to the grammatical class distinction per se.
Concerning language disorders, the Romance languages shed new light on inflectional errors in aphasic speakers and contributed to revising the notion of agrammatism, which is not simply the omission of morphemes but might involve incorrect substitutions from the inflectional paradigm. Also, research in the Romance domain showed variation in the degree and pattern of reading impairments due to language-specific segmental and suprasegmental features.
Despite these important contributions, the Romance family, with its multitude of languages and dialects and a richly documented diachronic evolution, is still an underutilized ‘treasure house’ for neurolinguistic research, with significant room for investigations exploring the brain signatures of language variation in time and space and refining the links between linguistic categories and neurobiological primitives.
Article
Tongue Muscle Anatomy: Architecture and Function
Maureen Stone
The tongue is composed entirely of soft tissue: muscle, fat, and connective tissue. This unusual composition and the tongue’s 3D muscle fiber orientation result in many degrees of freedom. The lack of bones and cartilage means that muscle shortening creates deformations, particularly local deformations, as the tongue moves into and out of speech gestures. The tongue is also surrounded by the hard structures of the oral cavity, which both constrain its motion and support the rapid small deformations that create speech sounds. Anatomical descriptors and categories of tongue muscles do not correlate with tongue function, as speech movements use finely controlled co-contractions of antagonist muscles to move the oral structures. Tongue muscle volume indicates that four muscles, the genioglossus, verticalis, transversus, and superior longitudinal, occupy the bulk of the tongue. They also comprise a functional muscle grouping that can shorten the tongue in the x, y, and z directions. Various 3D muscle shortening patterns produce large- or small-scale deformations in all directions of motion. The interdigitation of the tongue’s muscles is advantageous in allowing co-contraction of antagonist muscles and providing nimble deformational changes to move the tongue toward and away from any position.
Article
Phonetics of Sign Language
Martha Tyrone
Sign phonetics is the study of how sign languages are produced and perceived, by native as well as by non-native signers. Most research on sign phonetics has focused on American Sign Language (ASL), but there are many different sign languages around the world, and several of these, including British Sign Language, Taiwan Sign Language, and Sign Language of the Netherlands, have been studied at the level of phonetics. Sign phonetics research can focus on individual lexical signs or on the movements of the nonmanual articulators that accompany those signs. The production and perception of a sign language can be influenced by phrase structure, linguistic register, the signer’s linguistic background, the visual perception mechanism, the anatomy and physiology of the hands and arms, and many other factors. What sets sign phonetics apart from the phonetics of spoken languages is that the two language modalities use different mechanisms of production and perception, which could in turn result in structural differences between modalities. Most studies of sign phonetics have been based on careful analyses of video data. Some studies have collected kinematic limb movement data during signing and carried out quantitative analyses of sign production related to, for example, signing rate, phonetic environment, or phrase position. Similarly, studies of sign perception have recorded participants’ ability to identify and discriminate signs, depending, for example, on slight variations in the signs’ forms or differences in the participants’ language background. Most sign phonetics research is quantitative and lab-based.
Article
Cognitively Oriented Theories of Meaning
Peter Gärdenfors
There are two main theoretical traditions in semantics. One is based on realism, where meanings are described as relations between language and the world, often in terms of truth conditions. The other is cognitivistic, where meanings are identified with mental structures. This article presents some of the main ideas and theories within the cognitivist approach.
A central tenet of cognitively oriented theories of meaning is that there are close connections between the meaning structures and other cognitive processes. In particular, parallels between semantics and visual processes have been studied. As a complement, the theory of embodied cognition focuses on the relation between actions and components of meaning.
One of the main methods of representing cognitive meaning structures is to use image schemas and idealized cognitive models. Such schemas focus on spatial relations between various semantic elements. Image schemas are often constructed using Gestalt psychological notions, including those of trajector and landmark, corresponding to figure and ground. In this tradition, metaphors and metonymies are considered to be central meaning-transforming processes.
A related approach is force dynamics. Here, the semantic schemas are construed from forces and their relations rather than from spatial relations. Recent extensions involve cognitive representations of actions and events, which then form the basis for a semantics of verbs.
A third approach is the theory of conceptual spaces. In this theory, meanings are represented as regions of semantic domains such as space, time, color, weight, size, and shape. For example, strong evidence exists that color words in a large variety of languages correspond to such regions. This approach has been extended to a general account of the semantics of some of the main word classes, including adjectives, verbs, and prepositions. The theory of conceptual spaces shows similarities to the older frame semantics and feature analysis, but it puts more emphasis on geometric structures.
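To make the idea of meanings as regions concrete, the following is a minimal Python sketch, under the assumption of a toy three-dimensional color domain with made-up prototype coordinates: each color term is assigned the region of points closest to its prototype, one common way of obtaining convex regions in conceptual-space models. It illustrates the idea only and is not an implementation from the literature.

# Minimal illustrative sketch (a toy under stated assumptions, not Gärdenfors's
# own implementation): word meanings as regions of a conceptual space. Color
# terms are represented by prototype points, and each term's region is the set
# of points closer to its prototype than to any other (a Voronoi tessellation,
# which yields convex regions). The prototype coordinates are made up.

import math

PROTOTYPES = {
    "red":    (1.0, 0.0, 0.0),
    "green":  (0.0, 1.0, 0.0),
    "blue":   (0.0, 0.0, 1.0),
    "yellow": (1.0, 1.0, 0.0),
}

def nearest_term(point):
    """Assign a point in the color domain to the term with the closest prototype."""
    return min(PROTOTYPES, key=lambda term: math.dist(point, PROTOTYPES[term]))

if __name__ == "__main__":
    print(nearest_term((0.9, 0.8, 0.1)))  # a yellowish shade -> "yellow"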
A general criticism against cognitive theories of semantics is that they only consider the meaning structures of individuals, but neglect the social aspects of semantics, that is, that meanings are shared within a community. Recent theoretical proposals counter this by suggesting that semantics should be seen as a meeting of minds, that is, communicative processes that lead to the alignment of meanings between individuals. On this approach, semantics is seen as a product of communication, constrained by the cognitive mechanisms of the individuals.
Article
Biolinguistics
Cedric Boeckx and Pedro Tiago Martins
All humans can acquire at least one natural language. Biolinguistics is the name given to the interdisciplinary enterprise that aims to unveil the biological bases of this unique capacity.