Article
Arthur Abramson
Philip Rubin
Arthur Seymour Abramson (1925–2017) was an American linguist who was prominent in the international experimental phonetics research community. He was best known for his pioneering work, with Leigh Lisker, on voice onset time (VOT), and for his many years spent studying tone and voice quality in languages such as Thai. Born and raised in Jersey City, New Jersey, Abramson served several years in the Army during World War II. Upon his return to civilian life, he attended Columbia University (BA, 1950; PhD, 1960). There he met Franklin Cooper, an adjunct professor who taught acoustic phonetics while also working for Haskins Laboratories. Abramson started working part-time at Haskins and remained affiliated with the institution until his death. For his doctoral dissertation (1962), he studied the vowels and tones of the Thai language, which would sit at the heart of his research and travels for the rest of his life. He would expand his investigations to include various languages and dialects, such as Pattani Malay and the Kuai dialect of Suai, a Mon-Khmer language. Abramson began his collaboration with University of Pennsylvania linguist Leigh Lisker at Haskins Laboratories in the 1960s. Using VOT, a sensitive measure of the articulatory timing between the release of an occlusion in the vocal tract and the beginning of phonation (characterized by the onset of vibration of the vocal folds), they studied the voicing distinctions of various languages. Their long-standing collaboration continued until Lisker’s death in 2006. Abramson and colleagues often made innovative use of state-of-the-art tools and technologies in their work, including transillumination of the larynx in running speech, X-ray movies of speakers of several languages and dialects, electroglottography, and articulatory speech synthesis.
Abramson’s career was also notable for the academic and scientific service roles that he assumed, including membership on the council of the International Phonetic Association (IPA) and service as a coordinator of the effort to revise the International Phonetic Alphabet at the IPA’s 1989 Kiel Convention. He was also editor of the journal Language and Speech, and took on leadership roles at the Linguistic Society of America and the Acoustical Society of America. He was the founding Chair of the Linguistics Department at the University of Connecticut, which became a hotbed of research in experimental phonetics in the 1970s and 1980s because of its many affiliations with Haskins Laboratories. He also served for many years as a board member at Haskins and as Secretary of both the Board and the Haskins Corporation, where he was a friend and mentor to many.
Article
Signed Languages in Co-Existence With Germanic Languages: A Typological Perspective
Myriam Vermeerbergen and Elisabeth Engberg-Pedersen
Human natural languages come in two forms: spoken languages and signed languages, the visual-gestural languages used mainly by Deaf communities. Modern signed language linguistics began only around 1960. Studies have shown that signed languages share similarities with spoken languages at all levels of linguistic description, but that modality—whether vocal-auditory or visual-gestural—plays a role in some of the differences between spoken and signed languages. For example, signed languages show a more simultaneous organization than spoken languages, and iconicity and the use of space play a more important role. The study of signed languages is therefore an important addition to our knowledge of human language in general. Based on the research carried out so far, it seems that different signed languages are structurally more similar to one another than different spoken languages are. The striking similarities between signed languages have been attributed to several factors, including the affordances of the visual-gestural modality. However, more recent research has also shown differences between signed languages. Some of these may be due to independent diachronic changes in individual signed languages, others to influences from spoken languages. Indeed, most signed languages are in intensive contact with at least one, and sometimes several, spoken languages, which undoubtedly influence them, especially at the lexical level. However, this influence, whether lexical or grammatical, has been explored only to a limited extent. It is particularly interesting to examine the extent to which unrelated signed languages are similar and different, and whether contact with the surrounding spoken languages plays a role in this.
Danish Sign Language and Flemish Sign Language are two signed languages that are not related. By contrast, Danish and Dutch both belong to the Germanic language family, Danish as a North Germanic language and Dutch as a West Germanic language. Some of the features shared by the two signed languages can be explained as modality dependent: both use spatial morphology to express agreement and complex verbs of motion and location, and both use nonmanual features, that is, facial expression, gaze direction, and head movement, to express, for instance, topicalization and clause boundaries. Other shared features cannot be explained as modality dependent in any straightforward way; this is the case with their preference for sentence-final repetition of pronouns and verbs. Moreover, the two signed languages share features that distinguish them from most Germanic languages: they lack a clear subject category and prototypical passive constructions, and they do not have V2 organization, with the finite verb in second position in declarative clauses. Much more research, especially research based on large annotated corpora, is needed to clarify why unrelated signed languages share so many grammatical features and how spoken languages influence signed languages.