1-12 of 12 Results

  • Keywords: digital humanities

Article

Marlene Manoff

Archives and libraries operate within a complex web of social, political, and economic forces. The explosion of digital technologies, globalization, economic instability, consolidation within the publishing industry, increasing corporate control of the scholarly record, and the shifting copyright landscape are just some of the myriad forces shaping their evolution. Libraries and archives in turn have shaped the production of knowledge, participating in transformations in scholarship, publishing, and the nature of access to current and historical materials. Librarians and archivists increasingly recognize that they exist within institutional systems of power. Questioning long-held assumptions about library and archival neutrality and objectivity, they are working to expand access to previously marginalized materials, to educate users about the social and economic forces shaping their access to information, to raise awareness about bias in information tools and systems, and to empower disenfranchised communities. New technologies are transforming the practices of librarians and archivists as they restructure bibliographic systems for collecting, storing, and accessing information. Digitization has vastly expanded the volume of material libraries and archives make available to their communities. It has enabled the creation of tools to read or decipher material thought to have been damaged beyond repair as well as tools to annotate, manipulate, map, and mine a wide variety of textual and visual resources. Digitization has enhanced scholarship by expanding opportunities for collaboration and by altering the scale of potential research. Scholars have the ability to perform computational analyses on immense numbers of images and texts. 
Nevertheless, new technologies have also presaged a greater commodification of information, a worsening of the crisis in scholarly communication, the creation of platforms rife with hidden bias, fake news, plagiarism, surveillance, harassment, and security breaches. Moreover, the digital record is less stable than the printed record, complicating the development of systems for organizing and preserving information. Archivists and librarians are addressing these issues by acquiring new technical competencies, by undertaking a range of social and materialist critiques, and by promoting new information literacies to enable users to think critically about the political and social contexts of information production. In most 21st-century archives and libraries, traditional systems for stewarding analog materials coexist with newly developing methods for acquiring and preserving a range of digital formats and genres. Libraries provide access to printed books, journals, magazines, e-books, e-journals, databases, data sets, audiobooks, streaming audio and video files, as well as various other digital formats. Archives and special collections house rare and unique books and artifacts, paper and manuscript collections as well as their digital equivalents. Archives focus on permanently valuable records, including accounts, reports, letters, and photographs that may be of continuing value to the organizations that have created them or to other potential users.

Article

Lutz Koepnick

Digital reading has been an object of fervent scholarly and public debates since the mid-1990s. Often digital reading has been associated solely with what may happen between readers and screens, and in dominant approaches digital reading devices have been seen as producing radically different readers than printed books produce. Far from reducing digital reading to a mere matter of what e-books might do to the attention spans of individual readers, however, contemporary critiques emphasize how digital computing affects and is affected by neurological, sensory, kinetic, and apparatical processes. The future of reading has too many different aspects to be discussed by scholars of one discipline or field of study alone. Digital reading is as much a matter for neurologists as for literary scholars, for engineers as much as for ergonomists, for psychologists, physiologists, media historians, art critics, critical theorists, and many others. Scholars of literature will need to consult many fields to elaborate a future poetics of digital reading and examine how literary texts in all their different forms are and will be met by 21st-century readers.

Article

Natural language generation (NLG) refers to the process in which computers produce output in readable human languages (e.g., English, French). Despite sounding as though they are contained within the realm of science fiction, computer-generated texts actually abound; business performance reports are generated by NLG systems, as are tweets and even works of longform prose. Yet many readers are altogether unaware of the increasing prevalence of computer-generated texts. Moreover, there has been limited scholarly consideration of the social and literary implications of NLG from a humanities perspective, despite NLG systems being in development for more than half a century. This article serves as one such consideration. Human-written and computer-generated texts represent markedly different approaches to text production that necessitate distinct approaches to textual interpretation. Characterized by production processes and labor economies that at times seem inconsistent with those of print culture, computer-generated texts bring conventional understandings of the author-reader relationship into question. But who—or what—is the author of the computer-generated text? This article begins with an introduction to NLG as it has been applied to the production of public-facing textual output. NLG’s unique potential for textual personalization is observed. The article then moves toward a consideration of authorship as the concept may be applied to computer-generated texts, citing historical and current legal discussions, as well as various interdisciplinary analyses of authorial attribution. This article suggests a semantic shift from considering NLG systems as tools to considering them as social agents in themselves: not to render human writers obsolete, but to recognize the particular contributions of NLG systems to the current socio-literary landscape. As this article shows, texts are regarded as fundamentally human artifacts. 
A computer-generated text is no less a human artifact than a human-written text, but its unconventional manifestation of humanity prompts calculated contemplation of what authorship means in an increasingly digital age.
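The kind of data-to-text generation described in the abstract above (structured business data in, readable prose out) can be illustrated with a minimal template-based sketch. This is a hypothetical toy, not the method of any system mentioned in the article: the company name, figures, and function names are invented, and production NLG systems add elaborate content-determination, aggregation, and surface-realization stages on top of anything this simple.

```python
# Illustrative template-based NLG: turn structured business metrics into an
# English sentence, choosing the verb by the direction of change.
# All names and figures here are hypothetical.

def generate_report(metrics: dict) -> str:
    """Render quarterly revenue figures as a readable English sentence."""
    change = metrics["revenue"] - metrics["prior_revenue"]
    if change == 0:
        return f"{metrics['company']} revenue held steady at ${metrics['revenue']:,}."
    verb = "rose" if change > 0 else "fell"
    return (f"{metrics['company']} revenue {verb} by ${abs(change):,} "
            f"to ${metrics['revenue']:,} this quarter.")

print(generate_report(
    {"company": "Acme Corp", "revenue": 1_200_000, "prior_revenue": 1_000_000}
))
# → Acme Corp revenue rose by $200,000 to $1,200,000 this quarter.
```

Even a sketch this small makes the abstract's authorship question concrete: the sentence the reader receives was "written" jointly by the template's human author and by the data that selected among its branches.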

Article

E-text  

Niels Ole Finnemann

Electronic text can be defined on two different, though interconnected, levels. On the one hand, electronic text can be defined by taking the notion of “text” or “printed text” as the point of departure. On the other hand, electronic text can be defined by taking the digital format as the point of departure, where everything is represented in the binary alphabet. While the notion of text in most cases lends itself to being independent of medium and embodiment, it is also often tacitly assumed that it is in fact modeled on the print medium, instead of, for instance, on hand-written text or speech. In the late 20th century, the notion of “text” was subjected to increasing criticism, as can be seen in the question that has been raised in literary text theory about whether “there is a text in this class.” At the same time, the notion was expanded by including extralinguistic sign modalities (images, videos). A basic question, therefore, is whether electronic text should be included in the enlarged notion of text as a new digital sign modality added to the repertoire of modalities, or whether it should be included as a sign modality that is both an independent modality and a container that can hold other modalities. In the first case, the notion of electronic text would be paradigmatically formed around the e-book, which was conceived as a digital copy of a printed book and remains a deliberately closed work. Even closed works in digital form will need some sort of interface and hypertextual navigation that together constitute a particular kind of paratext needed for accessing any sort of digital material. In the second case, the electronic text is defined by the representation of content and (some parts of the) processing rules as binary sequences manifested in the binary alphabet. 
This wider notion would include, for instance, all sorts of scanning results, whether of the outer cosmos or the interior of our bodies and of digital traces of other processes in-between (machine readings included). Since other alphabets, such as the genetic alphabet, and all sorts of images may also be represented in the binary alphabet, such materials will also belong to the textual universe within this definition. A more intriguing implication is that born-digital materials may also include scripts and interactive features as intrinsic parts of the text. The two notions define the text on different levels: one is centered on the Latin, the other on the binary alphabet, and both definitions include hypertext, interactivity, and multimodality as constituent parameters. In the first case, hypertext is included as a navigational, paratextual device; whereas in the second case, hypertext is also incorporated in the narrative within an otherwise closed work or as a constituent element in the textual universe of the web, where it serves the ongoing production of (possibly scripted) connections and disconnections between blocks of textual content. Since the early decades of the 21st century still represent only the very early stages of the globally distributed universe of web texts, this is also a history of the gradual unfolding of the dimensions of these three constituents—hypertext, interactivity, and multimodality. The result is a still-expanding repertoire of genres, including some that are emerging via path dependency; some via remediation; and some as new genres that are unique to networked digital media, including “social media texts” and a growing variety of narrative and discursive multiple-source systems.

Article

Adam Hammond

The concept of remediation, as elaborated by Jay David Bolter and Richard Grusin in Remediation: Understanding New Media (1999), is premised on the notion that media are best understood in interaction rather than in isolation. Every artistic medium, they argue, orients itself in relation to another medium, whether respectfully—as in the case of an online literary database that seeks to provide easy access to faithful facsimiles of manuscripts—or competitively—as with a videogame that seeks to replace the linearity and passivity of print with open-ended interactivity. Bolter and Grusin describe individual media in terms of two basic impulses: immediacy, or the attempt to erase the mediating function and present the illusion of directly represented reality; and hypermediacy, or the attempt to foreground the mediating function, exposing the impossibility of direct representation. They employ the same vocabulary to describe the interaction of media. Every act of remediation—every representation of one medium in another—necessarily involves both immediacy and hypermediacy. A digital edition of a literary text grants access to the words of the original print artifact (immediacy), yet by including audio readings and video commentary draws attention to its digital-specific affordances (hypermediacy). A digital archive gathers together high-resolution, color-accurate reproductions of materials scattered in rare-book libraries around the world (immediacy), yet by granting free and instantaneous access to these precious, fragile objects, fundamentally transforms the experience of engaging with their analog originals (hypermediacy). Insisting that one approach media through interaction, Bolter and Grusin’s theory of remediation positions the movement of content from one medium to another as a form of translation—a transformative act in which much is lost as well as gained.

Article

Ben Grant

Anthologies, in the broadest sense of collections of independent texts, have always played an important role in preserving and spreading the written word, and collections of short forms, such as proverbs, wise sayings, and epigraphs, have a long history. The literary anthology, however, is of comparatively recent provenance, having come to prominence only during the long 18th century, when the modern concept of “literature” itself emerged. Since that time, it has been a fundamental part of literary culture: not only have literary texts been published in anthologies, but also the genre of the anthology has done much to shape their form and content, and to influence the ways in which they are read and taught, particularly as literary criticism has developed in tandem with the rise of the anthology. The anthology has also stimulated innovation in many periods and places by providing a model for writers of different genres of literature to emulate, and it has been argued that the form of the novel is much indebted to the anthology. This is connected to its close association with the figure of the reader. Furthermore, anthologies have helped to define what literature is, and been crucial to the canonization of texts, authors, and genres, and the consolidation of literary traditions. It is therefore not surprising that they were at the heart of the theoretical and pedagogical debates within literary studies known as the canon wars, which raged during the 1980s and 1990s. In this role, they contributed much to discussions concerning the theories and politics of identity, and to such approaches as feminism and race studies. The connection between the anthology and literary theory extends beyond this, however: theory itself has been subject to widespread anthologization, which has affected its practice and reception; the form of theoretical writing can in certain respects be understood as anthological; and the anthology is an important object of theoretical attention. 
For instance, given the potential which the digital age holds to transform how texts are disseminated and consumed, and the importance of finding ways to classify and navigate the digital archive, anthology studies is likely to figure prominently in the digital humanities.

Article

Patrick Jagoda

Networks influence practically every subfield of literary studies. Unlike hierarchies and centralized structures, networks connote decentralization and distribution. The abstraction of this form makes it applicable to a wide variety of phenomena. For example, the metaphor and form of the network informs the way we think about communication systems in early American writing, social networks in Victorian novels, transnational circulation in postcolonial literature, and computer networks in late 20th-century cyberpunk fiction. Beyond traditional literary genres, network form is also accessible through comparative media analysis. Films, television serials, video games, and transmedia narratives may represent or evoke network structures through medium-specific techniques. The juxtaposition of different literary and artistic forms, across media, helps to defamiliarize network forms and make these complex structures available to thought. Across subfields of literary studies, critics may be drawn to networks because of their resonance with histories of the present and contemporary technoscience. Scholars may also recognize the sense of complexity and interconnection inherent in networks, which resonates with experiences of intertextuality and close reading itself. In addition to studying representations of networks, literary critics employ a variety of network-related methods. These approaches include historicist scholarship that uses network structures to think about social organization and communication in different eras, quantitative digital humanities tools that map networks of literary circulation, qualitative sociology of literature and reader-response theory that analyze networks of readers and publishers, and formalist work that compares network and aesthetic forms.

Article

Harry Lönnroth

Philology—from the Greek philología < philos “friend” and logos “word”—is a multi-faceted field of scholarship within the humanities which in its widest sense focuses on questions of time, history, and literature—with language as the common denominator. Philology is both an academic discipline—there is classical philology, Romance philology, Scandinavian philology, etc.—and a scholarly perspective on language, literature, and culture. The roots of philology go back all the way to the Library of Alexandria, Egypt, where philology began to evolve into a field of scholarship around 300 BCE. In Alexandria, the foundations of philology were laid for centuries to come, for example as regards one of its major branches, textual criticism. A characteristic feature of philology past and present is that it focuses on texts in time from an interdisciplinary point of view, which is why philology as an umbrella term is relevant for many fields of scholarship in the 21st century. According to a traditional definition, a philologist is interested in the relationship between language and culture, and by means of language, he or she aims to understand the characteristics of the culture the language reflects. From this point of view, language is mainly a medium. In the analysis of (mostly very old) texts, a philologist often crosses disciplinary borders of different kinds—anthropology, archaeology, ethnology, folkloristics, history, etc.—and makes use of other special fields within manuscript studies, such as codicology (the archaeology of the book), diplomatics (the analysis of documents), paleography (the study of handwriting), philigranology (the study of watermarks), and sphragistics (the study of seals). For a philologist, texts and their languages and contents bear witness to past times, and the philologist’s perspective is often a wide one. 
The expertise of a philologist is the ability to analyze texts in their cultural-historical contexts, not only from a linguistic perspective (which is a prerequisite for a deep understanding of a text), but also from a cultural and historical perspective, and to explain the role of a text in its cultural-historical setting. In the course of history, philologists have made several contributions to our knowledge of ancient and medieval texts and writing, for example. In the 2010s, the focus of philology has been, for example, on the so-called New Philology or Material Philology and on digital philology, but the core of philology remains the same: philology is the art of reading slowly.

Article

Daniel Tiffany

Lyric poetry is an ancient genre, enduring to the present day, but it is not continuous in its longevity. What happens to lyric poetry and how it changes during its numerous and sometimes lengthy periods of historical eclipse (such as the 18th century) may be as important to our understanding of lyric as an assessment of its periods of high achievement. For it is during these periods of relative obscurity that lyric enters into complex relations with other genres of poetry and prose, affirming the general thesis that all genres are relational and porous. The question of whether any particular properties of lyric poetry endure throughout its 2,700-year checkered history can be addressed by examining its basic powers: its forms; its figurative and narrative functions; and its styles and diction. The hierarchy of these functions is mutable, as one finds in today’s rift between a scholarly revival of formalist analysis and the increasing emphasis on diction in contemporary poetry. As a way of assessing lyric poetry’s basic operations, the present article surveys the ongoing tension between form and diction by sketching a critique of the tenets of New Formalism in literary studies, especially its presumptions about the relation of poetic form to the external world and its tendency to subject form to close analysis, as if it could yield, like style or diction, detailed knowledge of the world. Long overshadowed by the doctrinal tenets of modernist formalism, the expressive powers of diction occupy a central place in contemporary concerns about identity and social conflict, at the same time that diction (unlike form) is especially susceptible to the vocabularistic methods of “distant reading”—to the computational methods of the digital humanities. 
The indexical convergence of concreteness and abstraction, expression and rationalism, proximity and distance, in these poetic and scholarly experiments with diction points to precedents in the 18th century, when the emergence of Anglophone poetries in the context of colonialism and the incorporation of vernacular languages into poetic diction (via the ballad revival) intersected with the development of modern lexicography and the establishment of Standard English. The nascent transactions of poetics and positivism through the ontology of diction in the 21st century remind us that poetic diction is always changing but also that the hierarchy of form, figuration, and diction in lyric poetry inevitably shifts over time—a reconfiguration of lyric priorities that helps to shape the premises and methods of literary studies.

Article

Christopher B. Patterson

Asian Americans have frequently been associated with video games. As designers they are considered overrepresented, and specific groups appear to dominate depictions of the game designer, from South Asian and Chinese immigrants working for Microsoft and Silicon Valley to auteur designers from Japan, Taiwan, and Iran, who often find themselves with celebrity status in both America and Asia. As players, Asian Americans have been depicted as e-sports fanatics whose association with video game expertise—particularly in games like Starcraft, League of Legends, and Counter-Strike—is similar to sport-driven associations of racial minorities: African Americans and basketball or Latin Americans and soccer. This immediate association of Asian Americans with gaming cultures breeds a particular form of techno-orientalism, defined by Greta A. Niu, David S. Roh, and Betsy Huang as “the phenomenon of imagining Asia and Asians in hypo- or hypertechnological terms in cultural productions and political discourse.” In sociology, Asian American Studies scholars have considered how these gaming cultures respond to a lack of acceptance in “real sports” and how Asian American youth have fostered alternative communities in PC rooms, arcades, and online forums. For still others, this association also acts as a gateway for non-Asians to enter a “digital Asia,” a space whose aesthetics and forms are firmly intertwined with Japanese gaming industries, thus allowing non-Asian subjects to inhabit “Asianness” as a form of virtual identity tourism. From a game studies point of view, video games as transnational products using game-centered (ludic) forms of expression push scholars to think beyond the limits of Asian American Studies and subjectivity. Unlike films and novels, games do not rely upon representations of minority figures for players to identify with, but instead offer avatars to play with through styles of parody, burlesque, and drag. 
Games do not communicate through plot and narrative so much as through procedures, rules, and boundaries so that the “open world” of the game expresses political and social attitudes. Games are also not nationalized in the same way as films and literature, making “Asian American” themes nearly indecipherable. Games like Tetris carry no obvious marks of their national origin (in this case, Russian), while games like Call of Duty and Counter-Strike do not explicitly reveal or rely upon the ethnic identities of their Asian North American designers. Games challenge Asian American Studies as transnational products whose authors do not identify explicitly as Asian American, and as a form of artistic expression that cannot be analyzed with the same reliance on stereotypes, tropes, and narrative. It is difficult to think of “Asian American” in the traditional sense with digital games. Games provide ways of understanding the Asian American experience that challenge traditional meanings of being Asian American, while also offering alternative forms of community through transethnic (not simply Asian) and transnational (not simply American) modes of belonging.

Article

Peta Mitchell

Since around 1970, and across a broad spectrum of humanities and social sciences disciplines, there has been an ongoing and critical reassessment of the role played by space, place, and geography in the formation and unfolding of human knowledge, subjectivity, and social relations. Starting with the identification of a distinctive “spatial turn” within critical and social theory in the second half of the 20th century, it has become a commonplace to recognize space as being political and as having a particular affective and effective power. A distinctive constellation of socio-technological changes at the start of the 20th century brought the question of space to the critical foreground, and, by the end of the 20th century, a loosely defined and interdisciplinary “spatial theory” had emerged, while a number of fields across the humanities and social sciences had avowedly undergone their own “spatial turns.” More recently, new critical approaches have emerged that foreground the geo- as both a starting point and method for critical analysis as well as new inter-disciplines—namely the geohumanities and spatial humanities—that provide a focus for the range of work being done at the interstices of geography and the humanities. With the rise to ubiquity of geospatial and geolocative technologies since around 2005—and their almost wholesale penetration into everyday life in the global North in the form of the GPS-enabled smartphone—the question of the geo- and its role in locating and mediating human experience, knowledge, and social relations has become ever more salient. In an era where the geo- becomes geolocation, and is increasingly defined by networked relations among humans, digital media, and their locational data traces, new approaches and schools of thought that transect geography, digital media, and critical and cultural theory have once more emerged, constituting what may be thought of as a new, digital spatial turn. 
Charting the trajectory of the geo- as a key site and mode of critique across and through these often overlapping “spatial turns”—across time, space, and disciplinary boundaries—is itself a work of geolocation.

Article

Susan David Bernstein and Julia McCord Chavez

Serialization, a publication format that came to dominate the Victorian literary marketplace following its deft adoption by marketing master Charles Dickens in the 1830s, is a transcendent form. It moves across not only print formats and their temporal cycles of distribution (daily or weekly installments in periodicals, monthly part-issue numbers, volumes), but also historical time and place. The number and varieties of serial publications multiplied during the middle of the 19th century due to the improved technology of printing, the cheaper cost of paper production, and the abolition of taxes on advertising. Moreover, serialization continues to be a staple in popular culture today; the long-form serial on television may be the most obvious descendant of the Victorian novel issued in parts. The history of the Victorian serial in its many forms spans from its roots in the 18th century to its reconfiguration following the advent of radio, television, and the internet. The most prevalent accounts of the serial have focused on the economics of the literary marketplace and print culture including the sharp increase of periodicals at midcentury. In recent years, scholars have come to understand the serial as a reflection of historically specific concepts of time and space, as an important location of experimentation and collaboration, as a book technology that fosters critical thinking and active reading, and as an object of transatlantic, even global, circulation. New studies of serial forms include digital approaches to analysis, web-based resources that facilitate serial reading, and comparative work on 21st-century media that underscores the continued role of serialization to create imagined communities within cultural life.