1-20 of 20 Results for:

  • Literary Theory
  • Print Culture and Digital Humanities

Article

Ross Posnock

Like cosmopolitan, sophistication is a fighting word in American culture, a word that discomfits and raises eyebrows. It is not who we are, as President Obama used to say, for it smacks of elitism. Whereas the first word has had a stormy modern history—Stalin, for instance, used cosmopolitan as a code word for Jew—sophistication has always kept bad company, starting with its etymology. Its first six letters saddle it with sophistry, and both are tarred with the same brush of suspicion. Sophistry was a form of rhetoric that attracted the enmity of Socrates and Plato, with repercussions deep into the 17th century. In 1689, when John Locke said rhetoric trafficked in error and deceit, he was echoing the Greeks who tended to dismiss the art of persuasion and eloquence in general as sophistry, morally debased discourse. In the West, rhetoric, sophistry, and sophistication are arraigned as a shared locus of antinature: empty style, deceptive artifice, effeminate preening. They all testify to the deforming demands of social life, the worldliness disdained by Christian moralists, starting with Augustine, as concupiscence. This is the fall into sin from the prelapsarian transparency of Adam and Eve’s spiritual union of pure intellection with God, the perfection of reason that permits transcendence of the bodily senses. The corporeal senses and imagination dominate when man gives himself over to the world’s noise and confusion and is distracted from self-communion in company with God.
Given that sophistication’s keynote is effortless ease, from the point of view of Augustinian Christianity such behavior in a basic sense violates Christian humility after the fall: with man’s loss of repose in God comes permanent uneasiness, inquiétude as Blaise Pascal and Michel de Montaigne put it, a chronic dissatisfaction and ennui that seeks relief in trivial divertissement (distraction), convictions that Montesquieu, Locke, and Tocqueville drew on for their root assumptions about how secular political institutions shape their citizens’ psyches. American Puritanism is in part an “Augustinian strain of piety,” as Perry Miller showed in his classic study, The New England Mind, hence suspicious of any distraction from worship of God. Puritans banned theaters two years after the nation was founded. Keeping vigilant watch over stirrings of New World worldliness, they permanently placed sophistication in the shadow of a double burden: Christian interdiction on top of the pre-Christian opprobrium heaped on sophistic rhetoric. Only by the mid-19th century does sophistication finally shed, though never definitively, sophistry’s fraudulence and deception and acquire positive qualities—worldly wisdom, refinement, subtlety, expertise. The year 1850 is the earliest positive use the Oxford English Dictionary lists, instanced by a sentence from Leigh Hunt’s Autobiography: “A people who . . . preserve in the very midst of their sophistication a frankness distinct from it.”

Article

Ben Grant

Anthologies, in the broadest sense of collections of independent texts, have always played an important role in preserving and spreading the written word, and collections of short forms, such as proverbs, wise sayings, and epigrams, have a long history. The literary anthology, however, is of comparatively recent provenance, having come to prominence only during the long 18th century, when the modern concept of “literature” itself emerged. Since that time, it has been a fundamental part of literary culture: not only have literary texts been published in anthologies, but also the genre of the anthology has done much to shape their form and content, and to influence the ways in which they are read and taught, particularly as literary criticism has developed in tandem with the rise of the anthology. The anthology has also stimulated innovation in many periods and places by providing a model for writers of different genres of literature to emulate, and it has been argued that the form of the novel is much indebted to the anthology. This is connected to its close association with the figure of the reader. Furthermore, anthologies have helped to define what literature is, and they have been crucial to the canonization of texts, authors, and genres, and to the consolidation of literary traditions. It is therefore not surprising that they were at the heart of the theoretical and pedagogical debates within literary studies known as the canon wars, which raged during the 1980s and 1990s. In this role, they contributed much to discussions concerning the theories and politics of identity, and to such approaches as feminism and race studies. The connection between the anthology and literary theory extends beyond this, however: theory itself has been subject to widespread anthologization, which has affected its practice and reception; the form of theoretical writing can in certain respects be understood as anthological; and the anthology is an important object of theoretical attention.
For instance, given the potential which the digital age holds to transform how texts are disseminated and consumed, and the importance of finding ways to classify and navigate the digital archive, anthology studies is likely to figure prominently in the digital humanities.

Article

Nicholas Dames

First known as a kephalaion in Greek, capitulum or caput in Latin, the chapter arose in antiquity as a finding device within long, often heterogeneous prose texts, prior even to the advent of the codex. By the 4th century CE, it was no longer unusual for texts to be composed in capitula; but it is with the advent of the fictional prose narratives we call the novel that the chapter, both ubiquitous and innocuous, developed into a compositional practice with a distinct way of thinking about biographical time. A technique of discontinuous reading or “consultative access” which finds a home in a form for continuous, immersive reading, the chapter is a case study in adaptive reuse and slow change. One of the primary ways the chapter became a narrative form rather than just an editorial practice is through the long history of the chaptering of the Bible, particularly the various systems for chaptering the New Testament, which culminated in the early-13th-century formation of the biblical chaptering system still in use across the West. Biblical chapters formed a template for how to segment ongoing plots or actions which was taken up by writers, printers, and editors from the late medieval period onward; pivotal examples include William Caxton’s chaptering of Thomas Malory’s Morte d’Arthur in his 1485 printing of the text, or the several mises en proses of Chrétien de Troyes’s poems carried out in the Burgundian court circle of the 15th century. By the 18th century, a vibrant set of discussions, controversies, and experiments with chapters was characteristic of the novel form, which increasingly used chapter titles and chapter breaks to meditate upon how different temporal units understand human agency in different ways.
With the eventual dominance of the novel in 19th-century literary culture, the chapter had been honed into a way of thinking about the segmented nature of biographical memory, as well as the temporal frames—the day, the year, the episode or epoch—in which that segmenting occurs; chapters in this period were of an increasingly standard size, although still lacking any formal rules or definition. Modernist prose narratives often played with the chapter form, expanding it or drastically shortening it, but these experiments usually tended to reaffirm the unit of the chapter as a significant measure by which we make sense of human experience.

Article

Mark Byron

Close reading describes a set of procedures and methods that distinguishes the scholarly apprehension of textual material from the more prosaic reading practices of everyday life. Its origins and ancestry are rooted in the exegetical traditions of sacred texts (principally from the Hindu, Jewish, Buddhist, Christian, Zoroastrian, and Islamic traditions) as well as the philological strategies applied to classical works such as the Homeric epics in the Greco-Roman tradition, or the Chinese 詩經 (Shijing) or Classic of Poetry. Cognate traditions of exegesis and commentary formed around Roman law and the canon law of the Christian Church, and they also find expression in the long tradition of Chinese historical commentaries and exegeses on the Five Classics and Four Books. As these practices developed in the West, they were adapted to medieval and early modern literary texts from which the early manifestations of modern secular literary analysis came into being in European and American universities. Close reading comprises the methodologies at the center of literary scholarship as it developed in the modern academy over the past one hundred years or so, and has come to define a central set of practices that dominated scholarly work in English departments until the turn to literary and critical theory in the late 1960s. This article provides an overview of these dominant forms of close reading in the modern Western academy. The focus rests upon close reading practices and their codification in English departments, although reference is made to non-Western reading practices and philological traditions, as well as to significant nonanglophone alternatives to the common understanding of literary close reading.

Article

Kim Treiger-Bar-Am

Copyright gives an author control over the presentation of her work. Economic rights afford control over copies, and the noneconomic rights known as moral rights afford control over changes. An author’s moral rights remain with her even after she sells her economic rights in copyright. The excessive control that copyright offers to copyright owners may be limited by cementing these authorial rights for all authors. Some elements of copyright law allow the meaning of a work as perceived by its audience to develop and evolve. The strengthening of that support by extending rights to the public will further restrict copyright’s excesses.

Article

Charlie Blake

From its emergence and early evolution in and through the writings of Immanuel Kant, Ludwig Feuerbach, and Karl Marx, critique established its parameters very early on as both porous and dynamic. Critique has always been, in this sense, mutable, directed, and both multidisciplinary and transdisciplinary, and this very fluidity and flexibility of its processes is possibly among the central reasons for its continuing relevance even when it has been dismantled, rebuffed, and attacked for embodying traits, from gender bias to Eurocentrism to neuro-normativity, that seem to indicate the very opposite of that flexibility. Indeed, once it is examined closely as an apparatus, the mechanism of critique will invariably reveal itself as having always contained the tools for its own opposition and even the tools for its own destruction. Critique has in this way always implied both its generality as a form and autocritique as an essential part of its process. For the past two centuries this general, self-reflective, and self-dismantling quality has led to its constant reinvention and re-adaptation by a wide range of thinkers and writers and across a broad range of disciplines. In the case of literature and literary theory, its role can often best be grasped as that of a meta-discourse in which the nature and purpose of literary criticism is shadowed, reflected upon, and performed. From this perspective, from critique’s 18th-century gestation in the fields of theology and literary criticism to its formalization by Kant, the literary expression of critique has always been bound up with debates over the function of literary texts, their history, their production, their consumption, and their critical evaluation. In the early 21st century, having evolved from its beginnings through and alongside various forms of anticritique in the 20th century, critique now finds itself in an age that favors some variant or other of postcritique.
It remains to be seen whether this tendency, which suggests its obsolescence and supersession, marks the end of critique as some would wish or merely its latest metamorphosis and diversification in response to the multivalent pressures of digital acceleration and ecological crisis. Whatever path or paths contemporary judgment on this question may follow, critique as the name of a series of techniques and operations guided by a desire for certain ends is likely to remain one of the most consistent ways of surveying any particular field of intellectual endeavor and the relations between adjacent or even divergent fields in terms of their commonalities and differences. As Kant and Voltaire understood so well of their own age, modernity is characterized in the first instance by its will to criticism and then by the systematic criticism of the conditions for that criticism. By the same token now in late or post- or neo-modernity, if contemporary conversations about literature and its pleasures, challenges, study, and criticism require an overview, then some version of critique or its legacy will undoubtedly still come into play.

Article

Simon Burrows and Michael Falk

The article offers a definition, overview, and assessment of the current state of digital humanities, particularly with regard to its actual and potential contribution to literary studies. It outlines the history of humanities computing and digital humanities and its evolution as a discipline, including its institutional development and the outstanding challenges it faces. It also considers some of the most cogent critiques digital humanities has faced, particularly from North American-based literary scholars, some of whom have suggested it represents a threat to centuries-old traditions of humanistic inquiry and particularly to literary scholarship based on the tradition of close reading. The article shows instead that digital humanities approaches, gainfully employed, offer powerful new means of illuminating both the context and the content of texts, to assist with both close and distant readings, offering a supplement rather than a replacement for traditional means of literary inquiry. The digital techniques it discusses include stylometry, topic modeling, literary mapping, historical bibliometrics, corpus linguistic techniques, and sequence alignment, as well as some of the contributions that they have made. Further, the article explains how many key aspirations of digital humanities scholarship, including interoperability and linked open data, have yet to be realized, and it considers some of the projects that are currently making this possible and the challenges that they face. The article concludes on a slightly cautionary note: What are the implications of the digital humanities for literary study? It is too early to tell.

Article

E-text  

Niels Ole Finnemann

Electronic text can be defined on two different, though interconnected, levels. On the one hand, electronic text can be defined by taking the notion of “text” or “printed text” as the point of departure. On the other hand, electronic text can be defined by taking the digital format as the point of departure, where everything is represented in the binary alphabet. While the notion of text in most cases lends itself to being independent of medium and embodiment, it is also often tacitly assumed that it is in fact modeled on the print medium, instead of, for instance, on hand-written text or speech. In the late 20th century, the notion of “text” was subjected to increasing criticism, as can be seen in the question that has been raised in literary text theory about whether “there is a text in this class.” At the same time, the notion was expanded by including extralinguistic sign modalities (images, videos). A basic question, therefore, is whether electronic text should be included in the enlarged notion as a new digital sign modality added to the repertoire of modalities, or as a sign modality that is both an independent modality and a container that can hold other modalities. In the first case, the notion of electronic text would be paradigmatically formed around the e-book, which was conceived as a digital copy of a printed book but is now a deliberately closed work. Even closed works in digital form will need some sort of interface and hypertextual navigation that together constitute a particular kind of paratext needed for accessing any sort of digital material. In the second case, the electronic text is defined by the representation of content and (some parts of the) processing rules as binary sequences manifested in the binary alphabet.
This wider notion would include, for instance, all sorts of scanning results, whether of the outer cosmos or the interior of our bodies and of digital traces of other processes in-between (machine readings included). Since other alphabets, such as the genetic alphabet, and all sorts of images may also be represented in the binary alphabet, such materials will also belong to the textual universe within this definition. A more intriguing implication is that born-digital materials may also include scripts and interactive features as intrinsic parts of the text. The two notions define the text on different levels: one is centered on the Latin alphabet, the other on the binary alphabet, and both definitions include hypertext, interactivity, and multimodality as constituent parameters. In the first case, hypertext is included as a navigational, paratextual device; whereas in the second case, hypertext is also incorporated in the narrative within an otherwise closed work or as a constituent element of the textual universe of the web, where it serves the ongoing production of (possibly scripted) connections and disconnections between blocks of textual content. Since the early decades of the 21st century still represent only the very early stages of the globally distributed universe of web texts, this is also a history of the gradual unfolding of the dimensions of these three constituents—hypertext, interactivity, and multimodality. The result is a still-expanding repertoire of genres, including some that are emerging via path dependency; some via remediation; and some as new genres that are unique to networked digital media, including “social media texts” and a growing variety of narrative and discursive multiple-source systems.

Article

Early critics of the Porfirio Díaz regime and editors of the influential newspaper Regeneración, Ricardo and Enrique Flores Magón escaped to the United States in 1904. There, with Ricardo as the leader and most prolific writer, they founded the Partido Liberal Mexicano (PLM) in 1906 and facilitated oppositional transnational networks of readers, political clubs, and other organizations. From their arrival they were constantly pursued and imprisoned by coordinated Mexican and US law enforcement and private detective agencies, but their cause gained US radical and worker support. With the outbreak of the 1910 Mexican Revolution the PLM splintered, with many members joining Madero’s forces, while the Flores Magón brothers and the PLM nucleus refused to compromise. They had moved beyond a liberal critique of a dictatorship to an anarchist oppositional stance toward the state and private property. While not called Magonismo at the time, their ideological and organizational principles left a legacy in both Mexico and the United States greatly associated with the brothers. During World War I, a time of a growing nativist red scare in the United States, they turned in the eyes of US authorities from a relative nuisance into a foreign radical threat. Ricardo died in Leavenworth federal penitentiary in 1922 and Enrique was deported to Mexico, where he promoted the brothers’ legacy within the postrevolutionary order. Although the PLM leadership opposed the new regime, their 1906 Program inspired much of the 1917 Constitution, and several of their comrades played influential roles in the new regime. In the United States many of the networks and mutual aid initiatives that engaged with the Flores Magón brothers continued to bear fruit well into the emergence of the Chicana/o Movement.

Article

Ken Hirschkop

The concept of “heteroglossia” was coined by Mikhail Bakhtin in an essay from the 1930s. Heteroglossia was the name he gave to the “inner stratification of a single national language into social dialects, group mannerisms, professional jargons, generic languages, the languages of generations and age-groups,” and so on, but it was not simply another term for the linguistic variation studied in sociolinguistics and dialectology. It differed in three respects. First, in heteroglossia differences of linguistic form coincided with differences in social significance and ideology: heteroglossia was stratification into “socio-ideological languages,” which were “specific points of view on the world, forms for its verbal interpretation.” Second, heteroglossia embodied the force of what Bakhtin called “historical becoming.” In embodying a point of view or “social horizon,” language acquired an orientation to the future, an unsettled historical intentionality it otherwise lacked. Third, heteroglossia was a subaltern practice, concentrated in a number of cultural forms, all of which took a parodic, ironizing stance in relation to the official literary language that dominated them. Throughout his discussion, however, Bakhtin wavers between claiming this heteroglossia exists as such in the social world, from which the novel picks it up, and arguing that heteroglossia is something created and institutionalized by novels, which take the raw material of variation and rework it into “images of a language.” Interestingly, from roughly 2000 on, work in sociolinguistics has suggested that ordinary speakers do the kind of stylizing and imaging work Bakhtin assigned to the novel alone. One could argue, however, that heteroglossia only acquires its full significance and force when it is freed from any social function and allowed to flourish in novels.
According to Bakhtin, that means that heteroglossia is only possible in modernity, because it is in modernity that society becomes truly historical, and languages only acquire their orientation to the future in those circumstances.

Article

Despite Latinxs being the fastest-growing demographic in the United States, their experiences and identities continue to be underrepresented and misrepresented in the mainstream pop cultural imaginary. However, for all the negative stereotypes and restrictive ways that the mainstream boxes in Latinxs, Latinx musicians, writers, artists, comic book creators, and performers actively metabolize all cultural phenomena to clear positive spaces of empowerment and to make new perception, thought, and feeling about Latinx identities and experiences. It is important to understand, though, that Latinxs today consume all variety of cultural phenomena. For corporate America, therefore, the Latinx demographic represents a huge consumer market. Viewed through cynical and skeptical eyes, increased representation of Latinxs in mainstream comic books and film results from this push to capture the Latinx consumer market. Within mainstream comic books and films, Latinx subjects are rarely the protagonists. However, Latinx comic book and film creators are actively creating Latinx protagonists within richly rendered Latinx story worlds. Latinx comic book and film creators work in all the storytelling genres and modes (realism, sci-fi, romance, memoir, biography, among many others) to clear new spaces for the expression of Latinx subjectivities and experiences.

Article

Lee Morrissey

Literacy is a measure of being literate, of the ability to read and write. The central activity of the humanities—its shared discipline—literacy has also become one of its most powerful and diffuse metaphors, broadly applied to represent a fluency, a competency, or a skill in manipulating information. The word “literacy” is of recent coinage, being little more than a century old. Reading and writing, or effectively using letters (the word at the root of literacy), are ancient skills, but the word “literacy” likely springs from and reflects the emergence of mass public education at the end of the 19th and the turn of the 20th century. In this sense, then, “literacy” measures personal and demographic development. Literacy is mimetic. It is synesthetic—in some languages, it means hearing sounds (the phonemes) in what is seen (the letters); in others, it means linking a symbol to the thing symbolized. Although a recent word, “literacy” depends upon the emergence of symbolic sign systems in ancient times. Written symbolic systems, by contrast, are relatively recent developments in human history. But they bear a more complicated relationship to the spoken language, being in part a representation of it (and thus a recording of its contents) while also offering a representation of the world, the referent: that is, literacy involves an awareness of the representation of the world. Reading and writing are tied to millennia of changes in technologies of representation. As a term denoting fluency with letters, literacy has a history and a geography that follow the development and movement of a phonetic alphabet and subsequent systems of writing. If the alphabet encodes a shift from orality to literacy, HTML encodes a shift from verbal literacy to a kind of numerical literacy not yet theorized.

Article

Dirk Van Hulle

The study of modern manuscripts to examine writing processes is termed “genetic criticism.” A current trend that is sometimes overdramatized as “the archival turn” is a result of renewed interest in this discipline, which has a long tradition situated at the intersection between modern book history, bibliography, textual criticism, and scholarly editing. Handwritten documents are called “modern” manuscripts to distinguish them from medieval or even older manuscripts. Whereas most extant medieval manuscripts are scribal copies and fit into a context of textual circulation and dissemination, modern manuscripts are usually autographs for private use. Traditionally, the watershed between older and “modern” manuscripts is situated around the middle of the 18th century, coinciding with the rise of the so-called Geniezeit, the Sturm und Drang (Storm and Stress) period in which the notion of “genius” became fashionable. Authors such as Goethe carefully preserved their manuscripts. This new interest in authors’ manuscripts can be part of the “genius” ideology: since a draft was regarded as the trace of a thought process, a manuscript was the tangible evidence of capital-G “Genius” at work. But this division between modern and older manuscripts needs to be nuanced, for there are of course autograph manuscripts with cancellations and revisions from earlier periods, which are equally interesting for manuscript research. Genetic criticism studies the dynamics of creative processes, discerning a difference between the part of the genesis that takes place in the author’s private environment and the continuation of that genesis after the work has become public. But the genesis is often not a linear development “before” and “after” publication; rather, it can be conceptualized by means of a triangular model. 
The three corners of that model are endogenesis (the “inside” of a writing process, the writing of drafts), exogenesis (the relation to external sources of inspiration), and epigenesis (the continuation of the genesis and revision after publication). At any point in the genesis there is the possibility that exogenetic material may color the endo- or the epigenesis. In the digital age, archival literary documents are no longer coterminous with a material object. But that does not mean the end of genetic criticism. On the contrary, an exciting future lies ahead. Born-digital works require new methods of analysis, including digital forensics, computer-assisted collation, and new forms of distant reading. The challenge is to connect to methods of digital text analysis by finding ways to enable macroanalysis across versions.

Article

For almost four decades, from 1936 to 1972, the director of the Federal Bureau of Investigation, J. Edgar Hoover, fueled by intense paranoia and fear, hounded and relentlessly pursued a variety of American writers and publishers in a staunch effort to control the dissemination of literature that he thought threatened the American way of life. In fact, beginning as early as the Red Scare of 1919, he managed to control literary modernism by bullying and harassing writers and artists at a time when the movement was spreading quickly in the hands of an especially young, vibrant collection of international writers, editors, and publishers. He, his special agents in charge, and their field agents worked to manipulate the relationship between state power and modern literature, thereby “federalizing,” to a point, political surveillance. There is still a resurgence of brute state force, omnipresent and reaching into all aspects of our private lives. We are constantly under surveillance, tracked, and monitored when engaged in even the most mundane activities. The only way to counter our omnipresent state surveillance is to monitor the monitors themselves.

Article

Posthumous publication is part of a long-standing literary tradition that crosses centuries and continents, giving us works ranging from The Canterbury Tales to The Diary of Anne Frank, from Northanger Abbey to 2666. Preparing for print work that was incomplete and unpublished at the time of the author’s death, posthumous editing is a type of public and goal-oriented grieving that seeks to establish or preserve the legacy of a writer no longer able to establish it for herself. Surrounding the work of posthumous editing are questions of authorial intent, editorial and publisher imperative, and reader response, each shaping the degree to which a posthumously published edition of a text is considered valuable. The visibility of the work of such editing spans from conspicuously absent to noticeably transformative, suggesting a wide range of possibilities for imagining the editorial role in producing the posthumous text. Examples drawn from 20th- and 21st-century US literature reveal the nature of editorial relationships to the deceased as well as the subsequent relationships of readers to the posthumously published text.

Article

Print culture refers to the production, distribution, and reception of printed material. It includes the concepts of authorship, readership, and impact and entails the intersection of technological, political, religious, legal, social, educational, and economic practices, all of which can vary from one cultural context to another. Prior to their arrival in the Americas, Spain and Portugal had their own print culture and, following the conquest, they introduced it into their colonies, first through the importation of books from Europe and later following the establishment of the printing press in Mexico in 1539. Throughout the colonial period, the importation of books from abroad was a constant and lucrative practice. However, print culture was not uniform. As in Europe, print culture in Latin America was largely an urban phenomenon, with restricted readership due to high rates of illiteracy, which stemmed from factors of class, gender, race, and income, among others. Furthermore, the press itself spread slowly and unevenly, according to the circumstances of each region. One thing, however, that these territories had in common was widespread censorship. Reading, writing, and printing were subject to oversight by the Inquisition, whose responsibility was to police the reading habits of the populace and to ensure that no texts were printed that could disrupt the political and religious well-being of the colonies, as they defined it. In spite of Inquisitorial restrictions, print culture flourished and the number and kind of materials available increased dramatically until the early 19th century, when most of the territories under the Iberian monarchies became independent, a phenomenon due in part to the circulation of Enlightenment thought in the region. Following the era of revolutions, newly established republics attempted to implement freedom of the press. 
While the Inquisition no longer existed, censorship continued to be practiced to a greater or lesser degree, depending on the circumstances and who was in power. This also applies to Cuba and Puerto Rico. Immediately prior to Latin American independence, the United States became a sovereign nation. Commercial and cultural exchanges, including print materials, between the United States and Latin America increased, and many Latin Americans were traveling to and residing in the United States for extended periods. However, it was also in this period that the United States began a campaign of expansionism that did not cease until 1898 and resulted in the acquisition of half of Mexico’s national territory and of Spain’s remaining American colonies, Cuba and Puerto Rico. In addition to the land itself, the United States also “acquired” the people who had been Spanish and Mexican citizens in California, the Southwest, and Puerto Rico. With this change in sovereignty came a change in language, customs, and demographics, which provoked a cultural crisis among these new Latina/o citizens. To defend themselves against the racial persecution from Anglo-Americans and to reverse the impending annihilation of their culture and language, they turned to the press. The press allowed Latinas/os a degree of cultural autonomy, even as their position was slowly eroded by legal and demographic challenges as the 19th century progressed.

Article

Lutz Koepnick

Digital reading has been an object of fervent scholarly and public debate since the mid-1990s. It has often been associated solely with what may happen between readers and screens, and dominant approaches have treated digital reading devices as producing radically different readers than printed books produce. Rather than reducing digital reading to a mere matter of what e-books might do to the attention spans of individual readers, however, contemporary critiques emphasize how digital computing affects and is affected by neurological, sensory, kinetic, and apparatical processes. The future of reading has too many different aspects to be discussed by scholars of one discipline or field of study alone. Digital reading is as much a matter for neurologists as for literary scholars, for engineers as much as for ergonomists, and for psychologists, physiologists, media historians, art critics, critical theorists, and many others. Scholars of literature will need to consult many fields to elaborate a future poetics of digital reading and to examine how literary texts in all their different forms are and will be met by 21st-century readers.

Article

DeNel Rehberg Sedo

The digital era offers a plethora of opportunities for readers to exchange opinions, share reading recommendations, and form ties with other readers. This communication often takes place in online environments, which present reading researchers with new opportunities and challenges when investigating readers’ reading experiences. What readers do with what they read is not a new topic of scholarly debate: readers have been scrutinized since at least the 14th century, when scribes questioned how their readers understood their words. Contemporary reading investigations and theory formation began in earnest in the 1920s with I. A. Richards’s argument that the reader should be considered separately from the text. In the 1930s, Louise Rosenblatt furthered the discipline, using literature as an occasion for collective inquiry into both cultural and individual values and introducing concerns for the phenomenological experience of reading and its intersubjectivity. While there is no universal theory of how readers read, more recent scholarly discourse illustrates a cluster of related views that see the reader and the text as complementary to one another in a variety of critical contexts. With the advent of social media and Web 2.0, readers provide researchers with a host of opportunities not only to identify who they are but also to access in profound ways their individual and collective responses to the books they read. Reader responses on the Internet’s early email forums, on contemporary iterations of browser-hosted groups such as Yahoo Groups and Google Groups, and in book talk found on platforms such as Twitter, Facebook, and YouTube present data that can be analyzed through established or newly developed digital methods. 
Reviews and commentary on these platforms, in addition to the thousands of book blogs, Goodreads.com, LibraryThing.com, and readers’ reviews on bookseller websites, illustrate cultural, economic, and social aspects of reading in ways that were previously elusive to reading researchers. Contemporary reading scholars bring to the analytical mix perspectives that enrich the last century’s theories of unidentified readers. Their methods illustrate the fertility available to contemporary investigations of readers and their books. Considered together, they allow scholars to contemplate the complexities of reading in the past, highlight the uniqueness of reading in the present, and provide material to help project into the future.

Article

Reception-oriented literary theory, history, and criticism all analyze the processes by which literary texts are received, both in the moment of their first publication and long afterwards: how texts are interpreted, appropriated, adapted, transformed, passed on, canonized, and/or forgotten by various audiences. Reception draws on multiple methodologies and approaches, including semiotics and deconstruction; ethnography, sociology, and history; media theory and archaeology; and feminist, Marxist, black, and postcolonial criticism. Studying reception gives us insights into the texts themselves and their possible range of meanings, uses, and value; into the interpretative regimes of specific historical periods and cultural milieux; and into the nature of linguistic meaning and communication.

Article

Mark Byron

Textual studies describes a range of fields and methodologies that evaluate how texts are constituted both physically and conceptually, document how they are preserved, copied, and circulated, and propose ways in which they might be edited to minimize error and maximize the text’s integrity. The vast temporal reach of the history of textuality—from oral traditions spanning thousands of years and written forms dating from the 4th millennium bce to printed and digital text forms—is matched by its geographical range, covering every linguistic community around the globe. Methods of evaluating material text-bearing documents and the reliability of their written or printed content stem from antiquity, often paying closest attention to sacred texts as well as to legal documents and literary works that helped form linguistic and social group identity. With the advent of the printing press in the early modern West, the rapid reproduction of text matter in large quantities had the effect of corrupting many texts with printing errors as well as providing the technical means of correcting such errors more cheaply and quickly than in the preceding scribal culture. From the 18th century, techniques of textual criticism were developed to attempt systematic correction of textual error, again with an emphasis on scriptural and classical texts. This “golden age of philology” slowly widened its range to consider such foundational medieval texts as Dante’s Commedia as well as, in time, modern vernacular literature. The technique of stemmatic analysis—the establishment of family relationships between existing documents of a text—provided the means for scholars to choose between copies of a work in the pursuit of accuracy. 
In the absence of original documents (manuscripts in the hand of Aristotle or the four Evangelists, for example), the choice between existing versions of a text was often made eclectically—that is, by drawing on multiple versions—and was thus subject to such considerations as the historic range and geographical diffusion of documents, the systematic identification of common scribal errors, and matters of translation. As the study of modern languages and literatures consolidated into modern university departments in the later 19th century, new techniques emerged with the aim of providing reliable literary texts free from obvious error. This aim had in common with the preceding philological tradition the belief that what a text means—discovered in the practice of hermeneutics—was contingent on what the text states—established by an accurate textual record that eliminates error by means of textual criticism. The methods of textual criticism took several paths through the 20th century: the Anglophone tradition centered on editing Shakespeare’s works by drawing on the earliest available documents—the printed Quartos and Folios—developing into the Greg–Bowers–Tanselle copy-text “tradition,” which was then deployed as a method by which to edit later texts. The status of variants in modern literary works with multiple authorial manuscripts—not to mention the existence of competing versions of several of Shakespeare’s plays—complicated matters sufficiently that editors looked to alternative editorial models. Genetic editorial methods draw in part on German editorial techniques, collating all existing manuscripts and printed texts of a work in order to provide a record of its composition process, including epigenetic processes following publication. 
The French methods of critique génétique also place the documentary record at the center, giving the dossier priority over any one printed edition and using poststructuralist theory to examine the process of “textual invention.” Attention to the inherently social aspects of textual production—the author’s interaction with agents, censors, publishers, and printers, and the way these interactions shape the content and presentation of the text—has reshaped how textual authority and variation are understood in the social and economic contexts of publication. And, finally, the advent of digital publication platforms has given rise to new developments in the presentation of textual editions and manuscript documents, displacing copy-text editing in some fields, such as modernism studies, in favor of genetic or synoptic models of composition and textual production.