Anthologies, in the broadest sense of collections of independent texts, have always played an important role in preserving and spreading the written word, and collections of short forms, such as proverbs, wise sayings, and epigrams, have a long history. The literary anthology, however, is of comparatively recent provenance, having come to prominence only during the long 18th century, when the modern concept of “literature” itself emerged. Since that time, it has been a fundamental part of literary culture: not only have literary texts been published in anthologies, but the genre of the anthology has done much to shape their form and content, and to influence the ways in which they are read and taught, particularly as literary criticism has developed in tandem with the rise of the anthology. The anthology has also stimulated innovation in many periods and places by providing a model for writers of different genres of literature to emulate, and it has been argued that the form of the novel is much indebted to the anthology. This is connected to its close association with the figure of the reader. Furthermore, anthologies have helped to define what literature is, and been crucial to the canonization of texts, authors, and genres, and the consolidation of literary traditions. It is therefore not surprising that they were at the heart of the theoretical and pedagogical debates within literary studies known as the canon wars, which raged during the 1980s and 1990s. In this role, they contributed much to discussions concerning the theories and politics of identity, and to such approaches as feminism and race studies. The connection between the anthology and literary theory extends beyond this, however: theory itself has been subject to widespread anthologization, which has affected its practice and reception; the form of theoretical writing can in certain respects be understood as anthological; and the anthology is an important object of theoretical attention.
For instance, given the potential of the digital age to transform how texts are disseminated and consumed, and the importance of finding ways to classify and navigate the digital archive, anthology studies is likely to figure prominently in the Digital Humanities.
First known as a kephalaion in Greek, capitulum or caput in Latin, the chapter arose in antiquity as a finding device within long, often heterogeneous prose texts, prior even to the advent of the codex. By the 4th century
Copyright gives an author control over the presentation of her work. Economic rights afford control over copies, and the noneconomic rights known as moral rights afford control over changes. An author’s moral rights remain with her even after she sells her economic rights in copyright. Cementing these authorial rights for all authors may limit the excessive control that copyright offers to copyright owners.
Some elements of copyright law allow the meaning of a work, as perceived by its audience, to develop and evolve. Strengthening that support by extending such rights to the public would further restrain copyright’s excesses.
Niels Ole Finnemann
Electronic text can be defined on two different, though interconnected, levels. On the one hand, electronic text can be defined by taking the notion of “text” or “printed text” as the point of departure. On the other hand, it can be defined by taking the digital format, in which everything is represented in the binary alphabet, as the point of departure. While the notion of text in most cases lends itself to being independent of medium and embodiment, it is also often tacitly assumed that it is in fact modeled on the print medium rather than, for instance, on handwritten text or speech. In the late 20th century, the notion of “text” was subjected to increasing criticism, as can be seen in the question raised in literary text theory about whether “there is a text in this class.” At the same time, the notion was expanded to include extralinguistic sign modalities (images, videos). A basic question, therefore, is whether electronic text should enter this enlarged notion simply as a new digital sign modality added to the repertoire of modalities, or as a sign modality that is both an independent modality and a container that can hold other modalities. In the first case, the notion of electronic text would be paradigmatically formed around the e-book, conceived as a digital copy of a printed book and thus a deliberately closed work. Even closed works in digital form, however, need some sort of interface and hypertextual navigation, which together constitute a particular kind of paratext needed for accessing any sort of digital material.
In the second case, the electronic text is defined by the representation of content and (some parts of the) processing rules as binary sequences manifested in the binary alphabet. This wider notion would include, for instance, all sorts of scanning results, whether of the outer cosmos or the interior of our bodies, and digital traces of other processes in between (machine readings included). Since other alphabets, such as the genetic alphabet, and all sorts of images may also be represented in the binary alphabet, such materials will also belong to the textual universe within this definition. A more intriguing implication is that born-digital materials may also include scripts and interactive features as intrinsic parts of the text.
The two notions define the text on different levels: one is centered on the Latin alphabet, the other on the binary alphabet, and both definitions include hypertext, interactivity, and multimodality as constituent parameters. In the first case, hypertext is included as a navigational, paratextual device; in the second case, hypertext is also incorporated into the narrative within an otherwise closed work, or as a constituent element of the textual universe of the web, where it serves the ongoing production of (possibly scripted) connections and disconnections between blocks of textual content. Since the early decades of the 21st century still represent only the very early stages of the globally distributed universe of web texts, this is also a history of the gradual unfolding of the dimensions of these three constituents—hypertext, interactivity, and multimodality. The result is a still-expanding repertoire of genres, including some that are emerging via path dependency, some via remediation, and some as new genres unique to networked digital media, including “social media texts” and a growing variety of narrative and discursive multiple-source systems.
Luis A. Marentes
Early critics of the Porfirio Díaz regime and editors of the influential newspaper Regeneración, Ricardo and Enrique Flores Magón escaped to the United States in 1904. There, with Ricardo as the leader and most prolific writer, they founded the Partido Liberal Mexicano (PLM) in 1906 and facilitated oppositional transnational networks of readers, political clubs, and other organizations. From their arrival they were constantly pursued and imprisoned by coordinated Mexican and US law enforcement and private detective agencies, but their cause gained the support of US radicals and workers. With the outbreak of the Mexican Revolution in 1910, the PLM splintered, with many members joining Madero’s forces, while the Flores Magón brothers and the PLM nucleus refused to compromise: they had moved beyond a liberal critique of a dictatorship to an anarchist opposition to the state and private property. While not called Magonismo at the time, their ideological and organizational principles left a legacy in both Mexico and the United States closely associated with the brothers. During World War I, a time of a growing nativist red scare in the United States, they turned, in the eyes of US authorities, from a relative nuisance into a foreign radical threat. Ricardo died in Leavenworth federal penitentiary in 1922, and Enrique was deported to Mexico, where he promoted the brothers’ legacy within the postrevolutionary order. Although the PLM leadership opposed the new regime, their 1906 Program inspired much of the 1917 Constitution, and several of their comrades played influential roles in it. In the United States, many of the networks and mutual aid initiatives that engaged with the Flores Magón brothers continued to bear fruit well into the emergence of the Chicana/o Movement.
Frederick Luis Aldama
Despite Latinxs being the fastest-growing demographic in the United States, their experiences and identities continue to be underrepresented and misrepresented in the mainstream pop cultural imaginary. Yet for all the negative stereotypes and restrictive ways that the mainstream boxes in Latinxs, Latinx musicians, writers, artists, comic book creators, and performers actively metabolize all cultural phenomena to clear positive spaces of empowerment and to generate new perceptions, thoughts, and feelings about Latinx identities and experiences. It is important to understand, though, that Latinxs today consume all variety of cultural phenomena; for corporate America, therefore, the Latinx demographic represents enormous buying power. Viewed through cynical and skeptical eyes, increased representation of Latinxs in mainstream comic books and film results from this push to capture the Latinx consumer market. Within mainstream comic books and films, Latinx subjects are rarely the protagonists. However, Latinx comic book and film creators are actively creating Latinx protagonists within richly rendered Latinx story worlds. They work in all the storytelling genres and modes (realism, sci-fi, romance, memoir, biography, among many others) to clear new spaces for the expression of Latinx subjectivities and experiences.
Literacy is a measure of being literate, of the ability to read and write. The central activity of the humanities—its shared discipline—literacy has also become one of its most powerful and diffuse metaphors, broadly applied to represent a fluency, a competency, or a skill in manipulating information. The word “literacy” is of recent coinage, little more than a century old. Reading and writing, or effectively using letters (the word at the root of literacy), are ancient skills, but the word “literacy” likely springs from and reflects the emergence of mass public education at the end of the 19th century and the turn of the 20th. In this sense, then, “literacy” measures personal and demographic development. Literacy is mimetic. It is synesthetic—in some languages, it means hearing sounds (the phonemes) in what is seen (the letters); in others, it means linking a symbol to the thing symbolized. Although a recent word, “literacy” depends upon the emergence of symbolic sign systems in ancient times. Written symbolic systems, by contrast with spoken ones, are relatively recent developments in human history, and they bear a more complicated relationship to the spoken language, being in part a representation of it (and thus a recording of its contents) while also offering a representation of the world, the referent: that is, literacy involves an awareness of the representation of the world. Reading and writing are tied to millennia of changes in technologies of representation. As a term denoting fluency with letters, literacy has a history and a geography that follow the development and movement of the phonetic alphabet and subsequent systems of writing. If the alphabet encodes a shift from orality to literacy, HTML encodes a shift from verbal literacy to a kind of numerical literacy not yet theorized.
Dirk Van Hulle
The study of modern manuscripts to examine writing processes is termed “genetic criticism.” A current trend, sometimes overdramatized as “the archival turn,” reflects renewed interest in this discipline, which has a long tradition situated at the intersection of modern book history, bibliography, textual criticism, and scholarly editing. Handwritten documents are called “modern” manuscripts to distinguish them from medieval or even older manuscripts. Whereas most extant medieval manuscripts are scribal copies and fit into a context of textual circulation and dissemination, modern manuscripts are usually autographs for private use. Traditionally, the watershed between older and “modern” manuscripts is situated around the middle of the 18th century, coinciding with the rise of the so-called Geniezeit, the Sturm und Drang (Storm and Stress) period in which the notion of “genius” became fashionable. Authors such as Goethe carefully preserved their manuscripts. This new interest in authors’ manuscripts can be seen as part of the “genius” ideology: since a draft was regarded as the trace of a thought process, a manuscript was the tangible evidence of capital-G “Genius” at work. But this division between modern and older manuscripts needs to be nuanced, for there are of course autograph manuscripts with cancellations and revisions from earlier periods, which are equally interesting for manuscript research. Genetic criticism studies the dynamics of creative processes, discerning a difference between the part of the genesis that takes place in the author’s private environment and the continuation of that genesis after the work has become public. But the genesis is often not a linear development “before” and “after” publication; rather, it can be conceptualized by means of a triangular model.
The three corners of that model are endogenesis (the “inside” of a writing process, the writing of drafts), exogenesis (the relation to external sources of inspiration), and epigenesis (the continuation of the genesis and revision after publication). At any point in the genesis there is the possibility that exogenetic material may color the endo- or the epigenesis. In the digital age, archival literary documents are no longer coterminous with a material object. But that does not mean the end of genetic criticism. On the contrary, an exciting future lies ahead. Born-digital works require new methods of analysis, including digital forensics, computer-assisted collation, and new forms of distant reading. The challenge is to connect to methods of digital text analysis by finding ways to enable macroanalysis across versions.
Claire A. Culleton
For almost four decades, from 1936 to 1972, the director of the Federal Bureau of Investigation, J. Edgar Hoover, fueled by intense paranoia and fear, hounded and relentlessly pursued a variety of American writers and publishers in a staunch effort to control the dissemination of literature that he thought threatened the American way of life. In fact, beginning as early as the Red Scare of 1919, he managed to control literary modernism by bullying and harassing writers and artists at a time when the movement was spreading quickly in the hands of an especially young, vibrant collection of international writers, editors, and publishers. He, his special agents in charge, and their field agents worked to manipulate the relationship between state power and modern literature, thereby “federalizing,” to a point, political surveillance. There still seems to be a resurgence of brute state force, one that is omnipresent and reaches into all aspects of our private lives. We are constantly under surveillance, tracked, and monitored when engaged in even the most mundane activities. The only way to counter our omnipresent state surveillance is to monitor the monitors themselves.
Posthumous publication is part of a long-standing literary tradition that crosses centuries and continents, giving us works ranging from The Canterbury Tales to The Diary of Anne Frank, from Northanger Abbey to 2666. Preparing for print work that was incomplete and unpublished at the time of the author’s death, posthumous editing is a type of public and goal-oriented grieving that seeks to establish or preserve the legacy of a writer no longer able to establish it for herself. Surrounding the work of posthumous editing are questions of authorial intent, editorial and publisher imperative, and reader response, each shaping the degree to which a posthumously published edition of a text is considered valuable. The visibility of the work of such editing spans from conspicuously absent to noticeably transformative, suggesting a wide range of possibilities for imagining the editorial role in producing the posthumous text. Examples drawn from 20th- and 21st-century US literature reveal the nature of editorial relationships to the deceased as well as the subsequent relationships of readers to the posthumously published text.
Print Culture and Censorship from Colonial Latin America to the US Latina/o Presence in the 19th Century
Matthew J. K. Hill
Print culture refers to the production, distribution, and reception of printed material. It includes the concepts of authorship, readership, and impact and entails the intersection of technological, political, religious, legal, social, educational, and economic practices, all of which can vary from one cultural context to another. Prior to their arrival in the Americas, Spain and Portugal had their own print culture and, following the conquest, they introduced it into their colonies, first through the importation of books from Europe and later following the establishment of the printing press in Mexico in 1539. Throughout the colonial period, the importation of books from abroad was a constant and lucrative practice. However, print culture was not uniform. As in Europe, print culture in Latin America was largely an urban phenomenon, with restricted readership due to high rates of illiteracy, which stemmed from factors of class, gender, race, and income, among others. Furthermore, the press itself spread slowly and unevenly, according to the circumstances of each region. One thing, however, that these territories had in common was widespread censorship. Reading, writing, and printing were subject to oversight by the Inquisition, whose responsibility was to police the reading habits of the populace and to ensure that no texts were printed that could disrupt the political and religious well-being of the colonies, as they defined it. In spite of Inquisitorial restrictions, print culture flourished and the number and kind of materials available increased dramatically until the early 19th century, when most of the territories under the Iberian monarchies became independent, a phenomenon due in part to the circulation of Enlightenment thought in the region. Following the era of revolutions, newly established republics attempted to implement freedom of the press. 
While the Inquisition no longer existed, censorship continued to be practiced to a greater or lesser degree, depending on the circumstances and who was in power. This also applies to Cuba and Puerto Rico. Immediately prior to Latin American independence, the United States became a sovereign nation. Commercial and cultural exchanges, including print materials, between the United States and Latin America increased, and many Latin Americans were traveling to and residing in the United States for extended periods. However, it was also in this period that the United States began a campaign of expansionism that did not cease until 1898 and resulted in the acquisition of half of Mexico’s national territory and of Spain’s remaining American colonies, Cuba and Puerto Rico. In addition to the land itself, the United States also “acquired” the people who had been Spanish and Mexican citizens in California, the Southwest, and Puerto Rico. With this change in sovereignty came a change in language, customs, and demographics, which provoked a cultural crisis among these new Latina/o citizens. To defend themselves against racial persecution by Anglo-Americans and to reverse the impending annihilation of their culture and language, they turned to the press. The press allowed Latinas/os a degree of cultural autonomy, even as their position was slowly eroded by legal and demographic challenges as the 19th century progressed.
Digital reading has been an object of fervent scholarly and public debates since the mid-1990s. Often digital reading has been associated solely with what may happen between readers and screens, and in dominant approaches digital reading devices have been seen as producing radically different readers than printed books do.
Far from reducing digital reading to a mere matter of what e-books might do to the attention spans of individual readers, however, contemporary critiques emphasize how digital computing affects and is affected by neurological, sensory, kinetic, and apparatical processes. The future of reading has too many different aspects to be discussed by scholars of one discipline or field of study alone. Digital reading is as much a matter for neurologists as for literary scholars, for engineers as much as for ergonomists, and for psychologists, physiologists, media historians, art critics, critical theorists, and many others. Scholars of literature will need to consult many fields to elaborate a future poetics of digital reading and examine how literary texts in all their different forms are and will be met by 21st-century readers.
DeNel Rehberg Sedo
The digital era offers a plethora of opportunities for readers to exchange opinions, share reading recommendations, and form ties with other readers. This communication often takes place in online environments, which presents reading researchers with new opportunities and challenges when investigating readers’ experiences.
What readers do with what they read is not a new topic of scholarly debate: readers have been scrutinized since at least the 14th century, when scribes questioned how their readers understood their words. Contemporary reading investigations and theory formation began in earnest in the 1920s with I. A. Richards’s argument that the reader should be considered separate from the text. In the 1930s, Louise Rosenblatt furthered the discipline, using literature as an occasion for collective inquiry into both cultural and individual values and introducing a concern for the phenomenological experience of reading and its intersubjectivity. While there is no universal theory of how readers read, more recent scholarly discourse illustrates a cluster of related views that see the reader and the text as complementary to one another in a variety of critical contexts.
With the advent of social media and Web 2.0, readers provide researchers with a host of opportunities not only to identify who they are but also to access in profound ways their individual and collective responses to the books they read. Reader responses on the Internet’s early email forums, or the contemporary iterations of browser-hosted groups such as Yahoo Groups or Google Groups, alongside book talk found on platforms such as Twitter, Facebook, and YouTube, present data that can be analyzed through established or newly developed digital methods. Reviews and commentary on these platforms, in addition to the thousands of book blogs, Goodreads.com, LibraryThing.com, and readers’ reviews on bookseller websites, illustrate cultural, economic, and social aspects of reading in ways that previously were often elusive to reading researchers.
Contemporary reading scholars bring to the analytical mix perspectives that enrich last century’s theories of unidentified readers. The methods illustrate the fertility available to contemporary investigations of readers and their books. Considered together, they allow scholars to contemplate the complexities of reading in the past, highlight the uniqueness of reading in the present, and provide material to help project into the future.