The concept of “text” is ambiguous: it can refer at once to a concrete reality and an abstract one. Indeed, text presents itself both as an empirical object subject to analysis and as an abstract object constructed by the analysis itself. This duality characterizes the development of the concept in the 20th century. Different theories of language entail different understandings of “text”: a restricted use as written text, an extensive use as written and spoken text, and an expanded use as any written, verbal, gestural, or visual manifestation. The concept of “text” also presupposes two other concepts: from a generative point of view, a process by which something becomes a text (textualization); from an interpretative point of view, a process by which something can be interpreted as a text (textuality). In textual linguistics, “text” is considered at once an abstract object, issuing from a specific theoretical approach, and a concrete object, the linguistic phenomenon from which the process of analysis starts; textuality, in turn, appears as a global quality of the text arising from the interweaving of the sentences that compose it. In linguistics, the definition of textuality depends on the definition of text. For instance, M. A. K. Halliday and Ruqaiya Hasan define textuality through the concepts of “cohesion” and “coherence.” Cohesion is a necessary condition of textuality, because it enables the text to be perceived as a whole, but it is not sufficient to explain it: to be interpreted as a whole, the elements composing the text must also be coherent with one another. According to Robert-Alain De Beaugrande and Wolfgang Ulrich Dressler, however, cohesion and coherence are only two of the seven principles of textuality, the other five being intentionality, acceptability, informativity, situationality, and intertextuality. 
Textual pragmatics deals with a more complex problem: that of the text conceived as an empirical object. Here the text is presented as a unit captured in a communication process, “a communicative unit.” Considered from a pragmatic point of view, every single unit composing a text constitutes an instruction for meaning. Since the 1970s, by analyzing connections between texts and contexts, textual pragmatics has been an important source of inspiration for textual semiotics. In semiotics, notably in the theory of language proposed by Louis T. Hjelmslev, the concept of “text” is conceived above all as a process and a “relational hierarchy.” Furthermore, according to Hjelmslev, textuality consists in the “mutual dependencies” that compose a whole and make the text an “absolute totality” to be interpreted by readers and analyzed by linguists. Since texts are composed of a network of connections at both local and global levels, their analysis depends on the possibility of reconstructing the relation between the global and local dimensions. For this reason, François Rastier suggests that in order to capture the meaning of a text, semantic analysis must identify semantic forms at different semantic levels. Textuality thus arises from the articulation between semantic and phemic forms (content and expression), and from the semantic and phemic roots from which the forms emerge. Textuality allows the reader to identify the interpretative paths through which to understand the text. This complex dynamic is at the foundation of the idea of textuality. Now that digital texts are available, researchers have developed several methods and tools to exploit such texts and discourses, representing at the same time different approaches to meaning. Text Mining is based on a simple principle: the identification and processing of textual contents to extract knowledge. 
By using digital tools, intra-textual and inter-textual links can be visualized on the screen, as lists or tables of results, which permits the analysis of the occurrences and frequency of certain textual elements composing the digital texts. Thus another idea of text becomes visible to the linguist: not the classical one belonging to the culture of printed texts, but a new one typical of the culture of digital texts, with its own textuality.
Rossana De Angelis
Posthumous publication is part of a long-standing literary tradition that crosses centuries and continents, yielding works ranging from The Canterbury Tales to The Diary of Anne Frank, from Northanger Abbey to 2666. Preparing for print work that was incomplete and unpublished at the time of the author’s death, posthumous editing is a type of public and goal-oriented grieving that seeks to establish or preserve the legacy of a writer no longer able to establish it for herself. Surrounding the work of posthumous editing are questions of authorial intent, editorial and publisher imperative, and reader response, each shaping the degree to which a posthumously published edition of a text is considered valuable. The visibility of the work of such editing spans from conspicuously absent to noticeably transformative, suggesting a wide range of possibilities for imagining the editorial role in producing the posthumous text. Examples drawn from 20th- and 21st-century US literature reveal the nature of editorial relationships to the deceased as well as the subsequent relationships of readers to the posthumously published text.
John D. Niles
The human capacity for oral communication is superbly well developed. While other animals produce meaningful sounds, most linguists agree that only human beings are possessed of true language, with its complex grammar. Moreover, only humans have the ability to tell stories, with their contrary-to-fact capabilities. This fact has momentous implications for the complexity of the oral communications that humans can produce, not just in conversation but also in a wide array of artistic genres. It is likewise true that only human beings enjoy the benefits of literacy; that is, only humans have developed technologies that enable the sounds of speech to be made visible and construed through one or another type of graphemic representation. Although orality is as innate to the human condition as is breathing or walking, competence in literacy requires training, and it has traditionally been the accomplishment of an educated elite. Correspondingly, the transmutation of oral art forms into writing—that is, the production of what can be called “oral literature”—is a relatively rare and special phenomenon compared with the ease with which people cultivate those art forms themselves. All the same, a large amount of the world’s recorded literature appears to be closely related to oral art forms, deriving directly from them in some instances. Literature of this kind is an oral/literary hybrid. It can fittingly be called “literature of the third domain,” for while it differs in character from literature produced in writing by well-educated people, the fact that it exists in writing distinguishes it from oral communication, even though it may closely resemble oral art forms in its stylized patterning. Understanding the nature of that hybridity requires an engagement not just with the dynamics of oral tradition but also with the processes by which written records of oral art forms are produced. 
In former days, this was accomplished through the cooperative efforts of speakers, scribes, and editors. Since the early 20th century, innovative technologies have opened up new possibilities of representation, not just through print but also through video and audio recordings that preserve a facsimile of the voice. Nevertheless, problems relating to the representation of oral art forms via other media are endemic to the category of oral literature and practically define it as such.
The diversity of scholarly contributions to the interdisciplinary fields of animal studies and posthumanism defies summation. As loosely assembled areas of inquiry, however, these fields contest the exceptionalist elevation of humans above animals on the basis of the latter’s alleged lack of language and reason, their exclusion from the political, their inability to experience pain or to understand death, and their absence of a moral sense of right and wrong. Posthumanism also stresses that species difference warrants an ethico-political attentiveness that eschews automatically reducing animals to figurative representations of gender, sexual, or racial difference. While these hierarchies are no doubt sustained in part by exploiting the metaphorics of species difference, the urgency of dismantling the human/animal hierarchy has inclined animal studies and a number of cognate fields toward the literal, resulting in non-allegorical readings of texts by authors such as George Orwell, Henry David Thoreau, and Toni Morrison. This preference for literality is also shared by continental philosophers working in speculative realism and object-oriented ontology (OOO), as well as by literary critics who advance the enterprise of “surface reading,” which eschews the notion that texts contain “hidden meanings.” The nonhuman turn has emerged in conjunction with a preference for literality because posthumanism tends to stress immanence rather than transcendence. This ethos engenders a flattening effect that places humans, animals, plants, and things on the same ontological level (OOO); resists interpreting literary animals in human terms (literary animal studies); and rejects the role of the critic as a hermeneutic decipherer of texts (surface reading). The “literal turn” thus poses a number of questions for literary theory. Literal meaning is definitionally uniform, but can univocal sense be maintained? 
In the 1960s, Jacques Derrida radicalized the Saussurian notion of the arbitrary nature of signs, arguing that the isolation of a literal or proper meaning presumes the arrival of a signified that would escape the chain of signification. If proper meaning never fully is itself, however, then one can never determine what is properly literal or figurative. Metaphors are typically defined as figures of resemblance that transport the name of one thing to something else. But this definition remains fatally inadequate because “resemblance” itself is metaphoric. In addition to overlooking the equivocality of the terms “literal,” “metaphorical,” and “allegorical,” the literal turn also risks reducing interpretation to a volitional act: a practice of choosing among different available approaches over which the human governs. To what extent do readers who believe they are performing literal readings disavow textual agency: that is, the conditions that texts establish for their own reading? To apply to texts what are often too loosely called “methodologies” is always to find interpretative approaches foiled by textuality’s uncontrollable effects. Does the literal turn thus reinscribe the humanist subject insofar as it presumes the reader’s power to wrest control over the feral force of language? Does it ironically restore human mastery under the guise of surrendering it?
Dirk Van Hulle
The study of modern manuscripts to examine writing processes is termed “genetic criticism.” A current trend that is sometimes overdramatized as “the archival turn” is a result of renewed interest in this discipline, which has a long tradition situated at the intersection between modern book history, bibliography, textual criticism, and scholarly editing. Handwritten documents are called “modern” manuscripts to distinguish them from medieval or even older manuscripts. Whereas most extant medieval manuscripts are scribal copies and fit into a context of textual circulation and dissemination, modern manuscripts are usually autographs for private use. Traditionally, the watershed between older and “modern” manuscripts is situated around the middle of the 18th century, coinciding with the rise of the so-called Geniezeit, the Sturm und Drang (Storm and Stress) period in which the notion of “genius” became fashionable. Authors such as Goethe carefully preserved their manuscripts. This new interest in authors’ manuscripts can be seen as part of the “genius” ideology: since a draft was regarded as the trace of a thought process, a manuscript was the tangible evidence of capital-G “Genius” at work. But this division between modern and older manuscripts needs to be nuanced, for there are of course autograph manuscripts with cancellations and revisions from earlier periods, which are equally interesting for manuscript research. Genetic criticism studies the dynamics of creative processes, discerning a difference between the part of the genesis that takes place in the author’s private environment and the continuation of that genesis after the work has become public. But the genesis is often not a linear development “before” and “after” publication; rather, it can be conceptualized by means of a triangular model. 
The three corners of that model are endogenesis (the “inside” of a writing process, the writing of drafts), exogenesis (the relation to external sources of inspiration), and epigenesis (the continuation of the genesis and revision after publication). At any point in the genesis there is the possibility that exogenetic material may color the endo- or the epigenesis. In the digital age, archival literary documents are no longer coterminous with a material object. But that does not mean the end of genetic criticism. On the contrary, an exciting future lies ahead. Born-digital works require new methods of analysis, including digital forensics, computer-assisted collation, and new forms of distant reading. The challenge is to connect to methods of digital text analysis by finding ways to enable macroanalysis across versions.
Literary stylistics is the practice of analyzing the language of literature using linguistic concepts and categories, with the goal of explaining how literary meanings are created by specific language choices and patterning, the linguistic foregrounding, in the text. While stylistics has periodically claimed to be objective, replicable, inspectable, falsifiable, and rigorous, and thus quasi-scientific, subjective interpretation is an ineradicable element of such textual analysis. Nevertheless, the best stylistic analyses, which productively demonstrate direct relations between prominent linguistic forms and patterns in a text and the meanings or effects readers experience, are explicit in their procedures and argumentation, systematic, and testable by independent researchers. Stylistics is an interdiscipline situated between literary studies and linguistics, and from time to time it has been shunned by both, which for decades predicted its decline if not disappearance. The opposite has happened: stylistics is flourishing, and some of its proponents argue that it offers a more authentic and relevant literary studies than much of what goes on in university literature departments. Equally, some stylisticians see their work as a more coherent linguistics, adapted to a particular purpose, than much of the abstract linguistics pursued by academic linguists. In recent years, stylistics has been reanimated by the adoption and adaptation of ideas sourced in cognitive linguistics and by the increasingly easy creation of huge corpora of languages in digital, machine-searchable form; these two developments have given rise to various forms of cognitive stylistics and corpus stylistics. In the early decades of the 21st century, one of the most exciting strands of work in stylistics is exploring kinds of iconicity in literary texts: passages of language that can be seen to enact or perform the effects or meanings the text is intent on conveying.
Philology—from the Greek philología, derived from phílos “friend” and lógos “word”—is a multi-faceted field of scholarship within the humanities which in its widest sense focuses on questions of time, history, and literature—with language as the common denominator. Philology is both an academic discipline—there is classical philology, Romance philology, Scandinavian philology, etc.—and a scholarly perspective on language, literature, and culture. The roots of philology go back all the way to the Library of Alexandria, Egypt, where philology began to evolve into a field of scholarship around 300 BCE. In Alexandria, the foundations of philology were laid for centuries to come, for example as regards one of its major branches, textual criticism. A characteristic feature of philology past and present is that it focuses on texts in time from an interdisciplinary point of view, which is why philology as an umbrella term is relevant for many fields of scholarship in the 21st century. According to a traditional definition, a philologist is interested in the relationship between language and culture, and by means of language, he or she aims to understand the characteristics of the culture the language reflects. From this point of view, language is mainly a medium. In the analysis of (mostly very old) texts, a philologist often crosses disciplinary borders of different kinds—anthropology, archaeology, ethnology, folkloristics, history, etc.—and makes use of other special fields within manuscript studies, such as codicology (the archaeology of the book), diplomatics (the analysis of documents), paleography (the study of handwriting), philigranology (the study of watermarks), and sphragistics (the study of seals). For a philologist, texts and their languages and contents bear witness to past times, and the philologist’s perspective is often a wide one. 
The expertise of a philologist is the ability to analyze texts in their cultural-historical contexts, not only from a linguistic perspective (which is a prerequisite for a deep understanding of a text), but also from a cultural and historical perspective, and to explain the role of a text in its cultural-historical setting. In the course of history, philologists have made several contributions to our knowledge of ancient and medieval texts and writing, for example. In the 2010s, the focus in philology has been, for example, on the so-called New Philology or Material Philology and on digital philology, but the core of philology remains the same: philology is the art of reading slowly.
“Reading” is one of the most provocative terms in literary theory, in part because it connotes both an activity and a product: on the one hand, an effort to comprehend a text or object of knowledge, and on the other, a more formal response. Both senses of the term originate in the premise that literary and other cultural texts—including performances, scripted or not—require a more deliberative parsing than weather reports and recipes, or sentences like “rain is expected today” and “add one cup of flour.” At the same time, reading serves as an explanatory trope across various sites of 21st-century culture; in a tennis match, players “read” the strengths and weaknesses of their opponents and strategize accordingly; a cab driver “reads” a GPS when plotting an efficient route to convey a passenger. But an engagement with literary and cultural texts is a different matter. In its former sense as a set of protocols or procedures, reading resides at the center of disciplinary debates as newly formed schools, theories, or methods rise to challenge dominant notions of understanding literature, film, painting, and other forms. Frequently, these debates focus on tensions between binary oppositions (real or presumed): casual versus professional reading (or fast vs. slow), surface reading versus symptomatic reading, close reading versus distant reading, and others. Like the term “reading,” readers are variously described as “informed,” “ideal,” “implied,” and more. In some theoretical formulations, they are anticipated by texts; in others, readers produce or complete them by filling lacunae or conducting other tasks. Complicating matters further, reading also exists in close proximity to several other terms with which it is often associated: interpretation, criticism, and critique. 
Issues of “textuality” introduce yet another factor in disagreements about the priorities of critical reading, as notions of a relatively autonomous or closed work or object have been supplanted by a focus on both historical context and a work’s “intertextuality,” or its inevitable relationship to, even quotation of, other texts. In the latter sense of a reading as an intellectual or scholarly product, more variables inform definitions. Every reading of a text, as Paul Ricoeur describes, “takes place within a community, a tradition, or a living current of thought.” The term “reading” is complicated not only because of the thing studied but also because of both the historically grounded human subject undertaking the activity and the disciplinary expectations shaping and delimiting the interpretations they produce. And, in the 21st century, technologies and practices have emerged to revise these conversations, including machine learning, computational modeling, and digital textuality.
The presence (or absence) of compositional precursors and leftovers raises for critics and editors methodological, epistemological, ethical, and aesthetic questions: What gets collected and preserved? What does not—for what reasons? How can these materials be interpreted? And to what ends? A draft may refer to written materials that never attain printed form as well as early manuscript compositions and fair copies, typescripts, digital text, scribbles, doodles, leftovers, or other marginalia and extraneous materials that may or may not find their way into archives. The manuscript draft came of age following the invention of printing, although unfinished or working drafts only began to be self-consciously collected with the emergence of the state archive in the late 18th century. The draft is, therefore, intimately connected to the archival, whether the archive is taken as a material site, a discursive structure, or a depository of feeling. Any interpretation of drafts must take into account the limits and limitations of matter, including the bare fact of a draft’s material existence or its absence. In the 20th and 21st centuries, a diverse network of theoretical approaches to interpreting drafts and compositional materials has evolved. Scholars of drafts may ask questions about authorship, materiality, production, technology and media, pedagogy, social norms and conventions, ownership and capital, preservation or destruction, even ethics and ontology. 
However, these investigations have been most pronounced within four fields: (a) media theory, histories of the book, and historical materialisms that investigate the substance, matter, and means of production of drafts, as well as the technological, pedagogical, and social norms that mediate writing and the cultural/historical specifics of these materials and media; (b) textual editing, which establishes methods that regularize (or complicate) how scholarly editions are produced, and the related mid-20th-century New Bibliography approaches, which illuminated some of the limitations of manuscript-and-edition-blind close reading, especially by the New Critics; (c) French genetic criticism in the late 20th and early 21st centuries, which engages with French post-structuralism and psychoanalysis to look at writing as a dynamic and developmental process with both conscious and unconscious components; and (d) legal scholarship and debates concerning rights to the ownership and possession of manuscripts and drafts and their publication, which developed between the 17th and 21st centuries. These discussions, and their elaboration within national and international legislation, resulted in the invention of copyright and moral rights, and in a changed understanding of legal rights to privacy and property, as well as a division between material and intellectual property, the use and destruction of that property, and the delineation of the rights of the dead or the dead’s descendants. The draft manuscript came to be endowed with multiple bodies, both fictive and actual, for which individuals, institutions, corporations, and even nations or the world at large were granted partial ownership or responsibility. From the late 19th century, the catastrophic legacy of modern warfare and its technologies, including censorship, as well as movements in historical preservation, cultural heritage, and ethics, have affected policies regarding the ownership and conservancy of drafts. 
The emergence of digital and online textual production, dissemination, and preservation in the late 20th and 21st centuries has broadly transformed the ways that drafts may be attended to and even thought. Drafts must finally be seen to have a complex and intimate relationship to the authorial body and to embodiment, materiality, subjectivity, and writing more generally. Drafts—particularly unread, missing, or destroyed drafts—lie at the border between the dead object and the living text. As such, the purposeful destruction of drafts and manuscripts initiates an ontological and ethical crisis that raises questions about the relationship between writing and being, process and product, body and thing.