1–8 of 8 results for: British and Irish Literatures · Literary Theory · 19th Century (1800–1900)


The Chapter  

Nicholas Dames

First known as a kephalaion in Greek, capitulum or caput in Latin, the chapter arose in antiquity as a finding device within long, often heterogeneous prose texts, prior even to the advent of the codex. By the 4th century CE, it was no longer unusual for texts to be composed in capitula; but it is with the advent of the fictional prose narratives we call the novel that the chapter, both ubiquitous and innocuous, developed into a compositional practice with a distinct way of thinking about biographical time. A technique of discontinuous reading or “consultative access” that finds a home in a form built for continuous, immersive reading, the chapter is a case study in adaptive reuse and slow change. One of the primary ways the chapter became a narrative form rather than just an editorial practice is through the long history of the chaptering of the Bible, particularly the various systems for chaptering the New Testament, which culminated in the early 13th-century formation of the biblical chaptering system still in use across the West. Biblical chapters formed a template for how to segment ongoing plots or actions, one taken up by writers, printers, and editors from the late medieval period onward; pivotal examples include William Caxton’s chaptering of Thomas Malory’s Morte d’Arthur in his 1485 printing of the text, or the several mises en prose of Chrétien de Troyes’s poems carried out in the Burgundian court circle of the 15th century. By the 18th century, a vibrant set of discussions, controversies, and experiments with chapters was characteristic of the novel form, which increasingly used chapter titles and chapter breaks to meditate upon how different temporal units understand human agency in different ways.
With the eventual dominance of the novel in 19th-century literary culture, the chapter was honed into a way of thinking about the segmented nature of biographical memory, as well as the temporal frames—the day, the year, the episode or epoch—in which that segmenting occurs; chapters in this period were of an increasingly standard size, although still lacking any formal rules or definition. Modernist prose narratives often played with the chapter form, expanding it or drastically shortening it, but these experiments tended to reaffirm the unit of the chapter as a significant measure by which we make sense of human experience.


Close Reading  

Mark Byron

Close reading describes a set of procedures and methods that distinguishes the scholarly apprehension of textual material from the more prosaic reading practices of everyday life. Its origins and ancestry are rooted in the exegetical traditions of sacred texts (principally from the Hindu, Jewish, Buddhist, Christian, Zoroastrian, and Islamic traditions) as well as the philological strategies applied to classical works such as the Homeric epics in the Greco-Roman tradition, or the Chinese 詩經 (Shijing) or Classic of Poetry. Cognate traditions of exegesis and commentary formed around Roman law and the canon law of the Christian Church, and they also find expression in the long tradition of Chinese historical commentaries and exegeses on the Five Classics and Four Books. As these practices developed in the West, they were adapted to medieval and early modern literary texts, out of which the early manifestations of modern secular literary analysis came into being in European and American universities. Close reading comprises the methodologies at the center of literary scholarship as it developed in the modern academy over the past one hundred years or so, and has come to define a central set of practices that dominated scholarly work in English departments until the turn to literary and critical theory in the late 1960s. This article provides an overview of these dominant forms of close reading in the modern Western academy. The focus rests upon close reading practices and their codification in English departments, although reference is made to non-Western reading practices and philological traditions, as well as to significant nonanglophone alternatives to the common understanding of literary close reading.



The Grotesque

Rune Graulund

Defining the grotesque in a concise and objective manner is notoriously difficult. When researching the term for his classic study On the Grotesque: Strategies of Contradiction in Art and Literature (1982), Geoffrey Galt Harpham observed that the grotesque is hard to pin down because it is defined in opposition to something rather than by any defining quality it possesses in and of itself. Any attempt to identify specific grotesque characteristics outside of a specific context is therefore challenging for two reasons. First, the grotesque is that which transgresses and challenges what is considered normal, bounded, and stable, meaning that one of the few universal and fundamental qualities of the grotesque is that it is abnormal, unbounded, and unstable. Second, since even the most rigid norms and boundaries shift over time, that which is defined in terms of opposition and transgression naturally changes as well, meaning that the grotesque has meant very different things in different historical eras. For instance, as Olli Lagerspetz points out in A Philosophy of Dust (2018), while 16th-century aristocrats in France may routinely have received guests while sitting on their night stools, similar behavior exhibited today would surely be interpreted not only as out of the ordinary but as grotesque. Likewise, perceptions of the normal and the abnormal vary widely even within the same time period, depending on one’s class, gender, race, profession, sexual orientation, cultural background, and so on.



Irony

Claire Colebrook

Irony is both a figure of speech and a mode of existence or attitude toward life. Deriving from the ancient Greek term eironeia, which originally referred to lying, irony became a complex philosophical and rhetorical term in Plato’s dialogues. Plato (428/427 or 424/423–348/347 BCE) depicts Socrates deploying the method of elenchus, where, rather than proposing a theory, Socrates encounters others in conversation, drawing out the contradictions and opacities of their arguments. Often these dialogues would take a seemingly secure concept and push the questioning to a final moment of non-knowledge or aporia, exposing a gap in a discourse that his interlocutors had thought was secure. Here, Socratic irony can be thought of both as a particular philosophical method and as the way in which Socrates chose to pursue his life, always questioning the truth of key ethical concepts. In the Roman rhetorical tradition irony was theorized as a rhetorical device by Cicero (106–43 BCE) and Quintilian (c. 35–c. 96 CE), and it was this sense of irony that remained dominant until the 18th century. At that time, and in response to the elevation of reason in the Enlightenment, satire enjoyed a resurgence: the rigorous logic of reason was repeated in a parodic manner. Out of this climate modern irony emerged, subtly different from satire in that it did not simply lampoon its target but suggested a position of refined and superior distance. The German philosopher G. W. F. Hegel (1770–1831) was highly critical of what came to be known as Romantic irony, which differed from satire in that it suggested a subtle distance from everyday discourse, with no clear position of its own. This tendency for irony to negate truth claims without advancing any clear position of its own became ever more intense in the 20th century with postmodern irony, where irony was no longer a rhetorical device but a manner of existing with no clear commitment to any values or beliefs.
Alongside this tradition of irony as a distanced relation to one’s own speech acts, there is also a tradition of dramatic, cosmic, tragic, or fateful irony, in which events seem to act against human intentions, or human ambition appears to be thwarted by a universe that almost seems to judge human existence from on high.


Literature and Science  

Michael H. Whitworth

Though “literature and science” has denoted many distinct cultural debates and critical practices, the historicist investigation of literary-scientific relations is of particular interest because of its ambivalence toward theorization. Some accounts have suggested that the work of Bruno Latour supplies a necessary theoretical framework. An examination of the history of critical practice demonstrates that many concepts presently attributed to or associated with Latour have been longer established in the field. Early critical work, exemplified by Marjorie Hope Nicolson, tended to focus one-sidedly on the impact of science on literature. Later work, drawing on Thomas Kuhn’s idea of paradigm shifts, and on Mary Hesse’s and Max Black’s work on metaphor and analogy in science, identified the scope for a cultural influence on science. This approach was further bolstered by the “strong program” in the sociology of scientific knowledge, especially the work of Barry Barnes and David Bloor. It found ways of reading scientific texts for traces of the cultural, and literary texts for traces of science; the method is implicitly modeled on psychoanalysis. Bruno Latour’s accounts of literary inscription, black boxing, and the problem of explanation have precedents in the critical practices of critics in the field of literature and science from the 1980s onward.


Modern Manuscripts  

Dirk Van Hulle

The study of modern manuscripts to examine writing processes is termed “genetic criticism.” A current trend that is sometimes overdramatized as “the archival turn” is a result of renewed interest in this discipline, which has a long tradition situated at the intersection between modern book history, bibliography, textual criticism, and scholarly editing. Handwritten documents are called “modern” manuscripts to distinguish them from medieval or even older manuscripts. Whereas most extant medieval manuscripts are scribal copies and fit into a context of textual circulation and dissemination, modern manuscripts are usually autographs for private use. Traditionally, the watershed between older and “modern” manuscripts is situated around the middle of the 18th century, coinciding with the rise of the so-called Geniezeit, the Sturm und Drang (Storm and Stress) period in which the notion of “genius” became fashionable. Authors such as Goethe carefully preserved their manuscripts. This new interest in authors’ manuscripts can be seen as part of the “genius” ideology: since a draft was regarded as the trace of a thought process, a manuscript was the tangible evidence of capital-G “Genius” at work. But this division between modern and older manuscripts needs to be nuanced, for there are of course autograph manuscripts with cancellations and revisions from earlier periods, which are equally interesting for manuscript research. Genetic criticism studies the dynamics of creative processes, discerning a difference between the part of the genesis that takes place in the author’s private environment and the continuation of that genesis after the work has become public. But the genesis is often not a linear development from “before” to “after” publication; rather, it can be conceptualized by means of a triangular model.
The three corners of that model are endogenesis (the “inside” of a writing process, the writing of drafts), exogenesis (the relation to external sources of inspiration), and epigenesis (the continuation of the genesis and revision after publication). At any point in the genesis there is the possibility that exogenetic material may color the endo- or the epigenesis. In the digital age, archival literary documents are no longer coterminous with a material object. But that does not mean the end of genetic criticism. On the contrary, an exciting future lies ahead. Born-digital works require new methods of analysis, including digital forensics, computer-assisted collation, and new forms of distant reading. The challenge is to connect to methods of digital text analysis by finding ways to enable macroanalysis across versions.



Realism

Alison Shonkwiler

Realism is a historical phenomenon that is not of the past. Its recurrent rises and falls only attest to its persistence as a measure of representational authority. Even as literary history has produced different moments of “realism wars” over the politics of realist versus antirealist aesthetics, the demand to represent an often strange and changing reality—however contested a term that may be—guarantees realism’s ongoing critical future. Undoubtedly, realism has held a privileged position in the history of Western literary representation. Its fortunes are closely linked to the development of capitalist modernity, the rise of the novel, the emergence of the bourgeoisie, and the expansion of middle-class readerships with the literacy and leisure to read—and with an interest in reading about themselves as subjects. While many genealogies of realism are closely tied to the history of the rise of the novel—with Don Quixote as a point of departure—it is from its later, 19th-century forms that critical assumptions have emerged about its capacities and limitations. The 19th-century novel—whether its European or slightly later American version—is taken as the apex of the form and is tied to the rise of industrial capitalism, burgeoning ideas of social class, and the expansion of empire. Although many of the realist writers of the 19th century were self-reflexive about the form, and often articulated theories of realism as distinct from romance and sentimental fiction, it was not until the mid-20th century, following the canonization of modernism in English departments, that a full-fledged critical analysis of realism as a form or mode would take shape. Our fullest articulations of realism therefore owe a great deal to its negative comparison to later forms—or, conversely, to the effort to resuscitate realism’s reputation against perceived critical oversimplifications.
In consequence, there is no single definition of realism—nor even agreement on whether it is a mode, form, or genre—but an extraordinarily heterogeneous set of ways of approaching it as a problem of representation. Standard early genealogies of realism are to be found in historical accounts such as Ian Watt’s The Rise of the Novel and György Lukács’s Theory of the Novel and The Historical Novel, with a guide to important critiques and modifications to be found in Michael McKeon’s Theory of the Novel. This article does not retrace those critical histories. Nor does it presume to address the full range of realisms in the modern arts, including painting, photography, film, and video and digital arts. It focuses on the changing status of realism in the literary landscape, uses the fault lines of contemporary critical debates about realism to refer back to some of the recurrent terms of realism/antirealism debates, and concludes with a consideration of the “return” to realism in the 21st century.


Textual Studies  

Mark Byron

Textual studies describes a range of fields and methodologies that evaluate how texts are constituted both physically and conceptually, document how they are preserved, copied, and circulated, and propose ways in which they might be edited to minimize error and maximize the text’s integrity. The vast temporal reach of the history of textuality—from oral traditions spanning thousands of years and written forms dating from the 4th millennium BCE to printed and digital text forms—is matched by its geographical range covering every linguistic community around the globe. Methods of evaluating material text-bearing documents and the reliability of their written or printed content stem from antiquity, often paying closest attention to sacred texts as well as to legal documents and literary works that helped form linguistic and social group identity. With the advent of the printing press in the early modern West, the rapid reproduction of text matter in large quantities had the effect of corrupting many texts with printing errors as well as providing the technical means of correcting such errors more cheaply and quickly than in the preceding scribal culture. From the 18th century, techniques of textual criticism were developed to attempt systematic correction of textual error, again with an emphasis on scriptural and classical texts. This “golden age of philology” slowly widened its range to consider such foundational medieval texts as Dante’s Commedia as well as, in time, modern vernacular literature. The technique of stemmatic analysis—the establishment of family relationships between existing documents of a text—provided the means for scholars to choose between copies of a work in the pursuit of accuracy.
In the absence of original documents (manuscripts in the hand of Aristotle or the four Evangelists, for example), the choice between existing versions of a text was often made eclectically—that is, drawing on multiple versions—and thus was subject to such considerations as the historic range and geographical diffusion of documents, the systematic identification of common scribal errors, and matters of translation. As the study of modern languages and literatures consolidated into modern university departments in the later 19th century, new techniques emerged with the aim of providing reliable literary texts free from obvious error. This aim had in common with the preceding philological tradition the belief that what a text means—discovered in the practice of hermeneutics—was contingent on what the text states—established by an accurate textual record that eliminates error by means of textual criticism. The methods of textual criticism took several paths through the 20th century: the Anglophone tradition centered on editing Shakespeare’s works by drawing on the earliest available documents—the printed Quartos and Folios—developing into the Greg–Bowers–Tanselle copy-text “tradition,” which was then deployed as a method by which to edit later texts. The status of variants in modern literary works with multiple authorial manuscripts—not to mention the existence of competing versions of several of Shakespeare’s plays—complicated matters sufficiently that editors looked to alternate editorial models. Genetic editorial methods draw in part on German editorial techniques, collating all existing manuscripts and printed texts of a work in order to provide a record of its composition process, including epigenetic processes following publication.
The French methods of critique génétique also place the documentary record at the center, where the dossier is given priority over any one printed edition, and poststructuralist theory is used to examine the process of “textual invention.” The inherently social aspects of textual production—the author’s interaction with agents, censors, publishers, and printers and the way these interactions shape the content and presentation of the text—have reconceived how textual authority and variation are understood in the social and economic contexts of publication. And, finally, the advent of digital publication platforms has given rise to new developments in the presentation of textual editions and manuscript documents, displacing copy-text editing in some fields such as modernism studies in favor of genetic or synoptic models of composition and textual production.