
Article

Rossana De Angelis

The concept of “text” is ambiguous: it can refer at the same time to a concrete reality and to an abstract one. Indeed, a text presents itself both as an empirical object subject to analysis and as an abstract object constructed by the analysis itself. This duplicity characterizes the development of the concept throughout the 20th century. Different theories of language also entail different understandings of “text”: a restricted use as written text, an extensive use as written and spoken text, and an expanded use as any written, verbal, gestural, or visual manifestation. The concept of “text” also presupposes two other concepts: from a generative point of view, a process by which something becomes a text (textualization); from an interpretative point of view, a process by which something can be interpreted as a text (textuality).

In textual linguistics, “text” is considered at the same time as an abstract object, derived from a specific theoretical approach, and as a concrete object, the linguistic phenomenon from which the analysis starts. Here textuality appears as a global quality of the text, arising from the interlacing of the sentences that compose it. In linguistics, the definition of textuality therefore depends on the definition of text. For instance, M. A. K. Halliday and Ruqaiya Hasan define textuality through the concepts of “cohesion” and “coherence.” Cohesion is a necessary condition of textuality, because it enables the text to be perceived as a whole, but it is not sufficient to explain it: to be interpreted as a whole, the elements composing the text must also be coherent with one another. According to Robert-Alain de Beaugrande and Wolfgang Ulrich Dressler, however, cohesion and coherence are only two of the seven principles of textuality, the other five being intentionality, acceptability, informativity, situationality, and intertextuality.

Textual pragmatics deals with a more complex problem: that of the text conceived as an empirical object. Here the text is presented as a unit captured in a communication process, “a communicative unit.” Considered from a pragmatic point of view, every single unit composing a text constitutes an instruction for meaning. Since the 1970s, textual pragmatics, which analyzes the connections between texts and contexts, has been an important source of inspiration for textual semiotics. In the semiotic theory of language proposed by Louis T. Hjelmslev, the concept of “text” is conceived above all as a process and a “relational hierarchy.” Furthermore, according to Hjelmslev, textuality consists in the “mutual dependencies” that compose a whole, making the text an “absolute totality” to be interpreted by readers and analyzed by linguists. Since texts are composed of a network of connections at both local and global levels, their analysis depends on the possibility of reconstructing the relation between the global and local dimensions. For this reason, François Rastier suggests that, in order to capture the meaning of a text, semantic analysis must identify semantic forms at different levels. Textuality thus arises from the articulation between semantic and phemic forms (content and expression), and from the semantic and phemic roots from which those forms emerge. Textuality allows the reader to identify the interpretative paths through which to understand the text; this complex dynamic is at the foundation of the idea of textuality.
Now that texts are available in digital form, researchers have developed several methods and tools to exploit digital texts and discourses, methods which at the same time embody different approaches to meaning. Text mining is based on a simple principle: the identification and processing of textual contents in order to extract knowledge. Using digital tools, intra-textual and inter-textual links can be visualized on the screen as lists or tables of results, which permits the analysis of the occurrences and frequencies of certain textual elements composing the digital texts. Another idea of text thus becomes visible to the linguist: not the classical one belonging to the culture of printed texts, but a new one typical of the culture of digital texts and of their textuality.
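As a rough illustration of the principle just described, the counting of occurrences and frequencies of textual elements, and not a method proposed in the article, a minimal sketch in Python might look as follows, assuming plain-text input and a naive tokenization:

    import re
    from collections import Counter

    def term_frequencies(text, top_n=10):
        """Return the most frequent word forms in a text with their counts."""
        # Lowercase the text and extract alphabetic tokens; a real text-mining
        # pipeline would add lemmatization, stop-word filtering, and so on.
        tokens = re.findall(r"[a-zà-ÿ]+", text.lower())
        return Counter(tokens).most_common(top_n)

    if __name__ == "__main__":
        sample = ("The concept of text is ambiguous: the text is both a concrete "
                  "object and an abstract object constructed by the analysis.")
        for word, count in term_frequencies(sample, top_n=5):
            print(word, count)

Such a count of surface forms is only a first step toward the knowledge extraction the abstract alludes to; the lists and tables of results it mentions are typically built from exactly this kind of occurrence data.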

Article

E-text  

Niels Ole Finnemann

Electronic text can be defined on two different, though interconnected, levels. On the one hand, electronic text can be defined by taking the notion of “text” or “printed text” as the point of departure; on the other hand, it can be defined by taking the digital format, in which everything is represented in the binary alphabet, as the point of departure. While the notion of text in most cases lends itself to being independent of medium and embodiment, it is also often tacitly assumed that it is in fact modeled on the print medium rather than, for instance, on handwritten text or speech. In the late 20th century, the notion of “text” was subjected to increasing criticism, as can be seen in the question raised in literary text theory of whether “there is a text in this class.” At the same time, the notion was expanded to include extralinguistic sign modalities (images, videos). A basic question, therefore, is whether electronic text should be included in the enlarged notion of text as a new digital sign modality added to the repertoire of modalities, or whether it should be included as a sign modality that is both an independent modality and a container that can hold other modalities.

In the first case, the notion of electronic text would be paradigmatically formed around the e-book, conceived as a digital copy of a printed book, that is, as a deliberately closed work. Even closed works in digital form, however, need some sort of interface and hypertextual navigation, which together constitute a particular kind of paratext required for accessing any sort of digital material. In the second case, the electronic text is defined by the representation of content and (some parts of the) processing rules as binary sequences manifested in the binary alphabet. This wider notion would include, for instance, all sorts of scanning results, whether of the outer cosmos or the interior of our bodies, as well as digital traces of other processes in between (machine readings included). Since other alphabets, such as the genetic alphabet, and all sorts of images may also be represented in the binary alphabet, such materials also belong to the textual universe within this definition. A more intriguing implication is that born-digital materials may also include scripts and interactive features as intrinsic parts of the text.

The two notions define the text on different levels: one is centered on the Latin alphabet, the other on the binary alphabet, and both definitions include hypertext, interactivity, and multimodality as constituent parameters. In the first case, hypertext is included as a navigational, paratextual device; in the second case, hypertext is also incorporated in the narrative within an otherwise closed work, or as a constituent element of the textual universe of the web, where it serves the ongoing production of (possibly scripted) connections and disconnections between blocks of textual content. Since the early decades of the 21st century still represent only the very early stages of the globally distributed universe of web texts, this is also a history of the gradual unfolding of the dimensions of these three constituents: hypertext, interactivity, and multimodality. The result is a still-expanding repertoire of genres, including some that emerge via path dependency, some via remediation, and some as new genres unique to networked digital media, including “social media texts” and a growing variety of narrative and discursive multiple-source systems.
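As a purely illustrative sketch that goes beyond the abstract itself, the following Python snippet shows what representation in the binary alphabet means at the lowest level, displaying one short text as characters, as UTF-8 byte values, and as a bit sequence:

    # Illustrative sketch: one short text at three levels of representation.
    text = "E-text"

    # 1. As a sequence of characters (the Latin-alphabet level).
    print(list(text))

    # 2. As a sequence of byte values under the UTF-8 encoding.
    encoded = text.encode("utf-8")
    print(list(encoded))

    # 3. As a sequence of bits (the binary-alphabet level).
    bits = " ".join(f"{byte:08b}" for byte in encoded)
    print(bits)

Images, scans, and scripts can be represented in the same binary form, which is why the wider notion of electronic text described above can treat all such materials as part of one textual universe.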