Big Data and Visuality

  • Janet Chan, UNSW Sydney, Department of Law

Summary

Internet and telecommunications, ubiquitous sensing devices, and advances in data storage and analytic capacities have heralded the age of Big Data, where the volume, velocity, and variety of data not only promise new opportunities for the harvesting of information, but also threaten to overload existing resources for making sense of this information. The use of Big Data technology for criminal justice and crime control is a relatively new development. Big Data technology has overlapped with criminology in two main areas: (a) Big Data is used as a type of data in criminological research, and (b) Big Data analytics is employed as a predictive tool to guide criminal justice decisions and strategies. Much of the debate about Big Data in criminology is concerned with legitimacy, including privacy, accountability, transparency, and fairness.

Big Data is often made accessible through data visualization. Big Data visualization is a performance that simultaneously masks the power of commercial and governmental surveillance and renders information political. The production of visuality operates in an economy of attention. In crime control enterprises, future uncertainties can be masked by affective triggers that create an atmosphere of risk and suspicion. There have also been efforts to mobilize data to expose harms and injustices and garner support for resistance. While Big Data and visuality can perform affective modulation in the race for attention, the impact of data visualization is not always predictable. By removing the visibility of real people or events and by aestheticizing representations of tragedies, data visualization may achieve further distancing and deadening of conscience in situations where graphic photographic images might at least garner initial emotional impact.

Subjects

  • Crime, Media, and Popular Culture

The Rise of Big Data

Internet and telecommunications, ubiquitous sensing devices, and advances in data storage and analytic capacities have heralded the age of Big Data, where the volume, velocity, and variety of data promise new opportunities for the harvesting of information, but threaten to overload existing resources for making sense of this information. The use of Big Data for criminal justice and crime control is a relatively new development. Chan and Bennett Moses (2016a) have observed that there are two main areas where Big Data has overlapped with criminology: (a) Big Data is used as a type of data in criminological research, and (b) Big Data analytics is employed as a predictive tool to guide criminal justice decisions and strategies. Much of the debate about Big Data in criminology is concerned with legitimacy, including privacy, accountability, transparency, and fairness (Bennett Moses & Chan, 2014, 2016). This article focuses on the visualization of Big Data and explores its implications for criminal justice.

The article unfolds as follows. The next section introduces how Big Data is defined and outlines its potential benefits and pitfalls. This is followed by the proposition that Big Data technology (including algorithms and visualizations) is performative rather than representational. The performance of Big Data visualization is then analyzed in terms of the meaning of visuality, the economy of attention, the politicality of visualization, and the affective consequences of visual representation. The concluding section discusses the implications of this analysis for visual criminology.

Big Data: Potential Benefits and Pitfalls

The term Big Data has been understood in a variety of ways (see Bennett Moses & Chan, 2014; Chan & Bennett Moses, 2016a, 2016b). A popular definition, focusing mainly on data alone, refers to the three Vs: the volume, velocity, and variety of information that is currently being produced and processed. Related to these characteristics of the data is a range of new software tools used to capture, manage, summarize, analyze, and represent this deluge of information. As Chan and Bennett Moses (2016b, p. 1) point out, some see Big Data as a vaguely defined marketing term, preferring instead to focus on “data science” or “data analytics.” Taking a broader perspective, boyd and Crawford (2012, p. 663) describe Big Data as “a cultural, technological, and scholarly phenomenon” involving technology, analysis, and mythology.

The potential benefits of Big Data technology have been widely touted, especially in commercial contexts (Manyika et al., 2011; Morozov, 2013). As Chan and Bennett Moses (2016b, p. 21) point out, Big Data has also been promoted as a powerful tool for policing and security (Olesker, 2012; Podesta, Pritzker, Moniz, Holdren, & Zients, 2014; Staniforth & Akhgar, 2015; Wyllie, 2013). More specifically, the use of predictive tools for forecasting where and when crime will occur is attracting a great deal of interest among police organizations, especially in the United States (see Bennett Moses & Chan, 2016).

After the initial hype, the potential pitfalls of Big Data have been more widely discussed. It turns out that some of the claims made on behalf of Big Data—that sampling is not necessary because we now have access to all the data, that data accuracy is no longer a problem, and that correlation is “good enough” (Anderson, 2008; Mayer-Schönberger & Cukier, 2013)—are not entirely true. These claims have been openly contested in the popular media; as one commentator observed, “the idea that ‘with enough data, the numbers speak for themselves’ . . . seems hopelessly naïve in data sets where spurious patterns vastly outnumber genuine discoveries” (Harford, 2014). A number of other commentators have noted that Big Data based on convenience samples or subsets is very likely to be biased (boyd & Crawford, 2012; Crawford, 2013; Harford, 2014; Zoldan, 2013). Far from being irrelevant, data quality is likely to be just as important in the context of Big Data (Chan & Bennett Moses, 2016a; Tonelli, 2016). Relying on correlations alone is problematic, especially when criminal justice interventions are considered (Chan & Bennett Moses, 2016a; Harcourt, 2007). The list of pitfalls does not end there. As Marcus and Davis (2014) observe, tools based on Big Data “can easily be gamed” and the results “often turn out to be less robust than they initially seem”; above all, they argue that the hype generated by champions of Big Data must be put in perspective:

big data can work well as an adjunct to scientific inquiry, but rarely succeeds as a wholesale replacement . . . big data is prone to giving scientific-sounding solutions to hopelessly imprecise questions . . . Big data can reduce anything to a single number, but you shouldn’t be fooled by the appearance of exactitude . . . It’s an important resource for anyone analyzing data, not a silver bullet.

(Marcus & Davis, 2014; see also Drozhzhin, 2016)

Not only can Big Data make mistakes that give rise to vicious cycles (Marcus & Davis, 2014), but these mistakes can also be broadcast universally and their damage magnified “in nanoseconds” because of “our enhanced technology, global interconnectivity, and huge data sizes” (Zoldan, 2013).
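Harford’s point can be made concrete with a short simulation. The following sketch (illustrative only, and not drawn from any of the studies cited here) correlates thousands of purely random “features” with a purely random outcome; at conventional significance thresholds, hundreds of “discoveries” appear, every one of them noise.

```python
# A minimal simulation sketch (mine, not from any study cited above):
# with enough purely random "features," conventionally "significant"
# correlations appear by the hundreds, all of them spurious.
import numpy as np

rng = np.random.default_rng(0)
n_rows, n_features = 100, 10_000

X = rng.normal(size=(n_rows, n_features))  # random "measurements"
y = rng.normal(size=n_rows)                # random "outcome"

# Correlate every feature with the outcome.
r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(n_features)])

# |r| > 0.197 corresponds roughly to p < 0.05 (two-tailed) at n = 100.
print((np.abs(r) > 0.197).sum())  # ~500 "discoveries", all noise
```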

The Performativity of Big Data and Algorithms

In spite of these pitfalls, Big Data is not about to stop growing, either in volume or in prominence. It is therefore important to examine the nature of Big Data behind all the hype. Some researchers have questioned the assumption that Big Data is primarily a representation of reality. Matzner (2016), for example, points to two distinctive characteristics of Big Data that set it apart from conventional statistical representation of data. First, the inductive, exploratory approach of Big Data analytics stretches the old saying “the data speaks for itself” to “data tells us what to look for” (2016, pp. 199–200). Second, the decoupling of data generation from data analysis means that not only are “data from one context . . . analysed in another context,” but “data become data for a program only in the moment a request is made to that program” (2016, pp. 199–200). In other words, data does not have a meaning independent of its usage. Using data for surveillance, for example, changes the meaning of the data. Similarly, data subjects do not have an independent identity apart from their “data doubles.” Hence Matzner (2016) has argued that data is performative rather than representational.

In arguing that data is performative, Matzner (2016) drew on Butler’s (1990, 1993) theory of performativity. Butler in turn developed her theory from Austin’s (1975) concept of “performative utterances,” which, through naming (e.g., pronouncing two persons man and wife in a nuptial ceremony), produce the entities being named (see also Hall, 2000). The power of these utterances (or speech acts) derives not from the intention or authority of the speaker, but from their citationality, “its relation to similar acts, where particular structures of power and authority are already established” (Matzner, 2016, p. 205). Thus Big Data literally makes up data subjects, and data subjects are themselves complicit in this performance: “We cite all kinds of data in becoming who we are” (2016, pp. 205–207). In situations of surveillance, Big Data produces suspects:

The recombining, relating, and moving to different contexts of data, which happens in data-based surveillance . . . assembles the authority to produce a new subject . . . by “calling” it a suspect . . . Somebody who is stopped at a border, denied a visa, or excluded from boarding a plane based on Big Data becomes a subject for the respective authorities in the very moment these verdicts happen. It is quite probably that there has been no prior relation of these authorities and the subject. A lot of the data-processing goes on in the background and does not engage with subjects at all, nor does it concern data about subjects . . . But such data-processing assembles the resources for ad hoc, heterogeneous acts with strong subjectivizing force. The promise of Big Data is to have enough data ready—enough resources to cite—that allows [it] to “judge” everyone.

(Matzner, 2016, pp. 206–207)

A great deal of what Matzner (2016) refers to as “data processing” is carried out by computer programs in the form of algorithms. Thus much of the performance of Big Data is not the work of data per se but of algorithms. As Amoore and Piotukh (2016, p. 5) point out, algorithmic calculative devices in the age of Big Data are “re-making our world in important ways.” First, by filtering what can be seen, these devices “create novel ways of perceiving the world and new visibilities and invisibilities” (2016, p. 5). For policing and security agencies, algorithms can help “draw the needle out of a haystack” by processing “a vast background of structured and unstructured data” (Amoore & Piotukh, 2016, p. 7; Great Britain, House of Commons, 2015). Second, these devices, together with cloud computing and information-sharing regimes, are changing conceptions of territory and sovereignty. Third, algorithmic devices in the age of Big Data “significantly reorient the temporalities of our world”: for example, “advanced event stream analytics” is transforming “the temporal relations of past, present and future, as close to ‘real-time’ event data is processed in association with stored data on past events, in anticipation of a future that may be seconds away” (Amoore & Piotukh, 2016, pp. 7–8). Finally, these devices “transform the nature of human subjectivity” by using “new forms of data aggregation and knowledge discovery,” creating advanced models of human behavior as well as by commodifying the “propensities and tendencies of life” (2016, p. 9, original emphasis).
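The temporal reorientation Amoore and Piotukh describe can be sketched in code. The fragment below is a schematic illustration rather than their method: each incoming event is read against stored data on past events, and an anticipatory alert is raised the moment the accumulated record crosses a threshold; all identifiers and the threshold itself are hypothetical.

```python
# A schematic sketch (not Amoore and Piotukh's method) of event stream
# analytics: each near real-time event is read against stored data on
# past events, raising an anticipatory alert the moment the accumulated
# record crosses a threshold. All names and the threshold are hypothetical.
from collections import defaultdict

history = defaultdict(list)  # stored data on past events, keyed by subject

def on_event(subject_id, event, alert_threshold=3):
    """Process one streaming event in association with stored history."""
    past = history[subject_id]
    if len(past) + 1 >= alert_threshold:
        # The "future that may be seconds away" is acted on in the present.
        print(f"anticipatory alert for {subject_id}: {event!r} "
              f"follows {len(past)} prior events")
    past.append(event)

for evt in [("p1", "border_crossing"), ("p1", "cash_transfer"),
            ("p2", "border_crossing"), ("p1", "ticket_purchase")]:
    on_event(*evt)
```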

The idea that program code or software is performative is not new. Over 10 years ago, Mackenzie (2005, p. 73) analyzed the Linux kernel as “a form of collective agency in the process of constituting itself”; this constitution is “performative with respect to the efficacy of Linux as a technical object and with respect to the fabrication of Linux as a cultural entity.” More recently, Introna (2016) has drawn on Foucault’s governmentality framework to consider the performative nature of algorithms (in this case, Turnitin) as a technology of governance that produces suspects and enacts self-governing subjects. An algorithm such as Turnitin functions as a technology to govern plagiarism by calculating a “similarity index” between documents and assigning one of five color codes (blue, green, yellow, orange, and red) to the degree of matching. Introna (2016, p. 21) has argued that algorithms—usually summarized as “logic + control”—should not be understood solely in terms of what they do, but more in relation to their “implicit operations or assumptions” that are not obvious. The danger of algorithms is often conceived in terms of their inscrutability (they are difficult to understand) and executability (they can operate automatically) (2016, p. 25). This danger is particularly acute with Big Data and the use of machine learning or other complex algorithms to predict offending or re-offending (see Bennett Moses & Chan, 2014, 2016; Chan & Bennett Moses, 2016a).
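The similarity-index mechanism Introna describes reduces a graded calculation to a handful of color codes. A minimal sketch of this kind of mapping follows; the band boundaries are illustrative assumptions rather than Turnitin’s published specification.

```python
# A sketch of the kind of mapping Introna describes: a computed
# "similarity index" is reduced to one of five color codes. The band
# boundaries below are illustrative assumptions, not Turnitin's
# published specification.
def similarity_color(similarity_index: float) -> str:
    """Map a similarity index (0-100%) to a color code."""
    if similarity_index == 0:
        return "blue"
    elif similarity_index < 25:
        return "green"
    elif similarity_index < 50:
        return "yellow"
    elif similarity_index < 75:
        return "orange"
    return "red"

print(similarity_color(18.0))  # "green": graded judgment becomes a label
```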

The Performance of Big Data Visualization

The visualization of Big Data introduces a new quality to the performance of Big Data technology—visuality. Before discussing this new quality, it is important to examine the concept of visuality and its relationship to visibility.

Visuality, Visibility, and Big Data

The meaning of the word “visuality” has evolved over the years. As Sand (2012, p. 89) points out, the Oxford English Dictionary’s four definitions of visuality—(a) “The state or quality of being visual or visible to the mind; mental visibility”; (b) “A mental picture or vision”; (c) “Vision, sight”; (d) “Visual aspect or representation; physical appearance”—have overlooked a “newly complex and technical” use of visuality in art history since 1988. In this context, visuality has come to mean “the element of visual experience that is contingent on culture and therefore far more unstable and resistant to description than even the most complex of biological functions” (2012, p. 89). This fifth meaning of the term is explained by Foster (1988, p. ix), when he discusses two aspects of the visual and their connections: “Although vision suggests sight as a physical operation, and visuality sight as a social fact, the two are not opposed as nature to culture: vision is social and historical too, and visuality involves the body and the psyche.” Thus the term visuality relates to both the quality of (physical and mental) visibility and the (social, historical, cultural) construction of visual experience. More recently, Mirzoeff (2011, p. xv) has traced the genealogy of visuality and tied it more directly to authority: “I consider visuality to be both a medium for the transmission and dissemination of authority, and a means for the mediation of those subject to that authority.” For Mirzoeff, visuality is a discursive practice that classifies, separates, and aestheticizes history in such a way that makes the domination of authority seem natural and self-evident. Countervisuality, on the other hand, is “the right to look,” which “claims autonomy from this authority, refuses to be segregated, and spontaneously invents new forms” (2011, p. 4). As Halasz (2013, p. 90) points out, “[a]t stake in these battles between visuality and countervisuality is cultural and political power.”

In his analysis of the work of CCTV operators, Smith (2014, pp. 15–16) conceptualizes operators’ work as a form of “supervisory practice” partly defined by visibility and visuality as “co-relational, iterative, and conjoined entities.” For Smith, CCTV’s supervision of the streets depends on a “visibility-visuality alternation, where visibility exertions (a projected gaze) interweave with visuality spectacles (a reflected outlook)” (2014, p. 13). Smith’s analysis suggests that it is through the “co-productive” processes of “visibility making (revealing things)” and “visuality processing (construing things)” that abnormalities are defined, decisions made, and interventions directed within supervisory systems (2014, p. 21). Empowered to both reveal (e.g., select and focus) and construe (e.g., decide on appropriate action), CCTV camera operators in Smith’s research often act as risk assessors and gatekeepers of police deployment.

The discussion of visibility and visuality so far has assumed the centrality of the human body, especially its capacity for vision. As Paglen (2016) has observed, over the last decade or so, with the massive digitization of images, visual culture “has become detached from human eyes and has largely become invisible”:

What’s truly revolutionary about the advent of digital images is the fact that they are fundamentally machine-readable: they can only be seen by humans in special circumstances and for short periods of time . . . However, the image doesn’t need to be turned into human-readable form in order for a machine to do something with it . . . The fact that digital images are fundamentally machine-readable regardless of a human subject has enormous implications. It allows for the automation of vision on an enormous scale and, along with it, the exercise of power on dramatically larger and smaller scales than have ever been possible.

(Paglen, 2016)

Thus, even where human-to-human sharing (such as sharing photographs on social media) appears to be the primary purpose of some digital platforms, any analogy with previous practices of sharing physical photographs with friends is “deeply misleading”: uploading an image on social media amounts to “feeding an array of immensely powerful artificial intelligence systems information about how to identify people and how to recognize places and objects, habits and preferences, race, class, and gender identifications, economic statuses, and much more.” More fundamentally,

In aggregate, AI systems have appropriated human visual culture and transformed it into a massive, flexible training set. The more images Facebook and Google’s AI systems ingest, the more accurate they become, and the more influence they have on everyday life. The trillions of images we’ve been trained to treat as human-to-human culture are the foundation for increasingly autonomous ways of seeing that bear little resemblance to the visual culture of the past.

(Paglen, 2016)

In spite of this, the performance of Big Data often engages with visuality. The data “deluge” is not necessarily seen as a “river of gold” bearing business opportunities and tools for innovation; more likely it provokes anxieties among “surveillers” about missing clues as well as about overload: “that no matter how much data they have, it is always incomplete, and the sheer volume can overwhelm the critical signals in a fog of possible correlations” (Crawford, 2014). To the rescue come myriad analytical tools that promise to translate the rapidly building mountains of information into simple, accessible visual cues, overcoming the human brain’s limited capacity to digest large volumes of complex information.

As Leese (2016) points out, the “strength of vision,” based on a sense of ocularcentrism, has remained an unshakable assumption among the “exact” sciences, where information visualization (infovis) has emerged as a dominant way of presenting insights from data. Seen as a rigorous and neutral technique, infovis (or dataviz) is nevertheless a matter of design as well as science:

The toolbox of infovis is in fact stacked with elaborate measures of visual representation, including the likes of scatterplots, box plots, heat maps, 3D coordinates, link graphs, treemaps, node-link diagrams, and many more . . . Moreover, taking advantage of design principles that facilitate cognitive access, visual elements such as colours, positions, motions, orientations, size, shape, saturation or hue . . . have been incorporated to enhance the presentation, and thus the understandability of data. By asking questions such as: “What is the best way of visualizing data? What choice of colour best supports the communication of properties we are interested in? Does shape and placement help improve perception?” (as cited in Marty, 2009, p. 9), it becomes quite clear that the infovis community conceives itself as a field that seeks to provide neutral techniques that are completely devoid of politics.

(Leese, 2016, p. 145)

The production of visuality is a political practice: images operate in an economy where visuality is invested with “attention capital” that is enmeshed with various forms of political capital.

The Attention Economy

Georg Franck (1999, 2002) has written about the “scientific economy of attention,” where the attention received by scientists is “capitalised into the asset called reputation” (2002, p. 3). The economy of attention operates slightly differently in risk culture. The routine accumulation of massive volumes of information through communications and sensing devices does not demand attention in itself. Rather, it is the capacity of multinational commercial enterprises to turn these volumes of information into profit that shifts attention to these data collections, giving weight to the label “Big Data.” It is the potential of global intelligence apparatuses to use these data for surveillance and risk identification that alerts community groups and citizens to the dangers of dataveillance and the profiling of suspects. It is the uncomfortable realization that social media information can be manipulated (or automatically generated) to spread false news, incite emotions and actions, and influence thinking and opinions that brings to the attention of users and non-users alike the social harms such communications can cause. Hence data has little intrinsic attention capital independent of its usage and societal reactions to its usage.

Nevertheless, data visualizations are not all equal in commanding attention: some are more memorable than others. In an online experiment, researchers exposed 261 participants to 410 data visualization images to measure how readily each image was recognized on repeat viewing (Borkin, Vo, Isola, Sunkavalli, Oliva, & Pfister, 2013). They found that “higher memorability scores were correlated with visualizations containing pictograms, more color, low data-to-ink ratios, and high visual densities” (2013, p. 2312). Perhaps not surprisingly, diagrams were found to be more memorable than bar charts, line graphs, and tables.

The performance of visuality—both in the sense of making data issues visible and reconfiguring data through visualization—clearly adds value to Big Data in the economy of attention. How attention capital is intertwined with political capital is examined below.

The Performance of Visuality and Its Consequences

Leese (2016) has argued that, in specific cases such as risk flags on PayPal and security alerts in airports, “translations of algorithmic calculations into visualisations” create an “affective startle” that can trigger reactions that override other considerations:

complexity becomes transformed into reductionist binaries that unfold a particularly “dark” set of future narratives, as they turn fine-grained risk assessment into simple threat scenarios that must be acted upon . . . Anticipation then becomes dominated by an affective startle that intentionally seeks to override the full spectrum of contingency and prioritises a particularly bleak set of threatening futures that become folded back into the present.

(Leese, 2016, p. 143)

For Leese, the “politicality of visuality relies on opacity instead of transparency, and on reduction of complexity and contextuality—until all that remains is a plain visual artifact that produces no clear statement but the garnering of attention” (2016, p. 149). Ultimately, the affective triggers “create an immediate atmosphere of suspicion . . . [which] prioritises a distinctive set of negative narratives over possible more positive others” (2016, pp. 153–154). More generally, algorithms “function as a means of directing and disciplining attention, focusing on specific points and cancelling out all other data, appearing to make it possible to translate probable associations between people or objects into actionable security decisions” (Amoore, 2009, p. 22).
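Leese’s “reductionist binaries” can be illustrated in a few lines: a fine-grained risk score is collapsed into a single flag that demands action, cancelling out the rest of the spectrum. The cutoff in the sketch below is a hypothetical value.

```python
# A minimal sketch of the "reductionist binary" Leese describes: a
# fine-grained risk score is collapsed into a single flag that demands
# action. The cutoff is a hypothetical value, not any vendor's setting.
def render_alert(risk_score: float, cutoff: float = 0.7) -> str:
    """Collapse a graded score in [0, 1] into a binary visual artifact."""
    # Everything below the cutoff (the full spectrum of contingency)
    # is cancelled out; only the bleak narrative survives as a flag.
    return "RED FLAG" if risk_score >= cutoff else "clear"

for score in (0.12, 0.69, 0.71):
    print(f"{score:.2f} -> {render_alert(score)}")
```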

The performance of data visualization can have consequences for social groups, as demonstrated by the study Sentiment in New York City, conducted by the New England Complex Systems Institute (NECSI) in 2013. The results, based on an algorithm to “automatically classify tweets as positive or negative, and use their geotags and time information to paint a map” (Bertrand, Bialik, Virdee, Gros, & Bar-Yam, 2013), mistakenly linked a high school with the lowest sentiment in Manhattan. The researchers later admitted that “the negative tweets were not primarily from the high school, but rather from a region just south of the school. In particular, a single twitter account was the source of many of the tweets classified as having negative sentiment in this area” (Kaplan, 2016; Kilgannon, 2013a, 2013b). This prompted one commentator to urge caution in the use of data visualizations:

Data visualizations might seem inert, but there are many ways they can cause harm. A visualization might bring unwanted attention to a person or a group (or a high school). A map can trivialize human experience, by reducing life to a dot or a vector. Representations of violent or tragic events can be traumatic to people that were directly or indirectly involved. We’re well trained to be aghast at a truncated Y-axis or an un-squared circle; we need to expand our criticality to include the possible social impacts of “well-made” visualizations.

(Thorp, 2016, p. 15)
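A toy reconstruction suggests how such an error can arise when tweets are classified as positive or negative and averaged by location: a single prolific account can dominate an entire map cell. The data and names below are invented; this is not NECSI’s code.

```python
# A toy reconstruction (not NECSI's code) of how the error could arise:
# tweets are scored positive (+1) or negative (-1) and averaged by map
# cell, so one prolific negative account can dominate an entire cell.
# All cell and account names here are invented.
from collections import defaultdict

tweets = (
    [("cell_school", f"acct_{i}", +1) for i in range(20)]       # 20 users, positive
    + [("cell_school", "acct_single", -1) for _ in range(200)]  # 1 account, negative
)

by_cell = defaultdict(list)
for cell, account, score in tweets:
    by_cell[cell].append(score)

# Naive per-cell average: the lone account drags the cell down.
print(sum(by_cell["cell_school"]) / len(by_cell["cell_school"]))  # ~ -0.82

# Averaging per account first reverses the verdict.
by_account = defaultdict(list)
for cell, account, score in tweets:
    by_account[account].append(score)
means = [sum(v) / len(v) for v in by_account.values()]
print(sum(means) / len(means))  # ~ +0.90
```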

In general, as Kennedy, Hill, Aiello, and Allen (2016) have argued, data visualization is not a neutral or automated process but can be a tool for political ends. Apparently “technical” conventions of visualization do ideological work “to imbue visualisations with the quality of objectivity (which brings together other qualities such as transparency, scientific-ness and facticity)” while masking the power implications of these conventions (2016, p. 716). This entanglement of power and practice is complex: perceived technical qualities of visualization are not always independent of gender-, class-, or age-related cultural assumptions (Hill, Kennedy, & Gerrard, 2016).

Data Visualization for Justice

While Leese (2016) has focused on the performance of Big Data visualization in creating suspicion and making suspects, this kind of visualization can also help to concentrate attention on criminal harms and injustices. Three examples are discussed.

One example is the use of standard statistical representations (line graphs, bar charts, and data maps) to visualize the unprecedented growth of imprisonment in the United States (Shannon & Uggen, 2013). The authors want to address the “relative lack of attention to the punishment boom” by communicating complex trends through visual narratives. Their visuals document the steep climb in incarceration rates since the early 1970s, the massive growth in correctional populations (including probation and parole), the increase in imprisonment and correctional control of African Americans in every state, and the extent of “felon disenfranchisement” in some states. Told in colorful graphics with accessible text, the narrative calls for alternative visions of penal policy choices.

A second example (see Figure 1) is an interactive visualization of gun deaths in the United States in 2010 and 2013 by Periscopic. It is a powerful representation of the tragedy of gun violence in America. Each death is animated as an arc across an axis of age. By clicking on an arc, the viewer is shown a brief summary of the victim’s name, sex, age, location, and date of death by shooting, and the normal life expectancy cut short by the death. Viewers can filter the information by sex, age group, region, or time period, as well as read summaries of “What This Data Reveals.” The 2013 visualization was based on different sources of data from the 2010 version: while the 2010 gun deaths data came from the FBI’s Uniform Crime Reports, the 2013 data was based on real-time crowd-sourced information from online news sources, collected through the anonymous Twitter user @GunDeaths. The website acknowledges that the 2013 data was less reliable and included suicides as well as homicides.

Figure 1. US gun killings in 2010.

Source: US Gun Deaths, The Stolen Years© (2013). Periscopic [www.periscopic.com], reproduced with permission.
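The “stolen years” idea underlying the Periscopic visualization can be sketched as a simple calculation: each victim’s remaining life expectancy at the age of death is the arc that is never completed. The life-expectancy figures in the sketch below are placeholder values, not Periscopic’s actuarial data.

```python
# A back-of-envelope sketch of the "stolen years" idea behind the
# Periscopic piece: each victim's remaining life expectancy at the age
# of death is the part of the arc that never gets drawn. The expectancy
# table below holds placeholder values, not Periscopic's actuarial data.
LIFE_EXPECTANCY_AT_AGE = {0: 78.7, 20: 59.6, 40: 40.9, 60: 23.3, 80: 9.1}

def stolen_years(age_at_death: int) -> float:
    """Linearly interpolate remaining life expectancy at age of death."""
    ages = sorted(LIFE_EXPECTANCY_AT_AGE)
    lo = max(a for a in ages if a <= age_at_death)
    hi = min((a for a in ages if a >= age_at_death), default=lo)
    if hi == lo:
        return LIFE_EXPECTANCY_AT_AGE[lo]
    t = (age_at_death - lo) / (hi - lo)
    return (1 - t) * LIFE_EXPECTANCY_AT_AGE[lo] + t * LIFE_EXPECTANCY_AT_AGE[hi]

victims = [("victim A", 19), ("victim B", 34), ("victim C", 67)]
total = sum(stolen_years(age) for _, age in victims)
print(f"{total:.0f} stolen years across {len(victims)} deaths")  # ~125
```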

The third example involves an ambitious project by ProPublica exposing the bias of Big Data analytics (Angwin, Larson, Mattu, & Kirchner, 2016; see Figure 2). It consists mostly of textual content and visualized case studies, with only two bar charts. By comparing the risk assessment scores generated by a computer algorithm for pairs of offenders convicted of the same crime, the ProPublica study not only demonstrates the “remarkably unreliable” nature of the scores, it points to the “significant racial disparities” generated by the software: “The formula was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants” and “White defendants were mislabeled as low risk more often than black defendants.” This effect remains even after criminal history and demographics (age and gender) are controlled for. The website provides various documents (sentencing reports with risk assessments), the full data used in the analysis, and a detailed account of how the algorithm was analyzed. It also includes a response by Northpointe, the company that created the software: the company “criticized ProPublica’s methodology and defended the accuracy of its test”; the proprietary nature of the specific calculations, however, prevented a more transparent analysis of the risk scores.

Figure 2. Comparison of risk scores between two arrestees.

Source: Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias. ProPublica. Permission being sought.
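The core of ProPublica’s disparity finding is a comparison of error rates across racial groups. A condensed sketch of that calculation follows, using invented records; ProPublica’s published methodology is considerably more detailed.

```python
# A condensed sketch of the core comparison in ProPublica's analysis
# (their published methodology is far more detailed): among defendants
# who did NOT reoffend, what share was flagged high risk, by race?
# The records below are invented for illustration only.
records = [
    # (race, flagged_high_risk, reoffended)
    ("black", True,  False), ("black", True,  True), ("black", False, False),
    ("black", True,  False), ("white", False, False), ("white", False, True),
    ("white", True,  True),  ("white", False, False),
]

def false_positive_rate(group: str) -> float:
    """Share of non-reoffenders in `group` who were flagged high risk."""
    flags = [flagged for race, flagged, reoffended in records
             if race == group and not reoffended]
    return sum(flags) / len(flags)

for group in ("black", "white"):
    print(group, f"FPR = {false_positive_rate(group):.2f}")
# With these toy records, black defendants who never reoffended are
# flagged far more often than comparable white defendants.
```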

These examples reflect the “belief that visualisations can promote data transparency and awareness . . . a belief in the benefits of information as a means of empowerment” (Kennedy et al., 2016, p. 721). Though not direct contestations of the powerful’s right to conceal (see Brown’s “Visual Criminology”), these visualizations are useful for revealing harms, injustices, and practices that are not normally visible. They are “counter-visuals” that attempt to present aggregate trends and, in the third example, also individualized portraits (Brown, 2014). Such attempts, of course, require a great deal of resources and expertise not usually available to community groups, so that differential access to resources and visualization strategies is likely to favor well-resourced agencies and sophisticated users.

Anesthetic Effects of Visual Representation

Our analysis of the visuality of Big Data so far has focused on one possible way in which affective responses are evoked—through the affect that visualization can trigger. But it is possible to argue the opposite: that data-based visuality could dampen emotionality, or at least insulate readers from the deeper level of emotionality that case-based visuality is capable of evoking. The evidence that visuality can dampen emotionality even in cases of extreme violence has been a subject of discussion among visual criminologists. Carrabine (2012) cites Susan Sontag (1977) in relation to photographic images of atrocity, which can both “transfix” and “anesthetize,” so that photography can “deaden conscience” just as easily as “arouse it.” In discussing the ethics of representation, Carrabine emphasizes that the central issue “is a concern with our relationships to each other, and that these relationships must be premised on an understanding of difference” (2012, p. 466). In fact, his analysis of documentary photography cites critics who saw the camera as aestheticizing tragedies and deadening emotions:

Leading critics like Sontag (1977), Berger (1980) and Solomon-Godeau (1991), who have been influenced by Benjamin’s thinking, have each accused the documentary photograph of favouring a detached sentimentalism that mystifies political and historical reality. The complaint is that much photojournalism and social documentary exploit the other and reinforce the differences between the superior and inferior. The charge is that to aestheticize tragedy will ultimately deaden the feelings of those who witness suffering . . . (Carrabine, 2012, p. 478).

The advent of digital photography, the Internet, and social media may have democratized the production of images and brought about increased opportunities for amateurs to “bear witness” to crimes and injustices, but even disturbing scenes from Abu Ghraib did not “provoke so much shock as recognition amongst many commentators . . . these violations of humanity scarcely trouble consciences—a view borne out by the banal response to the cruelty in so many sections of American public opinion” (Carrabine, 2012, pp. 485–486).

It can be argued that data visualization, by removing the visibility of real people or events and aestheticizing the representations in the race for attention, can achieve further distancing and deadening of conscience in situations where graphic photographic images might at least garner initial emotional impact. In this sense the ProPublica presentation of text with photographs of real people, their predictive scores, and their subsequent offending records may be read as more emotionally charged than the Periscopic visuals of gun deaths, even though details of individual victims can be read in the latter; and both presentations are arguably more powerful than the standard bar charts and heat maps, such as those used by Shannon and Uggen (2013). As Brown (2014, pp. 181–182) points out, the unpredictability of interpretation of visual images is part of the process: “Regardless of its uncertain outcomes, including voyeuristic spectacle, egregious appropriations, and silent apathy, the act of representation remains a vital form of social engagement.”

Future Research on Big Data and Visuality

Berry and Dieter (2015, p. 2) have labeled the current era of socio-technological development as “post-digital,” in the sense that the distinction between the digital and the non-digital has become “increasingly blurred.” In fact, the difference between being “online” and being “offline” is now problematic as “[c]omputation becomes experiential, spatial, and materialized in its implementation, embedded within the environment and embodied, part of the texture of life itself but also upon and even within the body” (2015, p. 3). Big Data has emerged out of this development. Berry has suggested comparing the digital in the postdigital world to an iceberg, “with only a small part ever visible to everyday life” (2015, Figure 4.1, p. 47). Underneath the visible digital traces on the surface are files, databases, algorithms, code, and “concrete” computation machinery. Data visualization is part of the postdigital “abductive aesthetic (or pattern aesthetic)” concerned with recognizing patterns that are “hidden in sets of data” (2015, p. 53, original emphasis). The danger of pattern aesthetic is that it might lead to “a form of apophenia, that is, the experience of seeing meaningful patterns or connections in random or meaningless data” (2015, p. 53; see also boyd & Crawford, 2012).

Young (2014, p. 159) has argued that images are not objects but “constitutive elements of the discursive field.” This implies that Big Data visualization is best read as an “encountered sign” and treated as an “active semiotic event whose meaning is generated through the spectator’s encounter with [it]” (2014, p. 170). This article has asked how Big Data visualization works and what affect it engenders in criminal justice. The analysis suggests that Big Data visualization is a performance that simultaneously masks the power of commercial and governmental surveillance and renders information political. The production of visuality operates in an economy of attention. In crime control enterprises, future uncertainties can be masked by affective triggers that create an atmosphere of risk and suspicion. There have also been efforts to mobilize data to expose harms and injustices and garner support for resistance. While Big Data and visuality can perform affective modulation in the race for attention, the impact of data visualization is not always predictable. The performativity of data visualization is an area that is ripe for further research in visual criminology.

Links to Digital Materials

  • Rucker, P. (Artist, producer). (2009). Proliferation. Uses animated mapping and an original score to depict the growth of the U.S. prison system. Each dot corresponds to a new prison. The source data is based on GIS work by Rose Heyer at the Prison Policy Initiative.
  • Shannon, S., and Uggen, C. (2013). Visualizing punishment. The Society Pages. This blog makes use of simple graphs, bar charts, and mapping to show the unprecedented rise in U.S. imprisonment rate (with some international and temporal comparisons).
  • Casselman, B., Conlen, M., & Fischer-Baum, R. (n.d.). Gun deaths in America. FiveThirtyEight. An interactive graphic on gun deaths in America.
  • U.S. gun deaths in 2013. Periscopic. A dramatic interactive visualization of gun deaths in America based on “near real time crowdsourced” data, showing both trends and individual cases.
  • Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias. ProPublica. An in-depth analysis of the racial bias of computer algorithms in predicting future crime.

Further Reading

  • Amoore, L., & Piotukh, V. (Eds.). (2016). Algorithmic life: Calculative devices in the age of big data. New York: Routledge.
  • Austin, J. L. (1975). How to do things with words. Oxford: Oxford University Press.
  • Bennett Moses, L., & Chan, J. (2014). Using big data for legal and law enforcement decisions: Testing the new tools. UNSW Law Journal, 37(2), 643–678.
  • Bennett Moses, L., & Chan, J. (2016). Algorithmic prediction in policing: Assumptions, evaluation, and accountability. Policing & Society.
  • Berry, D. M. (2015). The postdigital constellation. In D. M. Berry (Ed.), Postdigital aesthetics (pp. 44–57). Basingstoke, U.K.: Palgrave Macmillan.
  • Berry, D. M., & Dieter, M. (2015). Thinking postdigital aesthetics: Art, computation, and design. In D. M. Berry (Ed.), Postdigital aesthetics (pp. 1–11). Basingstoke, U.K.: Palgrave Macmillan.
  • boyd, d., & Crawford, K. (2012). Critical questions for big data: Provocations for a cultural, technological, and scholarly phenomenon. Information, Communication & Society, 15, 662–679.
  • Brown, M. (2014). Visual criminology and carceral studies: Counter-images in the carceral age. Theoretical Criminology, 18(2), 176–197.
  • Chan, J., & Bennett Moses, L. (2016a). Is big data challenging criminology? Theoretical Criminology, 20(1), 21–39.
  • Chan, J., & Bennett Moses, L. (2016b). Making sense of big data for security. British Journal of Criminology, 57(2), 299–319.
  • Kennedy, H., Hill, R. L., Aiello, G., & Allen, W. (2016). The work that visualisation conventions do. Information, Communication & Society, 19(6), 715–735.
  • Kitchin, R. (2014). Big data, new epistemologies and paradigm shifts. Big Data & Society, 1, 1–12.
  • Mayer-Schönberger, V., & Cukier, K. (2013). Big data: A revolution that will transform how we live, work, and think. London: John Murray.
  • Mirzoeff, N. (2006). On visuality. Journal of Visual Culture, 5(1), 53–79.
  • Sand, A. (2012). Visuality. Studies in Iconography, 33, 89–95.

References

  • Amoore, L. (2009). Lines of sight: On the visualization of unknown futures. Citizenship Studies, 13(1), 17–30.
  • Amoore, L., & Piotukh, V. (Eds.). (2016). Algorithmic life: Calculative devices in the age of big data. New York: Routledge.
  • Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias. ProPublica.
  • Bennett Moses, L., & Chan, J. (2014). Using big data for legal and law enforcement decisions: Testing the new tools. UNSW Law Journal, 37(2), 643–678.
  • Bennett Moses, L., & Chan, J. (2016). Algorithmic prediction in policing: Assumptions, evaluation, and accountability. Policing & Society.
  • Berry, D. M. (2015). The postdigital constellation. In D. M. Berry (Ed.), Postdigital aesthetics (pp. 44–57). Basingstoke, U.K.: Palgrave Macmillan.
  • Berry, D. M., & Dieter, M. (2015). Thinking postdigital aesthetics: Art, computation, and design. In D. M. Berry (Ed.), Postdigital aesthetics (pp. 1–11). Basingstoke, U.K.: Palgrave Macmillan.
  • Bertrand, K. Z., Bialik, M., Virdee, K., Gros, A., & Bar-Yam, Y. (2013). Sentiment in New York City: A high resolution spatial and temporal view. New England Complex Systems Institute.
  • Borkin, M. A., Vo, A. A., Isola, P., Sunkavalli, S., Oliva, A., & Pfister, H. (2013). What makes a visualization memorable? IEEE Transactions on Visualization and Computer Graphics, 19(12), 2306–2315.
  • boyd, d., & Crawford, K. (2012). Critical questions for big data: Provocations for a cultural, technological, and scholarly phenomenon. Information, Communication & Society, 15, 662–679.
  • Brown, M. (2014). Visual criminology and carceral studies: Counter-images in the carceral age. Theoretical Criminology, 18(2), 176–197.
  • Butler, J. (1990). Gender trouble: Feminism and the subversion of identity. New York: Routledge.
  • Butler, J. (1993). Bodies that matter: On the discursive limits of “sex.” New York: Routledge.
  • Carrabine, E. (2012). Just images: Aesthetics, ethics and visual criminology. British Journal of Criminology, 52, 463–489.
  • Chan, J., & Bennett Moses, L. (2016a). Is big data challenging criminology? Theoretical Criminology, 20(1), 21–39.
  • Chan, J., & Bennett Moses, L. (2016b). Making sense of big data for security. British Journal of Criminology, 57(2), 299–319.
  • Crawford, K. (2013). The hidden biases in big data. Harvard Business Review.
  • Crawford, K. (2014). The anxieties of big data. The New Inquiry.
  • Couldry, N., & Powell, A. (2014). Big data from the bottom up. Big Data & Society, 1(2), 1–5.
  • Drozhzhin, A. (2016). Big data flaws we need to address. Kaspersky Lab Daily.
  • Foster, H. (Ed.). (1988). Vision and visuality. Seattle, WA: Bay Press.
  • Franck, G. (1999). Scientific communication: A vanity fair? Science, 286(5437), 53–55.
  • Franck, G. (2002). The scientific economy of attention: A novel approach to the collective rationality of science. Scientometrics, 55(1), 3–26.
  • Great Britain, Parliament, House of Commons, Intelligence and Security Committee. (2015). Privacy and security: A modern and transparent legal framework. Intelligence and Security Committee of Parliament. London: Stationery Office.
  • Halasz, J. R. (2013). Review of the right to look: A counterhistory of visuality. Visual Studies, 28(1), 89–91.
  • Hall, K. (2000). Performativity. Journal of Linguistic Anthropology, 9(1–2), 184–187.
  • Harcourt B. E. (2007). Against prediction: Profiling, policing and punishing in an actuarial age. Chicago: University of Chicago Press.
  • Harford, T. (2014, March). Big data: Are we making a big mistake? Financial Times.
  • Hayward, K., & Presdee, M. (Eds.). (2010). Framing crime: Cultural criminology and the image. London: Routledge.
  • Hill, R. L., Kennedy, H., & Gerrard, Y. (2016). Visualizing junk: Big data visualizations and the need for feminist data studies. Journal of Communication Inquiry, 40(4), 331–350.
  • Introna, L. D. (2016). Algorithms, governance, and governmentality: On governing academic writing. Science, Technology, & Human Values, 41(1), 17–49.
  • Kaplan, I. (2016). The way we visualize big data is broken. Artsy Editorial.
  • Kennedy, H., Hill, R. L., Aiello, G., & Allen, W. (2016). The work that visualisation conventions do. Information, Communication & Society, 19(6), 715–735.
  • Kilgannon, C. (2013a). An elite public school is the saddest spot in Manhattan, a study says. The New York Times, September 25.
  • Kilgannon, C. (2013b). A high school is actually not Manhattan’s saddest spot, a researcher says. The New York Times, October 14.
  • Kitchin, R. (2014). Big data, new epistemologies and paradigm shifts. Big Data & Society, 1, 1–12.
  • Leese, M. (2016). “Seeing futures”: Politics of visuality and affect. In L. Amoore & V. Piotukh (Eds.), Algorithmic life: Calculative devices in the age of big data (pp. 143–164). New York: Routledge.
  • Mackenzie, A. (2005). The performativity of code: Software and cultures of circulation. Theory, Culture & Society, 22(2), 71–92.
  • Manyika, J., Chui, M., Brown, B., Bughin, J., Dobbs, R., Roxburgh, C., et al. (2011). Big data: The next frontier for innovation, competition, and productivity. McKinsey Global Institute.
  • Marcus, G., & Davis, E. (2014). Eight (no, nine!) problems with big data. The New York Times, April 6.
  • Marty, R. (2009). Applied security visualization. Upper Saddle River, NJ: Addison-Wesley.
  • Matzner, T. (2016). Beyond data as representation: The performativity of big data in surveillance. Surveillance & Society, 14(2), 197–210.
  • Mayer-Schönberger, V., & Cukier, K. (2013). Big data: A revolution that will transform how we live, work, and think. London: John Murray.
  • Mirzoeff, N. (2006). On visuality. Journal of Visual Culture, 5(1), 53–79.
  • Mirzoeff, N. (2011). The right to look: A counterhistory of visuality. Durham, NC: Duke University Press.
  • Morozov, E. (2013). Your social networking credit score. Slate.
  • Olesker, A. (2012). White paper: Big data solutions for law enforcement. CTOlabs.com.
  • Paglen, T. (2016). Invisible images (your pictures are looking at you). The New Inquiry.
  • Podesta, J., Pritzker, P., Moniz, E. J., Holdren, J., & Zients, J. (2014). Big data: Seizing opportunities, preserving values. Executive Office of the President.
  • Sand, A. (2012). Visuality. Studies in Iconography, 33, 89–95.
  • Shannon, S., & Uggen, C. (2013). Visualizing punishment. The Society Pages.
  • Smith, G. (2014). Opening the black box: The work of watching. Abingdon, U.K.: Routledge.
  • Sontag, S. (1977). On photography. London: Penguin.
  • Staniforth, A., & Akhgar, B. (2015). Harnessing the power of big data to counter international terrorism. In B. Akhgar, G. B. Saathoff, H. Arabnia, R. Hill, A. Staniforth, & P. Bayerl (Eds.), Application of big data for national security: A practitioner’s guide to emerging technologies (pp. 23–38). Oxford: Elsevier.
  • Thorp, J. (2016). Turning data around. Office for Creative Research Journal, 2, 11–24.
  • Tonelli, M. (2016). Your criminal FICO score (Master’s thesis). Security Studies, Naval Postgraduate School.
  • Wyllie, D. (2013). How “big data” is helping law enforcement. PoliceOne.com.
  • Young, A. (2014). From object to encounter: Aesthetic, politics, and visual criminology. Theoretical Criminology, 18(2), 159–175.
  • Zoldan, A. (2013). More data, more problems: Is big data always right? Wired Magazine.