Internet-based services that build on automated algorithmic selection processes, for example search engines, computational advertising, and recommender systems, are booming, and the platform companies that provide such services are among the most valuable corporations worldwide. Algorithms on and beyond the Internet are increasingly influencing, aiding, or replacing human decision-making in many life domains. Their far-reaching, multifaceted economic and social impact, which results from this governance by algorithms, is widely acknowledged. However, suitable policy reactions, that is, the governance of algorithms, remain the subject of controversy in academia, politics, industry, and civil society. This governance by and of algorithms is to be understood in the wider context of current technical and societal change, and in connection with other emerging trends. In particular, the expanding algorithmization of life domains is closely interrelated with, and dependent on, growing datafication and big data on the one hand, and rising automation and artificial intelligence in modern, digitized societies on the other. Consequently, the assessments and debates of these central developmental trends in digitized societies overlap extensively. Research on the governance by and of algorithms is highly interdisciplinary. Communication studies contributes to the formation of so-called “critical algorithm studies” with its wide set of subfields and approaches and by applying qualitative and quantitative methods. Its contributions focus on the impact of algorithmic systems on traditional media, journalism, and the public sphere, and also cover effect analyses and risk assessments of algorithmic-selection applications in many domains of everyday life. The latter includes the whole range of public and private governance options to counter or reduce these risks or to safeguard ethical standards and human rights, including communication rights in a digital age.
Michael Latzer and Natascha Just
Automated journalism—the use of algorithms to translate data into narrative news content—is enabling all manner of outlets to increase efficiency while scaling up their reporting in areas as diverse as financial earnings and professional baseball. With these technological advancements, however, come serious risks. Algorithms are not good at interpreting or contextualizing complex information, and they are subject to biases and errors that could ultimately produce content that is misleading or false, even libelous. It is imperative, then, to examine how libel law might apply to automated news content that harms the reputation of a person or an organization. When that examination is conducted from the perspective of U.S. law, chosen because of its uniquely expansive constitutional protections in the area of libel, it appears that the First Amendment would cover algorithmic speech, meaning that the First Amendment’s full supply of tools, principles, and presumptions would apply in determining whether particular automated news content is protected. In the area of libel, the most significant issues arise under the plaintiff’s burden to prove that the libelous content was published by the defendant (with a focus on whether automated journalism would qualify for the immunity available to providers of interactive computer services) and that the content was published through the defendant’s fault (with a focus on whether an algorithm could act with the actual malice or negligence usually required to satisfy this inquiry). A further significant issue arises under the opinion defense, which provides broad constitutional protection for statements of opinion (with a focus on whether an algorithm is itself capable of having the beliefs or ideas that generally inform an opinion).
The digital is now an integral part of everyday cultural practices globally. This ubiquity makes studying digital culture both more complex and more divergent. Much of the literature on digital culture argues that it is increasingly informed by playful and ludified characteristics. Within this phenomenon, there has been a rise of innovative and playful methods to explore identity politics and place-making in an age of datafication. At the core of the interdisciplinary debates underpinning the understanding of digital culture are the ways in which STEM (Science, Technology, Engineering and Mathematics) and HASS (Humanities, Arts and Social Science) approaches have played out in, and through, algorithms and datafication (e.g., the rise of small data [ethnography] to counteract big data). As digital culture becomes all-encompassing, data and its politics become central. Understanding digital culture requires acknowledging that datafication and algorithmic cultures are now commonplace—that is, that data penetrate, invade, and analyze our daily lives, causing anxiety and being seen as potentially inaccurate statistical captures. Alongside the use of big data, the quantified self (QS) movement is amplifying the need to think more about how our data stories are being told and who is doing the telling. Tensions and paradoxes ensue: power and powerlessness; the tactical and the strategic; identity and anonymity; statistics and practices; and big data and little data. The ubiquity of digital culture is explored here through the lens of play and playful resistance. In the face of algorithms and datafication, the contestation around playing with data takes on important features. In sum, play becomes a set of methods or modes of critique in the service of agency and autonomy. Playfully acting against data as a form of resistance is a key method used by artists, designers, and creative practitioners working in the digital realm, and these practices are not easily defined.