1-2 of 2 Results for:

  • Keywords: algorithms
  • Media and Communication Policy

Article

Internet-based services that build on automated algorithmic selection processes, for example search engines, computational advertising, and recommender systems, are booming, and platform companies that provide such services are among the most valuable corporations worldwide. Algorithms on and beyond the Internet are increasingly influencing, aiding, or replacing human decision-making in many life domains. Their far-reaching, multifaceted economic and social impact, which results from this governance by algorithms, is widely acknowledged. However, suitable policy reactions, that is, the governance of algorithms, are the subject of controversy in academia, politics, industry, and civil society. This governance by and of algorithms is to be understood in the wider context of current technical and societal change, and in connection with other emerging trends. In particular, the expanding algorithmization of life domains is closely interrelated with and dependent on growing datafication and big data on the one hand, and on rising automation and artificial intelligence in modern, digitized societies on the other. Consequently, the assessments and debates of these central developmental trends in digitized societies overlap extensively. Research on the governance by and of algorithms is highly interdisciplinary. Communication studies, with its wide set of subfields and approaches and its qualitative and quantitative methods, contributes to the formation of so-called “critical algorithms studies.” Its contributions focus on the impact of algorithmic systems on traditional media, journalism, and the public sphere, and also cover effect analyses and risk assessments of algorithmic-selection applications in many domains of everyday life. This research further covers the whole range of public and private governance options to counter or reduce these risks or to safeguard ethical standards and human rights, including communication rights in a digital age.

Article

Automated journalism—the use of algorithms to translate data into narrative news content—is enabling all manner of outlets to increase efficiency while scaling up their reporting in areas as diverse as financial earnings and professional baseball. With these technological advancements, however, come serious risks. Algorithms are not good at interpreting or contextualizing complex information, and they are subject to biases and errors that ultimately could produce content that is misleading or false, even libelous. It is imperative, then, to examine how libel law might apply to automated news content that harms the reputation of a person or an organization. Conducted from the perspective of U.S. law, chosen for its uniquely expansive constitutional protections in the area of libel, that examination suggests that the First Amendment would cover algorithmic speech—meaning that the First Amendment’s full supply of tools, principles, and presumptions would apply to determine whether particular automated news content would be protected. In the area of libel, the most significant issues arise under the plaintiff’s burden to prove that the libelous content was published by the defendant (with a focus on whether automated journalism would qualify for the immunity available to providers of interactive computer services) and that the content was published through the defendant’s fault (with a focus on whether an algorithm could act with the actual malice or negligence usually required to satisfy this inquiry). There is also a significant issue under the opinion defense, which provides broad constitutional protection for statements of opinion (with a focus on whether an algorithm itself is capable of having the beliefs or ideas that generally inform an opinion).