Internet-based services that build on automated algorithmic selection processes, for example search engines, computational advertising, and recommender systems, are booming, and the platform companies that provide such services are among the most valuable corporations worldwide. Algorithms on and beyond the Internet are increasingly influencing, aiding, or replacing human decision-making in many life domains. Their far-reaching, multifaceted economic and social impact, which results from this governance by algorithms, is widely acknowledged. However, suitable policy reactions, that is, the governance of algorithms, are the subject of controversy in academia, politics, industry, and civil society. This governance by and of algorithms is to be understood in the wider context of current technical and societal change, and in connection with other emerging trends. In particular, the expanding algorithmization of life domains is closely interrelated with and dependent on growing datafication and big data on the one hand, and on rising automation and artificial intelligence in modern, digitized societies on the other. Consequently, the assessments and debates of these central developmental trends in digitized societies overlap extensively. Research on the governance by and of algorithms is highly interdisciplinary. Communication studies, with its wide set of subfields and approaches and its qualitative and quantitative methods, contributes to the formation of so-called “critical algorithm studies.” Its contributions focus on the impact of algorithmic systems on traditional media, journalism, and the public sphere, and also cover effect analyses and risk assessments of algorithmic-selection applications in many domains of everyday life. The latter includes the whole range of public and private governance options to counter or reduce these risks or to safeguard ethical standards and human rights, including communication rights in a digital age.
Michael Latzer and Natascha Just
Algorithms today influence, to some extent, nearly every aspect of journalism, from the initial stages of news production to the latter stages of news consumption. While they may be seen as technical objects with certain material characteristics, algorithms are also social constructions that carry multiple meanings. Algorithms are neither valueless nor do they exist in isolation; they are part of algorithmic assemblages that include myriad actors, actants, activities, and audiences. As such, they are imbued with logics that are only sometimes reflective of journalism’s. Algorithms have played an active role in a broader quantitative turn within journalism that began in the 1970s but rapidly accelerated after the turn of the century. They are already used to produce hundreds of thousands of articles per year through automated journalism and are employed throughout the many stages of human-driven newswork. Additionally, algorithms enable audience analytics, which are used to quantify audiences into measures that are increasingly influencing news production through the abstractions they promote. Traditional theoretical models of newswork like gatekeeping are thus being challenged by the proliferation of algorithms. A trend toward algorithmically enabled personalization is also leading to the development of responsive distribution and curated flows. This is resulting in a marked shift from journalism’s traditional focus on shared importance and toward highly individualized experiences, which has implications for the formation of publics and media effects. In particular, the proliferation of algorithms has been linked to the development of filter bubbles and evolution of algorithmic reality construction that can be gamed to spread misinformation and disinformation. Scholars have also observed important challenges associated with the study of algorithms and in particular the opaque nature of key algorithms that govern a range of news-related processes. 
The combination of a lack of transparency with the complexity and adaptability of algorithmic mechanisms and systems makes it difficult to promote algorithmic accountability and to evaluate them vis-à-vis ethical models. There is, currently, no widely accepted code of ethics for the use of algorithms in journalism. Finally, while the body of literature at the intersection of algorithms and journalism has grown rapidly in recent years, it is still in its infancy. As such, there are still ample opportunities for typologizing algorithmic phenomena, tracing the lineage of algorithmic processes and the roles of digital intermediaries within systems, and empirically evaluating the prevalence of particular kinds of algorithms in journalistic spaces and the effects they exert on newswork.
Internet and telecommunications, ubiquitous sensing devices, and advances in data storage and analytic capacities have heralded the age of Big Data, where the volume, velocity, and variety of data not only promise new opportunities for the harvesting of information, but also threaten to overload existing resources for making sense of this information. The use of Big Data technology for criminal justice and crime control is a relatively new development. Big Data technology has overlapped with criminology in two main areas: (a) Big Data is used as a type of data in criminological research, and (b) Big Data analytics is employed as a predictive tool to guide criminal justice decisions and strategies. Much of the debate about Big Data in criminology is concerned with legitimacy, including privacy, accountability, transparency, and fairness. Big Data is often made accessible through data visualization. Big Data visualization is a performance that simultaneously masks the power of commercial and governmental surveillance and renders information political. The production of visuality operates in an economy of attention. In crime control enterprises, future uncertainties can be masked by affective triggers that create an atmosphere of risk and suspicion. There have also been efforts to mobilize data to expose harms and injustices and garner support for resistance. While Big Data and visuality can perform affective modulation in the race for attention, the impact of data visualization is not always predictable. By removing the visibility of real people or events and by aestheticizing representations of tragedies, data visualization may achieve further distancing and deadening of conscience in situations where graphic photographic images might at least garner initial emotional impact.
Allison J. Steinke and Valerie Belair-Gagnon
In the early 2000s, along with the emergence of social media in journalism, mobile chat applications began to gain significant footing in journalistic work. Interdisciplinary research, particularly in journalism studies, has started to look at apps in journalistic work from producer and user perspectives. Still in its infancy, scholarly research on apps and journalistic work reflects larger trends explored in digital journalism studies, while expanding the understanding of mobile news.
Erica K. Grant and Travis S. Humble
Adiabatic quantum computing (AQC) is a model of computation that uses quantum mechanical processes operating under adiabatic conditions. As a form of universal quantum computation, AQC employs the principles of superposition, tunneling, and entanglement that manifest in quantum physical systems. The AQC model of quantum computing is distinguished by the use of dynamical evolution that is slow with respect to the time and energy scales of the underlying physical systems. This adiabatic condition enforces the promise that the quantum computational state will remain well-defined and controllable, thus enabling the development of new algorithmic approaches. Several notable algorithms developed within the AQC model include methods for solving unstructured search and combinatorial optimization problems. In an idealized setting, the asymptotic complexity analyses of these algorithms indicate that computational speed-ups may be possible relative to state-of-the-art conventional methods. However, the presence of non-ideal conditions, including non-adiabatic dynamics, residual thermal excitations, and physical noise, complicates the assessment of the potential computational performance. A relaxation of the adiabatic condition is captured in the complementary computational heuristic of quantum annealing, which accommodates physical systems operating at finite temperature and in open environments. While quantum annealing (QA) provides a more accurate model for the behavior of actual quantum physical systems, the possibility of non-adiabatic effects obscures a clear complexity separation from conventional computing. A series of technological advances in the control of quantum physical systems have enabled experimental demonstrations of AQC and QA. Prominent examples include demonstrations using superconducting electronics, which encode quantum information in the magnetic flux induced by a weak current operating at cryogenic temperatures.
A family of devices developed specifically for unconstrained optimization problems has been applied to solve problems in domains including logistics, finance, materials science, machine learning, and numerical analysis. An accompanying infrastructure has also developed to support these experimental demonstrations and to enable access by a broader community of users. Although AQC is most commonly implemented in superconducting technologies, alternative approaches include optically trapped neutral atoms and ion-trap systems. The significant progress in the understanding of AQC has revealed several open topics that continue to motivate research into this model of quantum computation. Foremost is the development of methods for fault-tolerant operation that will ensure the scalability of AQC for solving large-scale problems. In addition, unequivocal experimental demonstrations that differentiate the computational power of AQC and its variants from conventional computing approaches are needed. This will also require advances in the fabrication and control of quantum physical systems under the adiabatic restrictions.
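The adiabatic condition described above can be illustrated numerically. The following is a minimal single-qubit sketch, not a model of any particular device: the system starts in the ground state of a driver Hamiltonian and is slowly interpolated toward a problem Hamiltonian whose ground state encodes the answer. The Hamiltonians, linear schedule, and evolution times are illustrative assumptions.

```python
import numpy as np

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

H_driver = -X   # ground state |+> is easy to prepare
H_problem = -Z  # ground state |0> encodes the "answer"

def anneal(T=50.0, steps=5000):
    """Integrate the Schrodinger equation along the linear schedule
    H(s) = (1 - s) * H_driver + s * H_problem, with s = t / T."""
    dt = T / steps
    psi = np.array([1, 1], dtype=complex) / np.sqrt(2)  # ground state of H_driver
    for k in range(steps):
        s = (k + 0.5) / steps
        H = (1 - s) * H_driver + s * H_problem
        vals, vecs = np.linalg.eigh(H)
        # Exact exponentiation of the piecewise-constant Hamiltonian
        U = vecs @ np.diag(np.exp(-1j * vals * dt)) @ vecs.conj().T
        psi = U @ psi
    return psi

slow = anneal(T=50.0)  # slow with respect to the minimum spectral gap
fast = anneal(T=0.5)   # violates the adiabatic condition
# Overlap with the target ground state |0> of H_problem
print(abs(slow[0]) ** 2)  # near 1: the state tracks the instantaneous ground state
print(abs(fast[0]) ** 2)  # noticeably smaller: diabatic transitions excite the system
```

The slow schedule returns the correct ground state with high probability; shrinking the total time T degrades the success probability, which is the trade-off that quantum annealing heuristics accept in exchange for finite runtimes.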
Daniel Currie Hall
The fundamental idea underlying the use of distinctive features in phonology is the proposition that the same phonetic properties that distinguish one phoneme from another also play a crucial role in accounting for phonological patterns. Phonological rules and constraints apply to natural classes of segments, expressed in terms of features, and involve mechanisms, such as spreading or agreement, that copy distinctive features from one segment to another. Contrastive specification builds on this by taking seriously the idea that phonological features are distinctive features. Many phonological patterns appear to be sensitive only to properties that crucially distinguish one phoneme from another, ignoring the same properties when they are redundant or predictable. For example, processes of voicing assimilation in many languages apply only to the class of obstruents, where voicing distinguishes phonemic pairs such as /t/ and /d/, and ignore sonorant consonants and vowels, which are predictably voiced. In theories of contrastive specification, features that do not serve to mark phonemic contrasts (such as [+voice] on sonorants) are omitted from underlying representations. Their phonological inertness thus follows straightforwardly from the fact that they are not present in the phonological system at the point at which the pattern applies, though the redundant features may subsequently be filled in either before or during phonetic implementation. In order to implement a theory of contrastive specification, it is necessary to have a means of determining which features are contrastive (and should thus be specified) and which ones are redundant (and should thus be omitted). A traditional and intuitive method involves looking for minimal pairs of phonemes: if [±voice] is the only property that can distinguish /t/ from /d/, then it must be specified on them. 
This approach, however, often identifies too few contrastive features to distinguish the phonemes of an inventory, particularly when the phonetic space is sparsely populated. For example, in the common three-vowel inventory /i a u/, there is more than one property that could distinguish any two vowels: /i/ differs from /a/ in both place (front versus back or central) and height (high versus low), /a/ from /u/ in both height and rounding, and /u/ from /i/ in both rounding and place. Because pairwise comparison cannot identify any features as contrastive in such cases, much recent work in contrastive specification is instead based on a hierarchical sequencing of features, with specifications assigned by dividing the full inventory into successively smaller subsets. For example, if the inventory /i a u/ is first divided according to height, then /a/ is fully distinguished from the other two vowels by virtue of being low, and the second feature, either place or rounding, is contrastive only on the high vowels. Unlike pairwise comparison, this approach produces specifications that fully distinguish the members of the underlying inventory, while at the same time allowing for the possibility of cross-linguistic variation in the specifications assigned to similar inventories.
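The hierarchical division procedure sketched above can be made concrete in a few lines. The following is an illustrative implementation in the spirit of successive division: the feature names and the full phonetic values assigned to /i a u/ are assumptions for the example, not a claim about any particular analysis.

```python
# Full (redundant) phonetic values for a three-vowel inventory; only the
# contrastive subset of these values will end up in the specifications.
PHONETIC = {
    'i': {'low': '-', 'round': '-'},
    'a': {'low': '+', 'round': '-'},
    'u': {'low': '-', 'round': '+'},
}

def specify(segments, hierarchy, phonetic=PHONETIC):
    """Divide the inventory by each feature in hierarchical order; a feature
    is specified only on subsets whose members it actually distinguishes."""
    specs = {s: {} for s in segments}

    def divide(subset, features):
        if len(subset) <= 1 or not features:
            return
        feat, rest = features[0], features[1:]
        groups = {}
        for s in subset:
            groups.setdefault(phonetic[s][feat], []).append(s)
        if len(groups) > 1:          # feature is contrastive in this subset
            for value, members in groups.items():
                for s in members:
                    specs[s][feat] = value
                divide(members, rest)
        else:                        # feature is redundant here; skip it
            divide(subset, rest)

    divide(list(segments), hierarchy)
    return specs

print(specify(['i', 'a', 'u'], ['low', 'round']))
# /a/ is specified only as [+low]; [round] is contrastive only on the
# high vowels, so it never appears on /a/
```

Reordering the hierarchy (e.g., dividing by rounding before height) yields different specifications for the same inventory, which is how this approach models cross-linguistic variation.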
The digital is now an integral part of everyday cultural practices globally. This ubiquity makes studying digital culture both more complex and divergent. Much of the literature on digital culture argues that it is increasingly informed by playful and ludified characteristics. Within this phenomenon, there has been a rise of innovative and playful methods to explore identity politics and place-making in an age of datafication. At the core of the interdisciplinary debates underpinning the understanding of digital culture are the ways in which STEM (Science, Technology, Engineering and Mathematics) and HASS (Humanities, Arts and Social Science) approaches have played out in, and through, algorithms and datafication (e.g., the rise of small data [ethnography] to counteract big data). As digital culture becomes all-encompassing, data and its politics become central. To understand digital culture requires us to acknowledge that datafication and algorithmic cultures are now commonplace—that is, data penetrate, invade, and analyze our daily lives, causing anxiety and being seen as potentially inaccurate statistical captures. Alongside the use of big data, the quantified self (QS) movement is amplifying the need to think more about how our data stories are being told and who is doing the telling. Tensions and paradoxes ensue—power and powerlessness; tactical and strategic; identity and anonymity; statistics and practices; and big data and little data. The ubiquity of digital culture is explored through the lens of play and playful resistance. In the face of algorithms and datafication, the contestation around playing with data takes on important features. In sum, play becomes a series of methods or modes of critique for agency and autonomy. Playfully acting against data as a form of resistance is a key method used by artists, designers, and creative practitioners working in the digital realm, and these methods are not easily defined.
Deborah J. Street and Rosalie Viney
Discrete choice experiments are a popular stated preference tool in health economics and have been used to address policy questions, establish consumer preferences for health and healthcare, and value health states, among other applications. They are particularly useful when revealed preference data are not available. Most commonly in choice experiments respondents are presented with a situation in which a choice must be made and with a set of possible options. The options are described by a number of attributes, each of which takes a particular level for each option. The set of possible options is called a “choice set,” and a set of choice sets comprises the choice experiment. The attributes and levels are chosen by the analyst to allow modeling of the underlying preferences of respondents. Respondents are assumed to make utility-maximizing decisions, and the goal of the choice experiment is to estimate how the attribute levels affect the utility of the individual. Utility is assumed to have a systematic component (related to the attributes and levels) and a random component (which may relate to unobserved determinants of utility, individual characteristics, or random variation in choices), and an assumption must be made about the distribution of the random component. The structure of the set of choice sets, from the universe of possible choice sets represented by the attributes and levels, that is shown to respondents determines which models can be fitted to the observed choice data and how accurately the effect of the attribute levels can be estimated. Important structural issues include the number of options in each choice set and whether or not options in the same choice set have common attribute levels.
Two broad approaches exist to constructing the set of choice sets that make up a discrete choice experiment (DCE): theoretical and algorithmic. No consensus exists about which approach consistently delivers better designs, although simulation studies and in-field comparisons of designs constructed by both approaches exist.
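The random-utility model underlying these designs can be illustrated with a small simulation. The sketch below assumes the standard multinomial logit setup (Gumbel-distributed random components); the two attributes, their part-worths, and the example choice set are all hypothetical values chosen for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical part-worths: one utility unit lost per 10 units of cost,
# and per 20 units of waiting time (negative signs: more is worse)
beta = np.array([-0.1, -0.05])

def simulate_choices(choice_sets, n_respondents=1000):
    """Each simulated respondent picks the utility-maximizing option in
    each choice set; i.i.d. Gumbel errors on top of the systematic
    utility yield multinomial-logit choice probabilities."""
    shares = []
    for options in choice_sets:          # options: (n_alternatives, n_attributes)
        v = options @ beta               # systematic component of utility
        eps = rng.gumbel(size=(n_respondents, len(options)))  # random component
        choice = np.argmax(v + eps, axis=1)
        shares.append(np.bincount(choice, minlength=len(options)) / n_respondents)
    return shares

# One choice set: option A (cost 10, wait 5) versus option B (cost 20, wait 2)
choice_set = np.array([[10.0, 5.0],
                       [20.0, 2.0]])
shares = simulate_choices([choice_set])[0]

# Closed-form logit probabilities for comparison: P(j) = exp(v_j) / sum_k exp(v_k)
v = choice_set @ beta
p_logit = np.exp(v) / np.exp(v).sum()
print(shares, p_logit)  # simulated shares approximate the logit probabilities
```

In an actual DCE the direction of inference is reversed: the analyst observes the choices and estimates beta, and the design of the choice sets determines how precisely that estimation can be done.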
María Laura Bettolli
Global climate models (GCM) are fundamental tools for weather forecasting and climate predictions at different time scales, from intraseasonal prediction to climate change projections. Their design allows GCMs to simulate the global climate adequately, but they are not able to skillfully simulate local/regional climates. Consequently, downscaling and bias correction methods are increasingly needed and applied for generating useful local and regional climate information from the coarse GCM resolution. Empirical-statistical downscaling (ESD) methods generate climate information at the local scale, or with a greater resolution than that achieved by GCMs, by means of empirical or statistical relationships between large-scale atmospheric variables and the local observed climate. As a counterpart approach, dynamical downscaling is based on regional climate models that simulate regional climate processes with a greater spatial resolution, using GCM fields as initial or boundary conditions. Various ESD methods can be classified according to different criteria, depending on their approach, implementation, and application. In general terms, ESD methods can be categorized into subgroups that include transfer functions or regression models (either linear or nonlinear), weather generators, and weather typing methods and analogs. Although these methods can be grouped into different categories, they can also be combined to generate more sophisticated downscaling methods. In the last group, weather typing and analogs, the methods relate the occurrence of particular weather classes to local and regional weather conditions. In particular, the analog method is based on finding atmospheric states in the historical record that are similar to the atmospheric state on a given target day. Then, the corresponding historical local weather conditions are used to estimate local weather conditions on the target day.
The analog method is a relatively simple technique that has been extensively used as a benchmark method in statistical downscaling applications. Easy to construct and applicable to any predictand variable, it has been shown to perform as well as other, more sophisticated methods. These attributes have inspired its application in diverse studies around the world that explore its ability to simulate different characteristics of regional climates.
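The core of the analog method (find the most similar historical large-scale state, then reuse its observed local weather) fits in a few lines. The sketch below uses entirely synthetic data: the predictor fields, the linear link to the local variable, and the Euclidean similarity measure are illustrative assumptions, not a recipe from any operational system.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "historical record": large-scale predictor fields (think of a
# flattened pressure or geopotential-height pattern) paired with local
# observations. All values here are made up for the illustration.
n_days, n_gridpoints = 2000, 50
large_scale = rng.normal(size=(n_days, n_gridpoints))
w = rng.normal(size=n_gridpoints)
# Local temperature depends (noisily) on the large-scale state
local_obs = large_scale @ w + rng.normal(scale=0.5, size=n_days)

def analog_downscale(target_field, k=1):
    """Find the k most similar historical atmospheric states (Euclidean
    distance over grid points) and average their observed local weather."""
    dist = np.linalg.norm(large_scale - target_field, axis=1)
    analogs = np.argsort(dist)[:k]
    return local_obs[analogs].mean()

# Downscale a "target day" whose large-scale state closely resembles day 100
target = large_scale[100] + rng.normal(scale=0.05, size=n_gridpoints)
estimate = analog_downscale(target)
print(estimate, local_obs[100])  # the best analog is day 100 itself
```

Averaging over k > 1 analogs trades sharpness for robustness; real applications also differ in the choice of predictor variables, domain, and similarity measure, which is where much of the method's skill is decided.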
Jane Chandlee and Jeffrey Heinz
Computational phonology studies the nature of the computations necessary and sufficient for characterizing phonological knowledge. As a field it is informed by the theories of computation and phonology. The computational nature of phonological knowledge is important because at a fundamental level it is about the psychological nature of memory as it pertains to phonological knowledge. Different types of phonological knowledge can be characterized as computational problems, and the solutions to these problems reveal their computational nature. In contrast to syntactic knowledge, there is clear evidence that phonological knowledge is computationally bounded to the so-called regular classes of sets and relations. These classes have multiple mathematical characterizations in terms of logic, automata, and algebra with significant implications for the nature of memory. In fact, there is evidence that phonological knowledge is bounded by particular subregular classes, with more restrictive logical, automata-theoretic, and algebraic characterizations, and thus by weaker models of memory.
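One of the subregular classes mentioned above can be illustrated directly: a strictly 2-local (SL2) grammar decides well-formedness by scanning adjacent pairs of symbols, so only one previous symbol ever needs to be held in memory. The toy constraint below (banning voiced obstruents immediately after voiceless ones) and its segment inventory are illustrative assumptions.

```python
# A strictly 2-local grammar is just a finite set of forbidden 2-factors
# (bigrams); a word is well-formed iff none of them occurs in it.
FORBIDDEN_BIGRAMS = {('t', 'd'), ('p', 'b'), ('k', 'g')}

def well_formed(word, forbidden=FORBIDDEN_BIGRAMS):
    """Accept iff no forbidden bigram occurs; the scan needs only a
    one-symbol memory window, the hallmark of strict locality."""
    return all((a, b) not in forbidden for a, b in zip(word, word[1:]))

print(well_formed('patka'))   # True: no banned cluster
print(well_formed('patda'))   # False: contains the 2-factor ('t', 'd')
```

Every SL2 grammar describes a regular set, but the converse fails: patterns requiring unbounded counting or arbitrary long-distance bookkeeping fall outside this class, which is why locating phonological patterns within such subregular classes constrains hypotheses about the memory they require.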