Article
Governance by and of Algorithms on the Internet: Impact and Consequences
Michael Latzer and Natascha Just
Internet-based services that build on automated algorithmic selection, for example search engines, computational advertising, and recommender systems, are booming, and the platform companies that provide such services are among the most valuable corporations worldwide. Algorithms on and beyond the Internet increasingly influence, aid, or replace human decision-making in many life domains. Their far-reaching, multifaceted economic and social impact, which results from this governance by algorithms, is widely acknowledged. However, suitable policy reactions, that is, the governance of algorithms, remain the subject of controversy in academia, politics, industry, and civil society. This governance by and of algorithms is to be understood in the wider context of current technical and societal change and in connection with other emerging trends. In particular, the expanding algorithmization of life domains is closely interrelated with and dependent on growing datafication and big data on the one hand, and rising automation and artificial intelligence in modern, digitized societies on the other. Consequently, the assessments and debates of these central developmental trends in digitized societies overlap extensively.
Research on the governance by and of algorithms is highly interdisciplinary. Communication studies contributes to the formation of so-called “critical algorithm studies” through its wide set of subfields and approaches and by applying qualitative and quantitative methods. Its contributions focus on the impact of algorithmic systems on traditional media, journalism, and the public sphere, and also cover effect analyses and risk assessments of algorithmic-selection applications in many domains of everyday life. The latter also covers the whole range of public and private governance options to counter or reduce these risks and to safeguard ethical standards and human rights, including communication rights in a digital age.
Article
Algorithms and Journalism
Rodrigo Zamith
Algorithms today influence, to some extent, nearly every aspect of journalism, from the initial stages of news production to the latter stages of news consumption. While they may be seen as technical objects with certain material characteristics, algorithms are also social constructions that carry multiple meanings. Algorithms are neither valueless nor do they exist in isolation; they are part of algorithmic assemblages that include myriad actors, actants, activities, and audiences. As such, they are imbued with logics that are only sometimes reflective of journalism’s.
Algorithms have played an active role in a broader quantitative turn within journalism that began in the 1970s but rapidly accelerated after the turn of the century. They are already used to produce hundreds of thousands of articles per year through automated journalism and are employed throughout the many stages of human-driven newswork. Additionally, algorithms enable audience analytics, which are used to quantify audiences into measures that are increasingly influencing news production through the abstractions they promote. Traditional theoretical models of newswork like gatekeeping are thus being challenged by the proliferation of algorithms.
A trend toward algorithmically enabled personalization is also leading to the development of responsive distribution and curated flows. This is resulting in a marked shift away from journalism’s traditional focus on shared importance and toward highly individualized experiences, which has implications for the formation of publics and for media effects. In particular, the proliferation of algorithms has been linked to the development of filter bubbles and the evolution of algorithmic reality construction that can be gamed to spread misinformation and disinformation.
Scholars have also observed important challenges associated with the study of algorithms and in particular the opaque nature of key algorithms that govern a range of news-related processes. The combination of a lack of transparency with the complexity and adaptability of algorithmic mechanisms and systems makes it difficult to promote algorithmic accountability and to evaluate them vis-à-vis ethical models. There is, currently, no widely accepted code of ethics for the use of algorithms in journalism.
Finally, while the body of literature at the intersection of algorithms and journalism has grown rapidly in recent years, it is still in its infancy. As such, there are still ample opportunities for typologizing algorithmic phenomena, tracing the lineage of algorithmic processes and the roles of digital intermediaries within systems, and empirically evaluating the prevalence of particular kinds of algorithms in journalistic spaces and the effects they exert on newswork.
Article
Global Social Media Ethics and the Responsibility of Journalism
David A. Craig
Social media have amplified and accelerated the ethical challenges that communicators, professional and otherwise, face worldwide. The work of ethical journalism, with a priority of truthful communication, offers a paradigm case for examining the broader challenges in the global social media network. The evolution of digital technologies and the attendant expansion of the communication network pose ethical difficulties for journalists connected with increased speed and volume of information, a diminished place in the network, and the cross-border nature of information flow. These challenges are exacerbated by intentional manipulation of social media, human-run or automated, in many countries, including internal suppression by authoritarian regimes and foreign influence operations to spread misinformation. In addition, structural characteristics of social media platforms’ filtering and recommending algorithms pose ethical challenges for journalism and its role in fostering public discourse on social and political issues, although a number of studies have called aspects of the “filter bubble” hypothesis into question.
Research in multiple countries, mostly in North America and Europe, has examined social media practices in journalism, including two issues central to social media ethics—verification and transparency—but ethical implications have seldom been discussed explicitly in the context of ethical theory. Since the 1980s and 1990s, scholarship focused on normative theorizing in relation to journalism has matured and become more multicultural and global. Scholars have articulated a number of ethical frameworks that could deepen analysis of the challenges of social media in the practice of journalism. However, the explicit implications of these frameworks for social media have largely gone unaddressed. A large topic of discussion in media ethics theory has been the possibility of universal or common principles globally, including a broadening of discussion of moral universals or common ground in media ethics beyond Western perspectives that have historically dominated the scholarship.
In order to advance media ethics scholarship in the 21st-century environment of globally networked communication, in which journalists work among a host of other actors (well-intentioned, ill-intentioned, and automated), it is important for researchers to apply existing media ethics frameworks to social media practices. This application needs to address the challenges that social media create when crossing cultures, the common difficulties they pose worldwide for journalistic verification practices, and the responsibility of journalists for countering misinformation from malicious actors.
It is also important to the further development of media ethics scholarship that future normative theorizing in the field—whether developing new frameworks or redeveloping current ones—consider journalistic responsibilities in relation to social media in the context of both the human and nonhuman actors in the communication network. The developing scholarly literature on the ethics of algorithms bears further attention from media ethics scholars for the ways it may provide perspectives that are complementary to existing media ethics frameworks that have focused on human actors and organizations.
Article
The Implications of School Assignment Mechanisms for Efficiency and Equity
Atila Abdulkadiroğlu
Parental choice over public schools has become a major policy tool to combat inequality in access to schools. Traditional neighborhood-based assignment is being replaced by school choice programs, broadening families’ access to schools beyond their residential location. Demand and supply in school choice programs are cleared via centralized admissions algorithms. Heterogeneous parental preferences and admissions policies create trade-offs between efficiency and equity. The data from centralized admissions algorithms can be used effectively for credible research designs toward a better understanding of school effectiveness, which in turn can be used for school portfolio planning and student assignment based on match quality between students and schools.
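The abstract does not name a particular mechanism, but centralized admissions of this kind are commonly cleared with the student-proposing deferred acceptance algorithm. The sketch below is a minimal, hypothetical illustration of that mechanism, not the author's implementation; the student names, priorities, and capacities are invented for the example, and every school is assumed to rank every applicant.

```python
# Minimal sketch of student-proposing deferred acceptance, one common
# centralized school-admissions algorithm. Illustrative only: inputs are
# hypothetical and schools are assumed to rank every applicant.
def deferred_acceptance(student_prefs, school_priority, capacity):
    rank = {s: {stu: i for i, stu in enumerate(order)}
            for s, order in school_priority.items()}
    next_choice = {stu: 0 for stu in student_prefs}   # index of next school to try
    held = {s: [] for s in school_priority}           # tentatively admitted students
    unassigned = set(student_prefs)
    while unassigned:
        stu = unassigned.pop()
        prefs = student_prefs[stu]
        if next_choice[stu] >= len(prefs):
            continue                                  # preference list exhausted
        school = prefs[next_choice[stu]]
        next_choice[stu] += 1
        held[school].append(stu)
        held[school].sort(key=lambda x: rank[school][x])
        while len(held[school]) > capacity[school]:   # over capacity: reject lowest priority
            unassigned.add(held[school].pop())
    return held

assignment = deferred_acceptance(
    student_prefs={"ana": ["north", "south"], "ben": ["north", "south"],
                   "cho": ["south", "north"]},
    school_priority={"north": ["ben", "ana", "cho"], "south": ["ana", "cho", "ben"]},
    capacity={"north": 1, "south": 2},
)
print(assignment)  # {'north': ['ben'], 'south': ['ana', 'cho']}
```

Student-proposing deferred acceptance is strategy-proof for students and yields a stable assignment, which is one reason it features prominently in the efficiency and equity trade-offs discussed above.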
Article
Big Data and Visuality
Janet Chan
Internet and telecommunications, ubiquitous sensing devices, and advances in data storage and analytic capacities have heralded the age of Big Data, where the volume, velocity, and variety of data not only promise new opportunities for the harvesting of information, but also threaten to overload existing resources for making sense of this information. The use of Big Data technology for criminal justice and crime control is a relatively new development. Big Data technology has overlapped with criminology in two main areas: (a) Big Data is used as a type of data in criminological research, and (b) Big Data analytics is employed as a predictive tool to guide criminal justice decisions and strategies. Much of the debate about Big Data in criminology is concerned with legitimacy, including privacy, accountability, transparency, and fairness.
Big Data is often made accessible through data visualization. Big Data visualization is a performance that simultaneously masks the power of commercial and governmental surveillance and renders information political. The production of visuality operates in an economy of attention. In crime control enterprises, future uncertainties can be masked by affective triggers that create an atmosphere of risk and suspicion. There have also been efforts to mobilize data to expose harms and injustices and garner support for resistance. While Big Data and visuality can perform affective modulation in the race for attention, the impact of data visualization is not always predictable. By removing the visibility of real people or events and by aestheticizing representations of tragedies, data visualization may achieve further distancing and deadening of conscience in situations where graphic photographic images might at least garner initial emotional impact.
Article
Race and Digital Discrimination
Seeta Peña Gangadharan
Race and digital discrimination is a topic of interdisciplinary interest that examines the communicative, cultural, and social dimensions of digital technologies in relation to race, racial identity, and racial inequalities, harms, or violence. Intellectual traditions in this area span vast terrain, including those that theorize identity and digitally mediated representation, those that explore social, political, and economic implications of unequal access to technological resources, and those that explore technical underpinnings of racial misidentification in digital systems. The object of inquiry thus varies from racialized interactions in digital spaces, to the nature or extent of access to high-speed broadband infrastructure, to levels of accuracy in computer automated systems. Some research orients toward policy or technical interventions to safeguard civil and human rights of individuals and groups and prevent racial discrimination in the design and use of digital technologies. Other strands of race and digital discrimination scholarship focus on diagnosing the (both recent and distant) past to excavate ways in which race itself functions as a technology.
The variety in approaches to the study of race and digital discrimination has evolved organically. Following a general concern for bias in the design, development, and use of digital technologies, scholarship in the 1990s began to center its attention on the problem of racialized discrimination in computerized, data-driven systems. In the earlier part of the 1990s, scholars writing about surveillance warned about the social, political, and economic consequences of sorting or categorizing individuals into groups. Toward the latter half of the 1990s, several scholars began scrutinizing the incorporation of specific values—and hence bias—into the computational design of technological systems, while others began looking explicitly at racialized interactions among users in virtual communities and other online spaces. Throughout the early 2000s, scholarship—particularly in European and US contexts—on race and racialization in different aspects of the design, development, and use of digital technologies began to emerge. The advancement and rapid commercialization of new digital technologies—from platforms to AI—have heightened interest in race and digital discrimination alongside social movements and social upheaval in relation to problems of systemic and institutionalized racism. Scholars have also taken interest in examining the ways in which race itself functions as a technology, primarily with attention to race’s discursive power.
The study of race and digital discrimination in all its varieties will remain relevant to issues of social ordering and hierarchy. Scholarship on race and digital discrimination has been instrumental in broadening critical and cultural perspectives on technology. Its ability to expose historically and culturally specific dimensions of race and racial inequality in digital society has helped scholars question modernist assumptions of progress and universal benefit of technological development. This body of work will continue to push discussion and debate on the nature of racialized inequalities in future eras of technological innovation.
Article
Digital Posthuman Autobiography
Laurie McNeill
Since the 2010s, auto/biography studies has engaged in productive explorations of its intersections with theories of posthumanism. In unsettling the concept of the human as the agential speaking subject central to autobiographical acts, posthumanism challenges core concerns of auto/biography (and humanism), including identity, agency, ethics, and relationality, as well as traditional expectations of auto/biographical narrative as focused on a (human) life, often singular and exceptional, chronicling a narrative of progress over time: the figure and product of the liberal humanist subject that posthumanism and autobiography studies have both critiqued. In its place, the posthuman autobiographical subject holds distributed, relativized agency as a member of a network through which it is co-constituted, a network that includes humans and non-humans in unhierarchized relations. Posthuman theories of autobiography examine how such webs of relation might shift understanding of the production and reception of an autobiographer and text.
In digital posthuman autobiography, the auto/biographer is working in multimodal ways, across platforms, shaping and shaped by the affordances of these sites, continually in the process of becoming through dynamic engagement and interaction with the rest of the network. The human-machinic interface of such digital texts and spaces illustrates the rethinking required to account for the relational, networked subjectivity and texts that are evolving within digital platforms and practices. The role of algorithms and datafication—the process through which experiences, knowledge, and lives are turned into data—as corporate, non-consensual co-authors of online auto/biographical texts particularly raises questions about the limits and agency of the human and the auto/biographical, with software not only coaxing, coercing, and coaching certain kinds of self-representation, but also, through the aggregating process of big data, creating its own versions of subjects for its own purposes. Data portraits, data mining, and data doubles are representations based on auto/biographical source texts, but not ones the original subject or their communities have imagined for themselves. However, the affordances and collaborations created by participation in the digital web also foster a networked agency through which individuals-in-relation can testify to and document experience in collective ways, working within and beyond the norms imagined by the corporate and machinic. The potential for posthuman testimony and the proliferation of autobiographical moments or “small data” suggest the potential of digital autobiographical practices to articulate what it means to be a human-in-relation, to be alive in a network.
Article
Lives and Data
Elizabeth Rodrigues
Although frequently associated with the digital era, data is an epistemological concept and representational form that has intersected with the narration of lives for centuries. With the rise of Baconian empiricism, methods of collecting discrete observations became the predominant way of knowing the physical world in Western epistemology. Exhaustive data collection came to be seen as the precursor to ultimate knowledge, theorized to have the potential to reveal predictive patterns without the intervention of human theory. Lives came to be seen as potential data collections, on the individual and the social level. As individuals have come to see value in collecting the data of their own lives, practices of observing and recording the self that characterize spiritual and diaristic practices have been inflected by a secular epistemology of data emphasizing exhaustivity in collection and self-improvement goals aimed at personal wellness and economic productivity. At the social level, collecting data about human lives has become the focus of a range of academic disciplines, governmental structures, and corporate business models.
Nineteenth-century social sciences turned toward data collection as a method of explanation and prediction in earnest, and these methods were especially likely to be focused on the lives of minoritized populations. Theories of racial identity and difference emerging from such studies drew on the rhetoric of data as unbiased to enshrine white supremacist logic and law. The tendency to use data to categorize and thereby direct human lives has continued and manifests in 21st-century practices of algorithmic identification. At both the individual and social scales of collection, though, data holds the formal and epistemological potential to challenge narrative singularity by bringing the internal heterogeneity of any individual life or population into view. Yet it is often used to argue for singular revelation, the assignment of particular narratives to particular lives. Throughout the long history of representing lives as data in Western contexts, life writers have engaged with data conceptually and aesthetically in multiple ways: experimenting with its potential for revelation, critiquing its abstraction and totalization, developing data collection projects that are embodied and situated, using data to develop knowledge in service of oppressed communities, calling attention to data’s economic and political power, and asserting the narrative multiplicity and interpretive agency inherent in the telling of lives.
Article
Close Reading
Mark Byron
Close reading describes a set of procedures and methods that distinguishes the scholarly apprehension of textual material from the more prosaic reading practices of everyday life. Its origins and ancestry are rooted in the exegetical traditions of sacred texts (principally from the Hindu, Jewish, Buddhist, Christian, Zoroastrian, and Islamic traditions) as well as the philological strategies applied to classical works such as the Homeric epics in the Greco-Roman tradition, or the Chinese 詩經 (Shijing) or Classic of Poetry. Cognate traditions of exegesis and commentary formed around Roman law and the canon law of the Christian Church, and they also find expression in the long tradition of Chinese historical commentaries and exegeses on the Five Classics and Four Books. As these practices developed in the West, they were adapted to medieval and early modern literary texts from which the early manifestations of modern secular literary analysis came into being in European and American universities. Close reading comprises the methodologies at the center of literary scholarship as it developed in the modern academy over the past one hundred years or so, and has come to define a central set of practices that dominated scholarly work in English departments until the turn to literary and critical theory in the late 1960s. This article provides an overview of these dominant forms of close reading in the modern Western academy. The focus rests upon close reading practices and their codification in English departments, although reference is made to non-Western reading practices and philological traditions, as well as to significant nonanglophone alternatives to the common understanding of literary close reading.
Article
Work Design in the Contemporary Era
Caroline Knight, Sabreen Kaur, and Sharon K. Parker
Work design refers to the roles, responsibilities, and work tasks that comprise an individual’s job and how they are structured and organized. Good work design is created by jobs high in characteristics such as autonomy, social support, and feedback, and moderate in job demands such as workload, role ambiguity, and role conflict. Established research shows good work design is associated with work outcomes such as job satisfaction, organizational commitment, work safety, and job performance. Poor work design is characterized by roles that are low in job resources and/or overly high in job demands, and has been linked to poor health and well-being, absenteeism, and poor performance. Work design in the 20th century was characterized by traditional theories focusing on work motivation, well-being, and performance. Motivational and stress theories of work design were later integrated, and work characteristics were expanded to include a whole variety of task, knowledge, social, and work-context characteristics as well as demands, better reflecting contemporary jobs. In the early 21st century, relational theories flourished, focusing on the social and prosocial aspects of work. The role of work design on learning and cognition was also recognized, with benefits for creativity and performance.
Work design is affected by many factors, including individual traits, organizational factors, national factors, and global factors. Managers may impact employees’ work design “top-down” by changing policies and procedures, while individuals may change their own work design “bottom-up” through “job crafting.”
In the contemporary era, technology and societal factors play an important role in how work is changing. Information and communication technology has enabled remote working and collaboration across time and space, with positive implications for efficiency and flexibility, but potentially also increasing close monitoring and isolation. Automation has led to daily interaction with technologies like robots, algorithms, and artificial intelligence, which can influence autonomy, job complexity, social interaction, and job demands in different ways, ultimately impacting how motivating jobs are.
Given the rapidly changing nature of work, it is critical that managers and organizations adopt a human-centered approach to designing work, with managers sensitive to the positive and negative implications of contemporary work on employees’ work design, well-being, and performance. Further research is needed to understand the multitude of multilevel factors influencing work design, how work can be redesigned to optimize technology and worker motivation, and the shorter- and longer-term processes linking work design to under-researched outcomes like identity, cognition, and learning. Overall, the aim is to create high-quality contemporary work in which all individuals can thrive.
Article
How Libel Law Applies to Automated Journalism
Jonathan Peters
Automated journalism—the use of algorithms to translate data into narrative news content—is enabling all manner of outlets to increase efficiency while scaling up their reporting in areas as diverse as financial earnings and professional baseball. With these technological advancements, however, come serious risks. Algorithms are not good at interpreting or contextualizing complex information, and they are subject to biases and errors that ultimately could produce content that is misleading or false, even libelous. It is imperative, then, to examine how libel law might apply to automated news content that harms the reputation of a person or an organization.
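As a rough illustration of what "translating data into narrative news content" can look like in its simplest form, the sketch below fills a fixed template from structured earnings data. It is a hypothetical toy example, not any outlet's actual system; the field names, wording, and figures are invented.

```python
# Toy sketch of template-based automated journalism: structured earnings
# figures are turned into a short narrative item. Hypothetical example only.
def earnings_story(r: dict) -> str:
    change = r["eps"] - r["eps_prior"]
    direction = "rose" if change > 0 else "fell" if change < 0 else "held steady"
    verdict = "beating" if r["eps"] > r["eps_estimate"] else "missing"
    return (f"{r['company']} reported earnings of ${r['eps']:.2f} per share for "
            f"{r['quarter']}, which {direction} from ${r['eps_prior']:.2f} a year "
            f"earlier, {verdict} analyst estimates of ${r['eps_estimate']:.2f}.")

print(earnings_story({"company": "Example Corp", "quarter": "Q2 2024",
                      "eps": 1.32, "eps_prior": 1.10, "eps_estimate": 1.25}))
```

A pipeline of this kind reproduces whatever the input data say, with no judgment about their accuracy or context, which is precisely where the risks of misleading, false, or libelous content arise.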
When that examination is conducted from the perspective of U.S. law, chosen because of its uniquely expansive constitutional protections in the area of libel, it appears that the First Amendment would cover algorithmic speech—meaning that the First Amendment’s full supply of tools, principles, and presumptions would apply to determine whether particular automated news content would be protected. In the area of libel, the most significant issues come under the plaintiff’s burden to prove that the libelous content was published by the defendant (with a focus on whether automated journalism would qualify for the immunity available to providers of interactive computer services) and that the content was published through the defendant’s fault (with a focus on whether an algorithm could behave with the actual malice or negligence usually required to satisfy this inquiry). There is also a significant issue under the opinion defense, which provides broad constitutional protection for statements of opinion (with a focus on whether an algorithm itself is capable of having beliefs or ideas, which generally inform an opinion).
Article
Mobile Applications and Journalistic Work
Allison J. Steinke and Valerie Belair-Gagnon
In the early 2000s, along with the emergence of social media in journalism, mobile chat applications began to gain significant footing in journalistic work. Interdisciplinary research, particularly in journalism studies, has started to look at apps in journalistic work from producer and user perspectives. Still in its infancy, scholarly research on apps and journalistic work reflects larger trends explored in digital journalism studies, while expanding the understanding of mobile news.
Article
Adiabatic Quantum Computing and Quantum Annealing
Erica K. Grant and Travis S. Humble
Adiabatic quantum computing (AQC) is a model of computation that uses quantum mechanical processes operating under adiabatic conditions. As a form of universal quantum computation, AQC employs the principles of superposition, tunneling, and entanglement that manifest in quantum physical systems. The AQC model of quantum computing is distinguished by the use of dynamical evolution that is slow with respect to the time and energy scales of the underlying physical systems. This adiabatic condition enforces the promise that the quantum computational state will remain well-defined and controllable, thus enabling the development of new algorithmic approaches.
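In its standard textbook form (a sketch of the general model, not a result specific to this article), the computation interpolates between an initial Hamiltonian H_B, whose ground state is easy to prepare, and a problem Hamiltonian H_P, whose ground state encodes the solution:

```latex
% Standard AQC interpolation; s = t/T runs from 0 to 1 over total runtime T.
H(s) = (1 - s)\,H_B + s\,H_P, \qquad s = t/T.

% A common rough statement of the adiabatic condition: the runtime T must be
% large relative to the inverse square of the minimum spectral gap
% \Delta_{\min} between the instantaneous ground and first excited states,
T \gg \frac{\max_{s}\bigl|\langle 1(s)\,|\,\partial_s H(s)\,|\,0(s)\rangle\bigr|}{\Delta_{\min}^{2}}.
```

When this condition holds, the system remains in its instantaneous ground state throughout the evolution and therefore ends in the ground state of H_P.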
Several notable algorithms developed within the AQC model include methods for solving unstructured search and combinatorial optimization problems. In an idealized setting, the asymptotic complexity analyses of these algorithms indicate that computational speed-ups may be possible relative to state-of-the-art conventional methods. However, the presence of non-ideal conditions, including non-adiabatic dynamics, residual thermal excitations, and physical noise, complicates the assessment of the potential computational performance. A relaxation of the adiabatic condition is captured in the complementary computational heuristic of quantum annealing, which accommodates physical systems operating at finite temperature and in open environments. While quantum annealing (QA) provides a more accurate model for the behavior of actual quantum physical systems, the possibility of non-adiabatic effects obscures a clear separation from conventional computing complexity.
A series of technological advances in the control of quantum physical systems have enabled experimental AQC and QA. Prominent examples include demonstrations using superconducting electronics, which encode quantum information in the magnetic flux induced by a weak current operating at cryogenic temperatures. A family of devices developed specifically for unconstrained optimization problems has been applied to solve problems in specific domains including logistics, finance, material science, machine learning, and numerical analysis. An accompanying infrastructure has also developed to support these experimental demonstrations and to enable access of a broader community of users. Although AQC is most commonly applied in superconducting technologies, alternative approaches include optically trapped neutral atoms and ion-trap systems.
The significant progress in the understanding of AQC has revealed several open topics that continue to motivate research into this model of quantum computation. Foremost is the development of methods for fault-tolerant operation that will ensure the scalability of AQC for solving large-scale problems. In addition, unequivocal experimental demonstrations that differentiate the computational power of AQC and its variants from conventional computing approaches are needed. This will also require advances in the fabrication and control of quantum physical systems under the adiabatic restrictions.
Article
Queer Music Practices in the Digital Realm
Ben De Smet
The relevance of music in lesbian, gay, bisexual, transgender, queer, and other sexually nonnormative (LGBTQ+) lives and identities has been extensively established and researched. Studies have focused on queer performances, fandom, nightlife, and other aspects of music to examine the intimate, social, and political relations between music and LGBTQ+ identities. At a time when music culture is increasingly produced, distributed, and consumed in digital spaces, relations between music and LGBTQ+ identities are meaningfully informed by these spaces. As this is a relatively recent development and offline music practices remain profoundly meaningful and relevant, the amount of research on queer digital music practices remains modest. However, a rich body of literature in the fields of popular music studies, queer studies, and new media studies provides an array of inspiring angles and perspectives to shed light on these matters, and this literature can be situated and critically linked.
For over half a century, popular music studies have directed their attention to the relations between the social and the musical. Under the impulse of feminist studies, gender identities soon became a prominent focus within popular music studies, and, driven by LGBTQ+ studies, (non-normative) sexual identities soon followed. As popular music studies developed a rich theoretical basis, and feminist and queer studies evolved over the years into more intersectional and queer directions, popular music studies focusing on gender and/or sexuality gradually stepped away from their initial somewhat rigid, binary perspectives in favor of more open, dynamic, and queer perspectives.
Following a similar path, early new media studies struggled to avoid simplistic, naïve, or gloomy deterministic analyses of the Internet and new media. As the field evolved, alongside the technologies that form its focus, a more nuanced, mutual, and agency-based approach emerged. Here, too, scholars have introduced queer perspectives and have applied them to research a range of LGBTQ+-related digital phenomena.
Today, popular music, sexual identities, and new media have become meaningful aspects of social life, and much more remains to be explored, particularly at the intersection of these fields. A diverse array of queer fan practices, music video practices, and music streaming practices is waiting to be examined. The theory and the tools are there.
Article
Information Processing and Digitalization in Bureaucracies
Tero Erkkilä
Bureaucracies and their processing of information have evolved along with the formation of states, from the absolutist state to the welfare state and beyond. Digitalization has both reflected and expedited these changes, but it is important to keep in mind that digital-era governance is also conditioned by existing information resources as well as by institutional practices and administrative culture. To understand the digital transformations of states, one needs to engage in contextual analysis of the actual changes, which might reveal even paradoxical and unintended effects. Initially, studies of the effects of information systems on bureaucracies focused on single organizations. But the focus has since shifted toward digitally enhanced interaction with society in terms of service provision, responsiveness, participatory governance, and deliberation, as well as the economic exploitation of public data. Indeed, the history of digitalization in bureaucracies also reads as an account of their opening. But there are also contradictory developments concerning the use of big data, learning systems, and digital surveillance technologies, which have created new confidential or secretive domains of information processing in bureaucracies. Another pressing topic is the automation of decision-making, which can range from rules-based decisions to learning systems. This has created new demands for control, in terms of both citizens’ information rights and accountability systems. While one should be cautious about claims of revolutionary change, the increasing tempo and interconnectedness characterizing the digitalization of bureaucratic activities pose major challenges for public accountability. The historical roots of state information are important in understanding changes in information processing in public administration through digitalization, highlighting the transformations of states, new stakeholders and forms of collaboration, and emerging questions of accountability. But instead of readily assuming structural changes, one should engage in contextualized analysis of the actual effects of digitalization to fully understand them.
Article
Quasi Maximum Likelihood Estimation of High-Dimensional Factor Models
Matteo Barigozzi
Factor models are some of the most common dimension reduction techniques in time series econometrics. They are based on the idea that each element of a set of N time series is made of a common component, driven by a few latent factors capturing the main comovements among the series, plus an idiosyncratic component, often representing just measurement error or at most being weakly cross-sectionally correlated with the other idiosyncratic components. When N is large, the factors can be retrieved by cross-sectional aggregation of the observed time series. This is the so-called blessing of dimensionality, meaning that letting N grow to infinity poses no estimation problem but is in fact a necessary condition for consistent estimation of the factors and for identification of the common and idiosyncratic components.
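In standard notation (a generic sketch, not tied to the specific papers discussed below), the approximate factor model writes each observed series as a common plus an idiosyncratic part:

```latex
% Approximate factor model for an N-dimensional panel, t = 1, ..., T:
x_{it} = \lambda_i' F_t + \xi_{it}, \qquad i = 1, \dots, N,
% where F_t is an r x 1 vector of latent factors with r << N, \lambda_i are
% the factor loadings, and \xi_{it} is the idiosyncratic component, allowed
% to be weakly cross-sectionally correlated across i.
```

Weighted cross-sectional averages of the observed series wash out the idiosyncratic terms as N grows, leaving the common component, which is the blessing of dimensionality referred to above.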
There exist two main ways to estimate a factor model: principal component analysis and maximum likelihood estimation. The former method is more recent and more common in econometrics, but the latter, which is the classical approach, has many appealing features, such as allowing one to impose constraints, deal with missing values, and explicitly model the dynamics of the factors. Maximum likelihood estimation of large factor models has been studied in two influential papers: Doz et al.’s “A Quasi Maximum Likelihood Approach for Large Approximate Dynamic Factor Models” and Bai and Li’s “Maximum Likelihood Estimation and Inference for Approximate Factor Models of High Dimension.” The latter considers the static case, which is closer to the classical approach and in which no model for the factors is assumed, while the former is more general: it considers estimation combined with the use of Kalman filtering techniques, an approach that has grown popular in macroeconomic policy analysis.
Those two papers, together with other recent results, have produced new asymptotic results, and a synthesis of these is provided. Special attention is paid to the set of assumptions, which is taken to be the minimal set required to obtain the results.
Article
Global Political Economy, Platforms, and Media Industries
Dal Yong Jin
Critical political economy has emphasized the tensions and power relations between global forces and local forces as well as between the political and the economic. Since media ownership has become one of the major elements widening the existing gaps between a few powerful actors and the majority of underprivileged players, critical political economy focuses on the significant role of ownership in media and communication studies. Critical political economy has also continued to emphasize structural change in media industries in the broader socio-economic milieu. In the early 21st century, critical political economy has shifted its emphasis toward digital platforms, such as over-the-top service platforms like Netflix, social media platforms like YouTube, and search engines like Google, as these digital platforms, supported by artificial intelligence algorithms and big data, are primary actors in the global cultural industries. They are not only shifting the milieu surrounding cultural industries but also transforming the entire value chain in cultural production, including the production of popular culture, the circulation of cultural products, and the consumption of cultural content. Critical political economy needs to analyze power relations between platform owners and platform users, as well as between a few countries in the Global North that possess these digital platforms and the majority of countries in the Global South that, owing to a lack of capital, manpower, and know-how, cannot advance their own platforms. This implies that critical political economy needs to analyze how global digital platforms, as part of Western cultural industries, have controlled and manipulated local cultural industries. By discussing change and continuity in the cultural industries in the digital media–driven environment, which expedites the concentration of the industry, a new international division of labor, and the practice of platform imperialism, critical political economy will shed light on the existing debates about power relations within the broader society.
Article
Digital Cultures and Critical Studies
Larissa Hjorth
The digital is now an integral part of everyday cultural practices globally. This ubiquity makes studying digital culture both more complex and more divergent. Much of the literature on digital culture argues that it is increasingly informed by playful and ludified characteristics. Alongside this phenomenon, there has been a rise of innovative and playful methods to explore identity politics and place-making in an age of datafication.
At the core of the interdisciplinary debates underpinning the understanding of digital culture are the ways in which STEM (Science, Technology, Engineering and Mathematics) and HASS (Humanities, Arts and Social Science) approaches have played out in, and through, algorithms and datafication (e.g., the rise of small data [ethnography] to counteract big data). As digital culture becomes all-encompassing, data and its politics become central. Understanding digital culture requires acknowledging that datafication and algorithmic cultures are now commonplace—that is, that data penetrate, invade, and analyze our daily lives, causing anxiety and being perceived as potentially inaccurate statistical captures.
Alongside the use of big data, the quantified self (QS) movement is amplifying the need to think more about how our data stories are being told and who is doing the telling. Tensions and paradoxes ensue—power and powerlessness; the tactical and the strategic; identity and anonymity; statistics and practices; big data and little data. The ubiquity of digital culture is explored through the lens of play and playful resistance. In the face of algorithms and datafication, the contestation around playing with data takes on important features. In sum, play becomes a series of methods or modes of critique for agency and autonomy. Playfully acting against data as a form of resistance is a key method used by artists, designers, and creative practitioners working in the digital realm, and such methods are not easily defined.
Article
Analog Models for Empirical-Statistical Downscaling
María Laura Bettolli
Global climate models (GCM) are fundamental tools for weather forecasting and climate predictions at different time scales, from intraseasonal prediction to climate change projections. Their design allows GCMs to simulate the global climate adequately, but they are not able to skillfully simulate local/regional climates. Consequently, downscaling and bias correction methods are increasingly needed and applied for generating useful local and regional climate information from the coarse GCM resolution.
Empirical-statistical downscaling (ESD) methods generate climate information at the local scale or with a greater resolution than that achieved by GCM by means of empirical or statistical relationships between large-scale atmospheric variables and the local observed climate. As a counterpart approach, dynamical downscaling is based on regional climate models that simulate regional climate processes with a greater spatial resolution, using GCM fields as initial or boundary conditions.
Various ESD methods can be classified according to different criteria, depending on their approach, implementation, and application. In general terms, ESD methods can be categorized into subgroups that include transfer functions or regression models (either linear or nonlinear), weather generators, and weather typing methods and analogs. Although these methods can be grouped into different categories, they can also be combined to generate more sophisticated downscaling methods. In the last group, weather typing and analogs, the methods relate the occurrence of particular weather classes to local and regional weather conditions. In particular, the analog method is based on finding atmospheric states in the historical record that are similar to the atmospheric state on a given target day. Then, the corresponding historical local weather conditions are used to estimate local weather conditions on the target day.
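A minimal sketch of that analog step is given below. It is illustrative only: the Euclidean distance over standardized predictor fields, the choice of five analogs, and the synthetic data are assumptions for the example, not a reference implementation.

```python
import numpy as np

# Minimal sketch of the analog method for empirical-statistical downscaling:
# find the k historical days whose large-scale atmospheric state is closest
# to the target day's state, and average their observed local weather.
# Distance metric, k, and array layouts are illustrative assumptions.
def analog_downscale(target_state, historical_states, historical_local, k=5):
    """target_state: (p,) large-scale predictor vector for the target day
       historical_states: (n_days, p) archive of large-scale predictors
       historical_local: (n_days,) observed local predictand (e.g., precipitation)"""
    # standardize predictors so each large-scale variable gets comparable weight
    mean = historical_states.mean(axis=0)
    std = historical_states.std(axis=0) + 1e-12
    z_hist = (historical_states - mean) / std
    z_target = (target_state - mean) / std
    # Euclidean distance between the target day and every historical day
    dist = np.linalg.norm(z_hist - z_target, axis=1)
    nearest = np.argsort(dist)[:k]
    # estimate: average of local observations on the k most similar days
    return historical_local[nearest].mean()

# Hypothetical usage with random data standing in for reanalysis fields
rng = np.random.default_rng(0)
hist_states = rng.normal(size=(1000, 20))
hist_precip = rng.gamma(2.0, 2.0, size=1000)
print(round(analog_downscale(hist_states[0] + 0.1, hist_states, hist_precip), 2))
```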
The analog method is a relatively simple technique that has been extensively used as a benchmark method in statistical downscaling applications. Easy to construct and applicable to any predictand variable, it has been shown to perform as well as other, more sophisticated methods. These attributes have inspired its application in diverse studies around the world that explore its ability to simulate different characteristics of regional climates.
Article
Design of Discrete Choice Experiments
Deborah J. Street and Rosalie Viney
Discrete choice experiments (DCEs) are a popular stated preference tool in health economics and have been used to address policy questions, establish consumer preferences for health and healthcare, and value health states, among other applications. They are particularly useful when revealed preference data are not available. Most commonly in choice experiments, respondents are presented with a situation in which a choice must be made and with a set of possible options. The options are described by a number of attributes, each of which takes a particular level for each option. The set of possible options is called a “choice set,” and a set of choice sets comprises the choice experiment. The attributes and levels are chosen by the analyst to allow modeling of the underlying preferences of respondents. Respondents are assumed to make utility-maximizing decisions, and the goal of the choice experiment is to estimate how the attribute levels affect the utility of the individual. Utility is assumed to have a systematic component (related to the attributes and levels) and a random component (which may relate to unobserved determinants of utility, individual characteristics, or random variation in choices), and an assumption must be made about the distribution of the random component. The structure of the set of choice sets shown to respondents, drawn from the universe of possible choice sets represented by the attributes and levels, determines which models can be fitted to the observed choice data and how accurately the effects of the attribute levels can be estimated. Important structural issues include the number of options in each choice set and whether or not options in the same choice set have common attribute levels. Two broad approaches to constructing the set of choice sets that make up a DCE exist—theoretical and algorithmic—and no consensus exists about which approach consistently delivers better designs, although simulation studies and in-field comparisons of designs constructed by both approaches exist.
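In a common random-utility specification (standard textbook notation rather than anything specific to this article), the utility respondent n derives from option j is written as a systematic part that is linear in the attribute levels plus a random error; assuming i.i.d. type-I extreme value errors gives the conditional logit choice probabilities:

```latex
% Random utility with systematic and random components
U_{nj} = x_{nj}'\beta + \varepsilon_{nj},

% and, with \varepsilon_{nj} i.i.d. type-I extreme value, the probability that
% respondent n chooses option j from choice set C is
P_n(j \mid C) = \frac{\exp(x_{nj}'\beta)}{\sum_{k \in C} \exp(x_{nk}'\beta)}.
```

The design question described above is then which choice sets to show so that the attribute coefficients can be estimated precisely under this or a more flexible model.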