1-20 of 22 Results for Keywords: big data

Article

Big Data and Visuality  

Janet Chan

Internet and telecommunications, ubiquitous sensing devices, and advances in data storage and analytic capacities have heralded the age of Big Data, where the volume, velocity, and variety of data not only promise new opportunities for the harvesting of information, but also threaten to overload existing resources for making sense of this information. The use of Big Data technology for criminal justice and crime control is a relatively new development. Big Data technology has overlapped with criminology in two main areas: (a) Big Data is used as a type of data in criminological research, and (b) Big Data analytics is employed as a predictive tool to guide criminal justice decisions and strategies. Much of the debate about Big Data in criminology is concerned with legitimacy, including privacy, accountability, transparency, and fairness. Big Data is often made accessible through data visualization. Big Data visualization is a performance that simultaneously masks the power of commercial and governmental surveillance and renders information political. The production of visuality operates in an economy of attention. In crime control enterprises, future uncertainties can be masked by affective triggers that create an atmosphere of risk and suspicion. There have also been efforts to mobilize data to expose harms and injustices and garner support for resistance. While Big Data and visuality can perform affective modulation in the race for attention, the impact of data visualization is not always predictable. By removing the visibility of real people or events and by aestheticizing representations of tragedies, data visualization may achieve further distancing and deadening of conscience in situations where graphic photographic images might at least garner initial emotional impact.

Article

Predictive Analytics and Big Data  

Oscar E. Cariceo, Murali Nair, and Wahaj Bokhari

Predictive analytics is a set of techniques and an advanced methodological and research approach that seeks to reach conclusions about the future, rather than explanations of specific issues or phenomena. The fast growth in the popularity and application of data science in different businesses and activities, such as human services and nonprofit management, is related to the emergence and consolidation of big data. In a digitally networked world, the amount of data produced by individuals every day is enormous. Tools and techniques such as machine learning, deep learning, visualization, time series analysis, network analysis, natural language processing, and text mining may help support evidence-based practice for social workers. Predictive analytics and big data offer an opportunity to enhance innovative social change and people's well-being.
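As a minimal illustration of the predictive workflow described above (the data set, features, and model choice are hypothetical stand-ins, not drawn from the article), a classifier could be trained on historical case records to flag cases likely to need follow-up support:

```python
# Minimal predictive-analytics sketch (hypothetical data and field meanings).
# A classifier is trained on past cases to estimate the probability that a
# new case will need follow-up support.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Stand-in for historical case records: 5 numeric features per case,
# label 1 = follow-up support was needed.
X, y = make_classification(n_samples=2_000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
risk_scores = model.predict_proba(X_test)[:, 1]  # predicted risk per case

print(f"Out-of-sample AUC: {roc_auc_score(y_test, risk_scores):.2f}")
```

In practice the predicted probabilities, rather than hard labels, are what a practitioner would review, since they support prioritization and human judgment rather than automated decisions.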

Article

Selective Avoidance and Exposure  

Kevin Arceneaux and Martin Johnson

Since the mid-20th century, communication researchers have recognized that audience members selectively expose themselves to information and opinions congenial to their pre-existing views. While this was a controversial idea during the broadcast era of mass media, the expansion of media choice on television and the use of information and communication technologies have brought increased attention to selectivity among audience members. Contemporary scholarship investigates the extent to which people select proattitudinal information or avoid counterattitudinal information and the role these choices play in the effects of media messages on viewers. While selective exposure is a broader phenomenon, this article focuses substantively on the use of politically partisan media, especially the research methods used to investigate media selectivity and its effects. This literature manifests increased attention to measurement, especially how the core concept of media exposure is measured; novel experimental designs intended to allow investigators to observe individual choice behavior directly in complex media environments; and new sources of large-scale data from social media and large text samples. Scholars agree that partisan websites and cable networks provide content politically distinct enough to allow viewers to segregate themselves into liberal and conservative audiences for news, but that this kind of polarized viewing is only part of how viewers use media today. A nuanced picture of selectivity shows audiences selecting congenial content but employing broader media use repertoires as well. The mechanisms and effects of media selectivity are psychologically complex and sensitive to contextual factors such as the political issue under consideration.

Article

From Decision Support to Analytics  

Ciara Heavin and Frederic Adam

Since the 1960s, information technology (IT)/information systems (IS) professionals, data practitioners, and senior managers have focused on developing decision support capabilities to enhance organizational decision making. Initially, this quest was mostly driven by successive generations of technological advances. However, in the last decade, the pace at which large volumes of diverse data can be collected and processed, new algorithmic advances, and the development of computational infrastructure such as graphics processing units (GPUs) and tensor processing units (TPUs) have created new opportunities for global businesses in areas such as financial services, manufacturing, retail, sports, and healthcare. At this point, it seems that most industries and public services could potentially be revolutionized by these new techniques. The word analytics has replaced the individual components of computerized decision support technologies that were developed under various labels in the past. Much of the traditional research and practitioner community concerned with decision support, decision support systems (DSSs), and business intelligence (BI) has reoriented its attention to innovative tools and technologies to derive value from new data streams through artificial intelligence (AI) and analytics. Identifying the main areas of focus for decision support and analytics provides a stimulus for new ideas for researchers, managers, and IS/IT and data professionals. These stakeholders need to undertake new empirical studies that explain how analytics can be used to develop and enhance new forms of decision support while considering the dilemmas that may arise from the capture and analysis of new digital data streams.

Article

Artificial Intelligence and Entrepreneurship Research  

Martin Obschonka and Christian Fisch

Advances in Artificial Intelligence (AI) are intensively shaping businesses and the economy as a whole, and AI-related research is exploding in many domains of business and management research. In contrast, AI has received relatively little attention within the domain of entrepreneurship research, even though many entrepreneurship scholars agree that AI will likely shape entrepreneurship research in deep, disruptive ways. When summarizing both the existing entrepreneurship literature on AI and potential avenues for future research, the growing relevance of AI for entrepreneurship research manifests itself along two dimensions. First, AI applications in the real world establish a distinct research topic (e.g., whether and how entrepreneurs and entrepreneurial ventures use and develop AI-based technologies, or how AI can function as an external enabler that generates and enhances entrepreneurial outcomes). In other words, AI is changing the research object in entrepreneurship research. The second dimension refers to drawing on AI-based research methods, such as big data techniques or AI-based forecasting methods. Such AI-based methods open several avenues for researchers to gain new, influential insights into entrepreneurs and entrepreneurial ventures that are more difficult to assess using traditional methods. In other words, AI is changing the research methods. Given that human intelligence has so far been unable to fully uncover and comprehend the secrets behind the entrepreneurial process, which is so deeply embedded in uncertainty and opportunity, AI-supported research methods might achieve new breakthrough discoveries. We conclude that the field needs to embrace AI as a topic and research method more enthusiastically while maintaining the essential research standards and scientific rigor that guarantee the field's well-being, reputation, and impact.

Article

Big Data and Urban Health  

Mark Stevenson, Jason Thompson, and Thanh Ho

Understanding of the varied effects of urban environments on our health has arisen through centuries of observation and analysis. Various units of observation, when compiled spatially or linearly, have provided considerable understanding of the causal pathways between environmental exposures in cities and associated mortality and morbidity. With growing urban agglomerations and a digital age providing timely and standardized data, unique insights are being provided that further enhance the understanding of urban health. There is no longer a lack of urban data; over the 2010–2020 decade alone, the resolution and standardization of satellite and street imagery, for example, alongside artificial intelligence methods such as self-supervised learning, have meant that technology and its capacity have surpassed the accuracy and resolution of many administrative data collections typically used for urban health research. From the Bills of Mortality in 1665 to 20th-century surveillance systems to the innovation and global reach of the era of "big data," data have been the mainstay of decision support systems over the centuries. This new world of big data, characterized by volume, velocity, variety, veracity, variability, volatility, and value, is paramount to answering the significant urban health challenges of the 21st century.

Article

Information Processing and Digitalization in Bureaucracies  

Tero Erkkilä

Bureaucracies and their processing of information have evolved along with the formation of states, from the absolutist state to the welfare state and beyond. Digitalization has both reflected and expedited these changes, but it is important to keep in mind that digital-era governance is also conditioned by existing information resources as well as institutional practices and administrative culture. To understand the digital transformations of states, one needs to engage in contextual analysis of the actual changes, which might reveal even paradoxical and unintended effects. Initially, studies on the effects of information systems on bureaucracies focused on single organizations, but the focus has since shifted toward digitally enhanced interaction with society in terms of service provision, responsiveness, participatory governance, and deliberation, as well as economic exploitation of public data. Indeed, the history of digitalization in bureaucracies also reads as an account of their opening. But there are also contradictory developments concerning the use of big data, learning systems, and digital surveillance technologies that have created new confidential or secretive domains of information processing in bureaucracies. Another pressing topic is the automation of decision making, which can range from rules-based decisions to learning systems. This has created new demands for control, in terms of both citizen information rights and accountability systems. While one should be cautious about claims of revolutionary changes, the increasing tempo and interconnectedness characterizing the digitalization of bureaucratic activities pose major challenges for public accountability. The historical roots of state information are important in understanding changes in information processing in public administration through digitalization, highlighting the transformations of states and new stakeholders and forms of collaboration, as well as emerging questions of accountability. But instead of readily assuming structural changes, one should engage in contextualized analysis of the actual effects of digitalization to fully understand them.

Article

Big Data and the Study of Communities and Crime  

Daniel T. O'Brien

In recent years, a variety of novel digital data sources, colloquially referred to as “big data,” have taken the popular imagination by storm. These data sources include, but are not limited to, digitized administrative records, activity on and contents of social media and internet platforms, and readings from sensors that track physical and environmental conditions. Some have argued that such data sets have the potential to transform our understanding of human behavior and society, constituting a meta-field known as computational social science. Criminology and criminal justice are no exception to this excitement. Although researchers in these areas have long used administrative records, in recent years they have increasingly looked to the most recent versions of these data, as well as other novel resources, to pursue new questions and tools.

Article

Big Data’s Role in Health and Risk Messaging  

Bradford William Hesse

The presence of large-scale data systems can be felt, consciously or not, in almost every facet of modern life, whether through the simple act of selecting travel options online, purchasing products from online retailers, or navigating through the streets of an unfamiliar neighborhood using global positioning system (GPS) mapping. These systems operate through the momentum of big data, a term introduced by data scientists to describe a data-rich environment enabled by a superconvergence of advanced computer-processing speeds and storage capacities; advanced connectivity between people and devices through the Internet; the ubiquity of smart, mobile devices and wireless sensors; and the creation of accelerated data flows among systems in the global economy. Some researchers have suggested that big data represents the so-called fourth paradigm in science, wherein the first paradigm was marked by the evolution of the experimental method, the second was brought about by the maturation of theory, the third was marked by an evolution of statistical methodology as enabled by computational technology, while the fourth extended the benefits of the first three, but also enabled the application of novel machine-learning approaches to an evidence stream that exists in high volume, high velocity, high variety, and differing levels of veracity. In public health and medicine, the emergence of big data capabilities has followed naturally from the expansion of data streams from genome sequencing, protein identification, environmental surveillance, and passive patient sensing. In 2001, the National Committee on Vital and Health Statistics published a road map for connecting these evidence streams to each other through a national health information infrastructure. Since then, the road map has spurred national investments in electronic health records (EHRs) and motivated the integration of public surveillance data into analytic platforms for health situational awareness. More recently, the boom in consumer-oriented mobile applications and wireless medical sensing devices has opened up the possibility for mining new data flows directly from altruistic patients. In the broader public communication sphere, the ability to mine the digital traces of conversation on social media presents an opportunity to apply advanced machine learning algorithms as a way of tracking the diffusion of risk communication messages. In addition to utilizing big data for improving the scientific knowledge base in risk communication, there will be a need for health communication scientists and practitioners to work as part of interdisciplinary teams to improve the interfaces to these data for professionals and the public. Too much data, presented in disorganized ways, can lead to what some have referred to as “data smog.” Much work will be needed for understanding how to turn big data into knowledge, and just as important, how to turn data-informed knowledge into action.
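As a toy sketch of the message-tracking idea mentioned above (the posts, dates, and keywords are invented for illustration; real applications would use far larger streams and more sophisticated classifiers), the diffusion of a risk-communication message could be charted by counting daily mentions of its key phrases:

```python
# Toy sketch of tracking message diffusion over time (hypothetical posts).
from collections import Counter
from datetime import date

posts = [  # (date, text) pairs standing in for a social media stream
    (date(2024, 5, 1), "Get your flu shot this week!"),
    (date(2024, 5, 1), "Weather is great today"),
    (date(2024, 5, 2), "Reminder: flu shot clinics open Saturday"),
    (date(2024, 5, 3), "Got my flu shot, quick and easy"),
]
keywords = {"flu shot"}

mentions_per_day = Counter(
    d for d, text in posts
    if any(k in text.lower() for k in keywords)
)
for day in sorted(mentions_per_day):
    print(day, mentions_per_day[day])
```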

Article

Digital Posthuman Autobiography  

Laurie McNeill

Since the 2010s, auto/biography studies have engaged in productive explorations of their intersections with theories of posthumanism. In unsettling concepts of the human, the agential speaking subject seen as central to autobiographical acts, posthumanism challenges core concerns of auto/biography (and humanism), including identity, agency, ethics, and relationality, and traditional expectations of auto/biographical narrative as focused on a (human) life, often singular and exceptional, chronicling a narrative of progress over time—the figure and product of the liberal humanist subject that posthumanism and autobiography studies have both critiqued. In its place, the posthuman autobiographical subject holds distributed, relativized agency as a member of a network through which it is co-constituted, a network that includes humans and non-humans in unhierarchized relations. Posthuman theories of autobiography examine how such webs of relation might shift understanding of the production and reception of an autobiographer and text. In digital posthuman autobiography, the auto/biographer is working in multimodal ways, across platforms, shaping and shaped by the affordances of these sites, continually in the process of becoming through dynamic engagement and interaction with the rest of the network. The human-machinic interface of such digital texts and spaces illustrates the rethinking required to account for the relational, networked subjectivity and texts that are evolving within digital platforms and practices. The role of algorithms and datafication—the process through which experiences, knowledge, and lives are turned into data—as corporate, non-consensual co-authors of online auto/biographical texts particularly raises questions about the limits and agency of the human and the auto/biographical, with software not only coaxing, coercing, and coaching certain kinds of self-representation, but also, through the aggregating process of big data, creating its own versions of subjects for its own purposes. Data portraits, data mining, and data doubles are representations based on auto/biographical source texts, but not ones the original subject or their communities have imagined for themselves. However, the affordances and collaborations created by participation in the digital web also foster a networked agency through which individuals-in-relation can testify to and document experience in collective ways, working within and beyond the norms imagined by the corporate and machinic. The potential for posthuman testimony and the proliferation of autobiographical moments or "small data" suggest the potential of digital autobiographical practices to articulate what it means to be a human-in-relation, to be alive in a network.

Article

Big Data and Communication Research  

Ralph Schroeder

Communication research has recently seen an influx of groundbreaking findings based on big data. Examples include not only analyses of Twitter, Wikipedia, and Facebook, but also of search engine and smartphone uses. These can be put together under the label "digital media." This article reviews some of the main findings of this research, emphasizing how big data findings contribute to existing theories and findings in communication research, a contribution that has so far been lacking. To do this, an analytical framework is developed concerning the sources of digital data and how they relate to the pertinent media. This framework shows how data sources support making statements about the relation between digital media and social change. It is also possible to distinguish a number of subfields that big data studies contribute to, including political communication, social network analysis, and mobile communication. One of the major challenges is that most of this research does not fall into the two main traditions in the study of communication, mass and interpersonal communication. This is readily apparent for media like Twitter and Facebook, where messages are often distributed in groups rather than broadcast or shared between only two people. This challenge also applies, for example, to the use of search engines, where the technology can tailor results to particular users or groups (an effect that has been labeled the "filter bubble"). The framework is used to locate and integrate big data findings in the landscape of communication research, and thus to provide a guide to this emerging area.

Article

Innovation in Artificial Intelligence: Illustrations in Academia, Apparel, and the Arts  

Andreas Kaplan

Artificial intelligence (AI), commonly defined as "a system's ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation," can be classified into analytical, human-inspired, and humanized AI depending upon its application of cognitive, emotional, and social intelligence. AI's foundations were laid in the 1950s. A sequence of vicissitudes in funding, interest, and support for AI followed. In 2015, AlphaGo, Google's AI-driven system, won against a human grandmaster in the highly complex board game Go. This is considered one of the most significant milestones in the development of AI and marks the start of a new period, enabling several AI innovations in a variety of sectors and industries. Higher education, the fashion industry, and the arts serve as illustrations of areas in which ample innovation based on AI occurs. Using these domains, various angles of innovation in AI can be presented and decrypted. AI innovation in higher education, for example, indicates that at some point, AI-powered robots might take over the role of human teachers. For the moment, however, AI in academia is solely used to support human beings, not to replace them. The apparel industry, specifically fast fashion (one of the planet's biggest polluters), shows how innovation in AI can help the sector move toward sustainability and eco-responsibility through, among other ways, improved forecasting, increased customer satisfaction, and more efficient supply chain management. An analysis of AI-driven novelty in the arts, notably in museums, shows that developing highly innovative, AI-based solutions might be a necessity for the survival of a strongly declining cultural sector. These examples all show the role AI already plays in these sectors and its likely importance in their respective futures. While AI applications imply many improvements for academia, the apparel industry, and the arts, it should come as no surprise that AI also has several drawbacks. Enforcing laws and regulations concerning AI is critical in order to avoid its adverse effects. Ethics and the ethical behavior of managers and leaders in various sectors and industries are likewise crucial. Education will play an additional significant role in helping AI positively influence economies and societies worldwide. Finally, international entente (i.e., the cooperation of the world's biggest economies and nations) must take place to ensure AI's benefit to humanity and civilization. Therefore, these challenges and areas (i.e., enforcement, ethics, education, and entente) can be summarized as the four summons of AI.

Article

High-Resolution Thunderstorm Modeling  

Leigh Orf

Since the dawn of the digital computing age in the mid-20th century, computers have been used as virtual laboratories for the study of atmospheric phenomena. The first simulations of thunderstorms captured only their gross features, yet required the most advanced computing hardware of the time. The following decades saw exponential growth in computational power that was, and continues to be, exploited by scientists seeking to answer fundamental questions about the internal workings of thunderstorms, the most devastating of which cause substantial loss of life and property throughout the world every year. By the mid-1970s, the most powerful computers available to scientists contained, for the first time, enough memory and computing power to represent the atmosphere containing a thunderstorm in three dimensions. Prior to this time, thunderstorms were represented primarily in two dimensions, which implicitly assumed an infinitely long cloud in the missing dimension. These earliest state-of-the-art, fully three-dimensional simulations revealed fundamental properties of thunderstorms, such as the structure of updrafts and downdrafts and the evolution of precipitation, while still only roughly approximating the flow of an actual storm due to computing limitations. In the decades that followed these pioneering three-dimensional thunderstorm simulations, new modeling approaches were developed that included more accurate ways of representing winds, temperature, pressure, friction, and the complex microphysical processes involving solid, liquid, and gaseous forms of water within the storm. Further, these models could be run at higher resolution than previous studies thanks to the steady growth of available computational resources described by Moore's law, the observation that computing power doubles roughly every two years. The resolution of thunderstorm models increased to the point where features on the order of a couple hundred meters could be resolved, allowing small but intense features such as downbursts and tornadoes to be simulated within the parent thunderstorm. As model resolution increased further, so did the amount of data produced by the models, which presented a significant challenge to scientists trying to compare their simulated thunderstorms to observed thunderstorms. Visualization and analysis software was developed and refined in tandem with improved modeling and computing hardware, allowing the simulated data to be brought to life and compared directly to observed storms. As of 2019, the highest-resolution simulations of violent thunderstorms are able to capture processes such as tornado formation and evolution, which are found to include the aggregation of many small, weak vortices with diameters of dozens of meters, features that simply cannot be simulated at lower resolution.
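A back-of-the-envelope sketch can make the resolution-versus-data-volume trade-off concrete (the domain size, variable count, and storage figures below are illustrative assumptions, not values from the article): refining the grid spacing by a factor of five in all three dimensions multiplies the number of grid points, and hence the output per snapshot, by 125.

```python
# Back-of-the-envelope scaling of a 3D thunderstorm simulation grid
# (domain size, grid spacings, variable count, and bytes per value are illustrative).
domain_km = (120.0, 120.0, 20.0)      # x, y, z extent of the model domain
variables = 10                         # e.g., winds, temperature, pressure, moisture
bytes_per_value = 4                    # single-precision float

for dx_m in (500.0, 100.0, 20.0):      # coarse to very high resolution
    nx = int(domain_km[0] * 1000 / dx_m)
    ny = int(domain_km[1] * 1000 / dx_m)
    nz = int(domain_km[2] * 1000 / dx_m)
    points = nx * ny * nz
    snapshot_gb = points * variables * bytes_per_value / 1e9
    print(f"dx = {dx_m:>5.0f} m: {points:>14,d} grid points, "
          f"~{snapshot_gb:,.1f} GB per output time")
```

In practice the model time step must also shrink along with the grid spacing, so total compute cost grows even faster than the per-snapshot storage shown here.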

Article

International Relations, Big Data, and Artificial Intelligence  

Ehud Udi Eiran

Scholars and practitioners of international relations (IR) are paying special attention to three significant ways in which artificial intelligence (AI) and big data (BD) are transforming IR, against a background of earlier debates among IR scholars about the effect of technology on the field. First, AI and BD have emerged as arenas of interstate, mostly great power, competition. In this context, scholars suggest, AI and BD are important because their effective use adds significantly to military and economic power. The current competition in these fields between the United States and the People's Republic of China has brought scholars to highlight at least four ways in which AI and BD are important: (a) automating decisions about the use of nuclear force could affect nuclear stability, although scholars still cannot agree in what direction; (b) the private sector plays a central role, as opposed to the Cold War era, when the state played the leading role in the development of technology; (c) the gap between the current two great powers in these technologies is narrow, in contrast to the significant gap in favor of the United States during the Cold War; and (d) the wave of new technologies, including AI, makes weapons systems cheaper and more available to smaller powers and political entities, thus offering a possible curb on the dominance of great powers. Second, AI and BD are expected to affect national decision-making in the areas of foreign and security policies. Here, scholars highlight three possible transformations: (a) AI will allow states a path to better decision-making on security and foreign policy matters through the optimization and speeding up of existing policy processes; (b) the technology will omit some of the human shortcomings in decision-making, further optimizing the policy process; and (c) AI will be able to offer predictions about the policies of other actors in the international system and create effective simulations to help manage crises. Finally, the inclusion of AI and BD in weapons systems, most notably the development of lethal autonomous weapons systems, brings the promise (or horror) of greater efficiency and lethality but also raises significant ethical questions. AI and BD are also affecting other arenas of interstate conflict, including the cyber domain and information warfare.

Article

Monitoring Migrants’ Health Risk Factors for Noncommunicable Diseases  

Stefano Campostrini

Noncommunicable diseases (NCDs) have become the leading cause of morbidity and mortality around the world. They have been targeted by most governments because they are associated with well-known risk factors and modifiable behaviors. Migrants, like any population subgroup, present peculiarities with regard to NCDs and, more relevantly, require specific information on associated risk factors so that policies and interventions can be appropriately targeted. The country of origin, the assimilation process, and many other aspects of migrant health well studied in the literature can be related to migrants' health risk factors. In most countries, existing sources of information are not sufficient or should be revised, and new sources of data should be found. Existing survey systems can face organizational difficulties in changing their questionnaires; moreover, the number of changes to the adopted questionnaire should be limited for the sake of brevity and to avoid excessive burden on respondents. Nevertheless, a limited number of additional variables can offer a great deal of information on migrant health. Migrant status, country of origin, and time of arrival should be included in any survey concerned with migrant health. These, along with information on other social determinants of health and access to health services, can offer fundamental information to better understand migrants' health and its evolution as they live in their host countries. Migrants are often characterized by better health status than the native population, an advantage that is typically lost over the years. Public health and health promotion could play a relevant role in changing this evolution for the better, but such action must be supported by timely and reliable information.

Article

Applications of Web Scraping in Economics and Finance  

Piotr Śpiewanowski, Oleksandr Talavera, and Linh Vi

The 21st-century economy is increasingly built around data. Firms and individuals upload and store enormous amounts of data. Most of the data produced are stored on private servers, but a considerable part is made publicly available across the 1.83 billion websites available online. These data can be accessed by researchers using web-scraping techniques. Web scraping refers to the process of collecting data from web pages either manually or using automation tools or specialized software. Web scraping is possible and relatively simple thanks to the regular structure of the code used for websites designed to be displayed in web browsers. Websites built with HTML can be scraped using standard text-mining tools, either scripts in popular (statistical) programming languages such as Python, Stata, or R, or stand-alone dedicated web-scraping tools. Some of these tools do not even require any prior programming skills. Since about 2010, with the omnipresence of social and economic activities on the Internet, web scraping has become increasingly popular among academic researchers. In contrast to proprietary data, which may be out of reach because of substantial costs, web scraping can make interesting data sources accessible to everyone. Thanks to web scraping, data are now available in real time and with significantly more detail than what has traditionally been offered by statistical offices or commercial data vendors. In fact, many statistical offices have started using web-scraped data, for example, for calculating price indices. Data collected through web scraping have been used in numerous economics and finance projects and can easily complement traditional data sources.
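A minimal sketch of such a script is shown below (the URL and CSS selectors are placeholders for a hypothetical product-listing page; a real scraper should also respect the site's terms of use and robots.txt):

```python
# Minimal web-scraping sketch (the URL and CSS selectors are placeholders).
# Requires: pip install requests beautifulsoup4
import requests
from bs4 import BeautifulSoup

url = "https://example.com/products"          # placeholder page to scrape
response = requests.get(url, timeout=30)
response.raise_for_status()

soup = BeautifulSoup(response.text, "html.parser")
rows = []
for item in soup.select("div.product"):       # hypothetical markup
    name = item.select_one("span.name")
    price = item.select_one("span.price")
    if name and price:
        rows.append((name.get_text(strip=True), price.get_text(strip=True)))

print(rows)  # e.g., feed into a price-index calculation
```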

Article

Dynamics of Decision Making  

JoAnn Danelo Barbour

The past and the future influence the present, for decision makers are persuaded by historical patterns and styles of decision making based on social, political, and economic context, with an eye to planning, predicting, and forecasting, in a sense, "futuring." From the middle to the late 20th century, four models (rational-bureaucratic, participatory, political, and organized anarchy) embodied ways of decision making that provide a historical grounding for decision makers in the first quarter of the 21st century. From the late 20th century through the first two decades of the 21st century, decision makers have focused on ethical decision making, social justice, and decision making within communities. After the first two decades of the 21st century, decision making and its associated research are about holding tensions, crossing boundaries, and intersections. Decision makers will continually hold the tension between intuition and evidence as drivers of decisions. Promising research possibilities may include understanding metacognition and its role in decision making, the relationship between individual approaches to decision making and group dynamics, stakeholders' engagement in communicating and executing decisions, and studying the control of who has what information or who should have it. Furthermore, decision making will most likely continue to evolve toward an adaptive approach with an abundance of tools and techniques to improve both the praxis and the practice of decision-making dynamics. Accordingly, trends in future research on decision making will span disciplines and emphases, encompassing transdisciplinary approaches wherein investigators work collaboratively to understand and possibly create new conceptual, theoretical, and methodological models or ways of thinking about and making decisions.

Article

The Anthropology of Policy  

Noémi Lendvai-Bainton and Paul Stubbs

The anthropology of policy as a field emerged in the 1990s in recognition of the need to understand and critically interrogate policies as important sites of classification, disciplining, and production of order and change. The anthropology of policy has developed as a critical strand challenging mainstream policy studies, public administration, and political science by insisting that the work of policy is always political. Policy worlds are seen as inextricably linked to power relations just as much as politics itself; indeed, the border between policy and politics is highly permeable. A wealth of literature that has been produced in the early 21st century has highlighted the complexities of the spatiotemporal dynamics of the deeply fragmented, unruly worlds of policy. A linear, stagist, and one-dimensional understanding of policy time fails to take account of the multiple, uneven, and contradictory temporal claims of policy. An emphasis on policy performance and affect has also highlighted the ways in which policies are always unfinished as they are mediated and translated, refused, inhabited, and reworked by those they summon. In the context of heightened policy mobility and movement, the importance of the idea of policy assemblages has emerged. Assemblages, animated by actors and actants, are always a heterogeneous combination of discourses and practices existing through unstable and contingent spatiotemporal orderings. Spaces of solidarity and fragility, policy assemblages are key sites for the making and unmaking of both hierarchies and possibilities. A critical tradition of the anthropology of policy needs to be built upon in order to offer a contribution to a broader decolonial turn. There is a need to deconstruct colonial assumptions, emphasize the relevance of colonial legacies, and develop decolonial approaches to understanding the policy world much more than has been the case thus far. In addition, there are questions not only concerning the “what” but also the “who” of an anthropology of policy. Activist anthropology plays an important role in terms of antiracism, counterhegemonic world-making, and policy otherwise, with new imaginaries and possibilities going beyond the general academic critique of a neoliberal, postneoliberal, and postdemocratic world. The challenges of big data, technological change, the crisis of democracy, and new forms of authoritarianism and angry politics all highlight the continued importance of anthropological approaches to policy.

Article

Asset Pricing: Cross-Section Predictability  

Paolo Zaffaroni and Guofu Zhou

A fundamental question in finance is why different assets have different expected returns, which is intricately linked to the issue of cross-section prediction in the sense of addressing the question "What explains the cross section of expected returns?" There is a vast literature on this topic. There are state-of-the-art methods used to forecast the cross section of stock returns with firm characteristics as predictors, and the same methods can be applied to other asset classes, such as corporate bonds and foreign exchange rates, and to managed portfolios such as mutual and hedge funds. First, there are the traditional ordinary least squares and weighted least squares methods, as well as various recently developed machine learning approaches such as neural networks and genetic programming. These are the main methods used today in applications. There are three measures that assess how the various methods perform. The first is the Sharpe ratio of a long–short portfolio that goes long the assets with the highest predicted returns and shorts those with the lowest. This measure captures the economic value of one method versus another. The second measure is an out-of-sample R² that evaluates how the forecasts perform relative to a natural benchmark, the cross-sectional mean. This is important because any method that fails to outperform the benchmark is questionable. The third measure is how well the predicted returns explain the realized ones. This provides an overall error assessment across all the stocks. Factor models are another tool used to understand cross-section predictability. They shed light on whether the predictability is due to mispricing or risk exposure. There are three ways to consider these models: First, we can consider how to test traditional factor models and estimate the associated risk premia, where the factors are specified ex ante. Second, we can analyze similar problems for latent factor models. Finally, going beyond the traditional setup, we can consider recent studies on asset-specific risks. This analysis provides the framework to understand the economic driving forces of predictability.
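The first two evaluation measures can be illustrated with a compact sketch (the returns and forecasts below are randomly generated stand-ins for real firm-level data, so the resulting numbers are meaningless in themselves):

```python
# Sketch of two cross-section evaluation measures (synthetic data only):
# (1) out-of-sample R^2 against the cross-sectional mean benchmark,
# (2) Sharpe ratio of a long-short portfolio sorted on predicted returns.
import numpy as np

rng = np.random.default_rng(0)
n_months, n_stocks = 120, 500
realized = rng.normal(0.01, 0.08, size=(n_months, n_stocks))            # monthly returns
predicted = 0.2 * realized + rng.normal(0, 0.02, size=realized.shape)   # noisy forecasts

# (1) Out-of-sample R^2: the benchmark forecast is each month's cross-sectional mean.
benchmark = realized.mean(axis=1, keepdims=True)
r2_oos = 1 - np.sum((realized - predicted) ** 2) / np.sum((realized - benchmark) ** 2)

# (2) Long-short portfolio: long the top decile, short the bottom decile by prediction.
decile = n_stocks // 10
long_short = []
for t in range(n_months):
    order = np.argsort(predicted[t])
    long_short.append(realized[t, order[-decile:]].mean()
                      - realized[t, order[:decile]].mean())
long_short = np.array(long_short)
sharpe = np.sqrt(12) * long_short.mean() / long_short.std()

print(f"OOS R^2: {r2_oos:.3f}   Annualized long-short Sharpe: {sharpe:.2f}")
```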

Article

Decision Support Systems  

Sean B. Eom

A decision support system (DSS) is an interactive human–computer decision-making system that supports decision makers rather than replacing them, utilizing data and models. It solves unstructured and semistructured problems with a focus on effectiveness rather than efficiency in decision processes. In the early 1970s, scholars in this field began to recognize the important roles that DSS play in supporting managers in their semistructured or unstructured decision-making activities. Over the past five decades, DSS has made progress toward becoming a solid academic field. Nevertheless, since the mid-1990s, the inability of DSS to fully satisfy the wide range of information needs of practitioners provided an impetus for a new breed of DSS, business intelligence systems (BIS). The academic discipline of DSS has undergone numerous changes in technological environments, including the adoption of data warehouses. Until the late 1990s, most textbooks referred to "decision support systems"; nowadays, many of them have replaced "decision support systems" with "business intelligence." While DSS/BIS began in academia and were quickly adopted in business, in recent years these tools have moved into government and the academic field of public administration. In addition, modern political campaigns, especially at the national level, are increasingly based on data analytics and the use of big data. The first section of this article reviews the development of DSS as an academic discipline. The second section discusses BIS and their components (the data warehousing environment and the analytical environment). The final section introduces two emerging topics in DSS/BIS: big data analytics and cloud computing analytics. Before the era of big data, most data collected by business organizations could easily be managed by traditional relational database management systems with serial processing. Social networks, e-business networks, the Internet of Things (IoT), and many other wireless sensor networks are generating huge volumes of data every day. The challenge of big data has demanded a new business intelligence infrastructure with new tools (Hadoop clusters, the data warehousing environment, and the business analytics environment).