1–17 of 17 Results

Keywords: Big Data

Article

Janet Chan

Internet and telecommunications, ubiquitous sensing devices, and advances in data storage and analytic capacities have heralded the age of Big Data, where the volume, velocity, and variety of data not only promise new opportunities for the harvesting of information, but also threaten to overload existing resources for making sense of this information. The use of Big Data technology for criminal justice and crime control is a relatively new development. Big Data technology has overlapped with criminology in two main areas: (a) Big Data is used as a type of data in criminological research, and (b) Big Data analytics is employed as a predictive tool to guide criminal justice decisions and strategies. Much of the debate about Big Data in criminology is concerned with legitimacy, including privacy, accountability, transparency, and fairness. Big Data is often made accessible through data visualization. Big Data visualization is a performance that simultaneously masks the power of commercial and governmental surveillance and renders information political. The production of visuality operates in an economy of attention. In crime control enterprises, future uncertainties can be masked by affective triggers that create an atmosphere of risk and suspicion. There have also been efforts to mobilize data to expose harms and injustices and garner support for resistance. While Big Data and visuality can perform affective modulation in the race for attention, the impact of data visualization is not always predictable. By removing the visibility of real people or events and by aestheticizing representations of tragedies, data visualization may achieve further distancing and deadening of conscience in situations where graphic photographic images might at least garner initial emotional impact.

Article

Oscar E. Cariceo, Murali Nair, and Wahaj Bokhari

Predictive analytics is a set of techniques and an advanced methodological and research approach that seeks to reach conclusions about the future, rather than explanations of specific issues or phenomena. The fast growth in popularity and application of data science in different businesses and activities, such as human services and nonprofit management, is related to the emergence and consolidation of big data. In terms of digital networking, the amount of data produced by individuals every day is enormous. Tools and techniques such as machine learning, deep learning, visualization, time series analysis, network analysis, natural language processing, and text mining may help support evidence-based practice for social workers. Predictive analytics and big data offer an opportunity to enhance innovative social change and people’s well-being.

Article

Ciara Heavin and Frederic Adam

Since the 1960s, information technology (IT)/information systems (IS) professionals, data practitioners, and senior managers have focused on developing decision support capabilities to enhance organizational decision making. Initially, this quest was mostly driven by successive generations of technological advances. However, in the last decade, the pace at which large volumes of diverse data can be collected and processed, new algorithmic advances, and the development of computational infrastructure such as graphics processing units (GPUs) and tensor processing units (TPUs) have created new opportunities for global businesses in areas such as financial services, manufacturing, retail, sports, and healthcare. At this point, it seems that most industries and public services could potentially be revolutionized by these new techniques. The word analytics has replaced the previous individual components of computerized decision support technologies that have been developed under various labels in the past. Much of the traditional researcher and practitioner communities who were concerned with decision support, decision support systems (DSSs), and business intelligence (BI) have reoriented their attention to innovative tools and technologies to derive value from new data streams through artificial intelligence (AI) and analytics. Identifying the main areas of focus for decision support and analytics provides a stimulus for new ideas for researchers, managers, and IS/IT and data professionals. These stakeholders need to undertake new empirical studies that explain how analytics can be used to develop and enhance new forms of decision support while considering the dilemmas that may arise due to the data capture and analysis of new digital data streams.

Article

Kevin Arceneaux and Martin Johnson

Since the mid-20th century, communication researchers have recognized that audience members selectively expose themselves to information and opinions congenial to their pre-existing views. While this was a controversial idea during the broadcast era of mass media, the expansion of media choice on television and the use of information communication technology have brought increased attention to selectivity among audience members. Contemporary scholarship investigates the extent to which people select proattitudinal information or avoid counterattitudinal information and the role these choices play in the effects of media messages on viewers. While selective exposure is a broader phenomenon, this article substantively focuses on the use of politically partisan media, especially the research methods used to investigate media selectivity and its effects. This literature manifests an increased attention to measurement, especially how we measure the core concept of media exposure, novel experimental designs intended to allow investigators to directly view individual choice behavior in complex media environments, and attention to new sources of large-scale data from social media and large text samples. Scholars agree that partisan websites and cable networks provide content politically distinct enough to allow viewers to segregate themselves into liberal and conservative audiences for news but that this kind of polarized viewing is only part of how viewers use media today. A nuanced picture of selectivity shows audiences selecting congenial content but employing broader media use repertoires as well. The mechanisms and effects of media selectivity are psychologically complex and sensitive to contextual factors such as the political issue under consideration.

Article

Martin Obschonka and Christian Fisch

Advances in Artificial Intelligence (AI) are intensively shaping businesses and the economy as a whole, and AI-related research is exploding in many domains of business and management research. In contrast, AI has received relatively little attention within the domain of entrepreneurship research, even though many entrepreneurship scholars agree that AI will likely shape entrepreneurship research in deep, disruptive ways. When summarizing both the existing entrepreneurship literature on AI and potential avenues for future research, the growing relevance of AI for entrepreneurship research manifests itself along two dimensions. First, AI applications in the real world establish a distinct research topic (e.g., whether and how entrepreneurs and entrepreneurial ventures use and develop AI-based technologies, or how AI can function as an external enabler that generates and enhances entrepreneurial outcomes). In other words, AI is changing the research object in entrepreneurship research. The second dimension refers to drawing on AI-based research methods, such as big data techniques or AI-based forecasting methods. Such AI-based methods open several avenues for researchers to gain new, influential insights into entrepreneurs and entrepreneurial ventures that are more difficult to assess using traditional methods. In other words, AI is changing the research methods. Given that human intelligence has, so far, not fully uncovered and comprehended the secrets behind the entrepreneurial process that is so deeply embedded in uncertainty and opportunity, AI-supported research methods might achieve new breakthrough discoveries. We conclude that the field needs to embrace AI as a topic and research method more enthusiastically while maintaining the essential research standards and scientific rigor that guarantee the field’s well-being, reputation, and impact.

Article

In recent years, a variety of novel digital data sources, colloquially referred to as “big data,” have taken the popular imagination by storm. These data sources include, but are not limited to, digitized administrative records, activity on and contents of social media and internet platforms, and readings from sensors that track physical and environmental conditions. Some have argued that such data sets have the potential to transform our understanding of human behavior and society, constituting a meta-field known as computational social science. Criminology and criminal justice are no exception to this excitement. Although researchers in these areas have long used administrative records, in recent years they have increasingly looked to the most recent versions of these data, as well as other novel resources, to pursue new questions and tools.

Article

Bureaucracies and their processing of information have evolved along with the formation of states, from absolutist to welfare state and beyond. Digitalization has both reflected and expedited these changes, but it is important to keep in mind that digital-era governance is also conditioned by existing information resources as well as institutional practices and administrative culture. To understand the digital transformations of states, one needs to engage in contextual analysis of the actual changes, which may reveal even paradoxical and unintended effects. Initially, studies on the effects of information systems on bureaucracies focused on single organizations. But the focus has since shifted toward digitally enhanced interaction with society in terms of service provision, responsiveness, participatory governance, and deliberation, as well as economic exploitation of public data. Indeed, the history of digitalization in bureaucracies also reads as an account of their opening. But there are also contradictory developments concerning the use of big data, learning systems, and digital surveillance technologies that have created new confidential or secretive domains of information processing in bureaucracies. Another pressing topic is the automation of decision making, which can range from rules-based decisions to learning systems. This has created new demands for control, both in terms of citizen information rights and accountability systems. While one should be cautious about claims of revolutionary changes, the increasing tempo and interconnectedness characterizing the digitalization of bureaucratic activities pose major challenges for public accountability. The historical roots of state information are important in understanding changes in information processing in public administration through digitalization, highlighting the transformations of states and new stakeholders and forms of collaboration, as well as the emerging questions of accountability. But instead of readily assuming structural changes, one should engage in contextualized analysis of the actual effects of digitalization to fully understand them.

Article

The presence of large-scale data systems can be felt, consciously or not, in almost every facet of modern life, whether through the simple act of selecting travel options online, purchasing products from online retailers, or navigating through the streets of an unfamiliar neighborhood using global positioning system (GPS) mapping. These systems operate through the momentum of big data, a term introduced by data scientists to describe a data-rich environment enabled by a superconvergence of advanced computer-processing speeds and storage capacities; advanced connectivity between people and devices through the Internet; the ubiquity of smart, mobile devices and wireless sensors; and the creation of accelerated data flows among systems in the global economy. Some researchers have suggested that big data represents the so-called fourth paradigm in science, wherein the first paradigm was marked by the evolution of the experimental method, the second was brought about by the maturation of theory, and the third was marked by an evolution of statistical methodology as enabled by computational technology, while the fourth extends the benefits of the first three and also enables the application of novel machine-learning approaches to an evidence stream of high volume, high velocity, high variety, and differing levels of veracity. In public health and medicine, the emergence of big data capabilities has followed naturally from the expansion of data streams from genome sequencing, protein identification, environmental surveillance, and passive patient sensing. In 2001, the National Committee on Vital and Health Statistics published a road map for connecting these evidence streams to each other through a national health information infrastructure. Since then, the road map has spurred national investments in electronic health records (EHRs) and motivated the integration of public surveillance data into analytic platforms for health situational awareness. More recently, the boom in consumer-oriented mobile applications and wireless medical sensing devices has opened up the possibility of mining new data flows directly from altruistic patients. In the broader public communication sphere, the ability to mine the digital traces of conversation on social media presents an opportunity to apply advanced machine learning algorithms as a way of tracking the diffusion of risk communication messages. In addition to utilizing big data for improving the scientific knowledge base in risk communication, there will be a need for health communication scientists and practitioners to work as part of interdisciplinary teams to improve the interfaces to these data for professionals and the public. Too much data, presented in disorganized ways, can lead to what some have referred to as “data smog.” Much work will be needed to understand how to turn big data into knowledge and, just as important, how to turn data-informed knowledge into action.

Article

Since the dawn of the digital computing age in the mid-20th century, computers have been used as virtual laboratories for the study of atmospheric phenomena. The first simulations of thunderstorms captured only their gross features, yet required the most advanced computing hardware of the time. The following decades saw exponential growth in computational power that was, and continues to be, exploited by scientists seeking to answer fundamental questions about the internal workings of thunderstorms, the most devastating of which cause substantial loss of life and property throughout the world every year. By the mid-1970s, the most powerful computers available to scientists contained, for the first time, enough memory and computing power to represent the atmosphere containing a thunderstorm in three dimensions. Prior to this time, thunderstorms were represented primarily in two dimensions, which implicitly assumed an infinitely long cloud in the missing dimension. These earliest state-of-the-art, fully three-dimensional simulations revealed fundamental properties of thunderstorms, such as the structure of updrafts and downdrafts and the evolution of precipitation, while still only roughly approximating the flow of an actual storm due to computing limitations. In the decades that followed these pioneering three-dimensional thunderstorm simulations, new modeling approaches were developed that included more accurate ways of representing winds, temperature, pressure, friction, and the complex microphysical processes involving solid, liquid, and gaseous forms of water within the storm. Further, these models also were able to be run at a resolution higher than that of previous studies due to the steady growth of available computational resources described by Moore’s law, which observed that computing power doubled roughly every two years. The resolution of thunderstorm models was increased to the point where features on the order of a couple hundred meters could be resolved, allowing small but intense features such as downbursts and tornadoes to be simulated within the parent thunderstorm. As model resolution increased further, so did the amount of data produced by the models, which presented a significant challenge to scientists trying to compare their simulated thunderstorms to observed thunderstorms. Visualization and analysis software was developed and refined in tandem with improved modeling and computing hardware, bringing the simulated data to life and allowing direct comparison to observed storms. In 2019, the highest resolution simulations of violent thunderstorms are able to capture processes such as tornado formation and evolution, which are found to include the aggregation of many small, weak vortices with diameters of dozens of meters, features which simply cannot be simulated at lower resolution.

Article

JoAnn Danelo Barbour

The past and the future influence the present, for decision makers are persuaded by historical patterns and styles of decision making based on social, political, and economic context, with an eye to planning, predicting, forecasting, in a sense, “futuring.” From the middle to the late 20th century, four models (rational-bureaucratic, participatory, political, and organized anarchy) embody ways of decision making that provide an historical grounding for decision makers in the first quarter of the 21st century. From the late 20th through the first two decades of the 21st century, decision makers have focused on ethical decision making, social justice, and decision making within communities. After the first two decades of the 21st century, decision making and its associated research is about holding tensions, crossing boundaries, and intersections. Decision makers will continually hold the tension between intuition and evidence as drivers of decisions. Promising research possibilities may include metacognition and its role in decision making, the relationship between individual approaches to decision making and the group dynamic, stakeholders’ engagement in communicating and executing decisions, and the control of who has what information or who should have it. Furthermore, decision making most likely will continue to evolve towards an adaptive approach with an abundance of tools and techniques to improve both the praxis and the practice of decision making dynamics. Accordingly, trends in future research in decision making will span disciplines and emphases, encompassing transdisciplinary approaches wherein investigators work collaboratively to understand and possibly create new conceptual, theoretical, and methodological models or ways of thinking about and making decisions.

Article

Noncommunicable diseases (NCDs) have become the leading cause of morbidity and mortality around the world. These have been targeted by most governments because they are associated with well-known risk factors and modifiable behaviors. Migrants, like any population subgroup, present peculiarities with regard to NCDs and, more relevantly, require specific information on associated risk factors to appropriately target policies and interventions. The country of origin, assimilation process, and many other migrant health aspects well studied in the literature can be related to migrants’ health risk factors. In most countries, existing sources of information are not sufficient or should be revised, and new sources of data should be found. Existing survey systems can meet organizational difficulties in changing their questionnaires; moreover, the number of changes in the adopted questionnaire should be limited for the sake of brevity to avoid excessive burden on respondents. Nevertheless, a limited number of additional variables can offer a lot of information on migrant health. Migrant status, country of origin, and time of arrival should be included in any survey concerned with migrant health. These, along with information on other Social Determinants of Health and access to health services, can offer fundamental information to better understand migrants’ health and its evolution as they live in their host countries. Migrants are often characterized by a better health status than the native population, an advantage that is typically lost over the years. Public health and health promotion could have a relevant role in modifying, for the better, this evolution, but this action must be supported by timely and reliable information.

Article

Communication research has recently had an influx of groundbreaking findings based on big data. Examples include not only analyses of Twitter, Wikipedia, and Facebook, but also of search engine and smartphone uses. These can be put together under the label “digital media.” This article reviews some of the main findings of this research, emphasizing how big data findings contribute to existing theories and findings in communication research, contributions that have so far been lacking. To do this, an analytical framework will be developed concerning the sources of digital data and how they relate to the pertinent media. This framework shows how data sources support making statements about the relation between digital media and social change. It is also possible to distinguish between a number of subfields that big data studies contribute to, including political communication, social network analysis, and mobile communication. One of the major challenges is that most of this research does not fall into the two main traditions in the study of communication, mass and interpersonal communication. This is readily apparent for media like Twitter and Facebook, where messages are often distributed in groups rather than broadcast or shared between only two people. This challenge also applies, for example, to the use of search engines, where the technology can tailor results to particular users or groups (this has been labeled the “filter bubble” effect). The framework is used to locate and integrate big data findings in the landscape of communication research, and thus to provide a guide to this emerging area.

Article

Piotr Śpiewanowski, Oleksandr Talavera, and Linh Vi

The 21st-century economy is increasingly built around data. Firms and individuals upload and store enormous amounts of data. Most of the produced data is stored on private servers, but a considerable part is made publicly available across the 1.83 billion websites available online. These data can be accessed by researchers using web-scraping techniques. Web scraping refers to the process of collecting data from web pages either manually or using automation tools or specialized software. Web scraping is possible and relatively simple thanks to the regular structure of the code used for websites designed to be displayed in web browsers. Websites built with HTML can be scraped using standard text-mining tools, either with scripts in popular (statistical) programming languages such as Python, Stata, or R, or with stand-alone dedicated web-scraping tools. Some of those tools do not even require any prior programming skills. Since about 2010, with the omnipresence of social and economic activities on the Internet, web scraping has become increasingly popular among academic researchers. In contrast to proprietary data, which may be out of reach due to substantial costs, web scraping can make interesting data sources accessible to everyone. Thanks to web scraping, the data are now available in real time and with significantly more details than what has been traditionally offered by statistical offices or commercial data vendors. In fact, many statistical offices have started using web-scraped data, for example, for calculating price indices. Data collected through web scraping has been used in numerous economic and finance projects and can easily complement traditional data sources.
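The abstract above notes that HTML's regular structure is what makes scraping with standard text-mining tools relatively simple. A minimal sketch of that parsing step, using only Python's standard library, is shown below; the sample HTML and its class names (`name`, `price`) are hypothetical, standing in for a page already downloaded (e.g., with `urllib.request`).

```python
from html.parser import HTMLParser

# Hypothetical snippet of a downloaded product-listing page.
SAMPLE_HTML = """
<html><body>
  <div class="item"><span class="name">Widget A</span><span class="price">9.99</span></div>
  <div class="item"><span class="name">Widget B</span><span class="price">14.50</span></div>
</body></html>
"""

class PriceParser(HTMLParser):
    """Collect (name, price) pairs from <span class="name"> / <span class="price"> tags."""

    def __init__(self):
        super().__init__()
        self._field = None   # which field the next text chunk belongs to
        self._current = {}   # fields gathered for the row in progress
        self.rows = []       # completed (name, price) tuples

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class")
        if tag == "span" and cls in ("name", "price"):
            self._field = cls

    def handle_data(self, data):
        if self._field:
            self._current[self._field] = data.strip()
            self._field = None
            # Once both fields are present, emit a row and reset.
            if "name" in self._current and "price" in self._current:
                self.rows.append((self._current["name"],
                                  float(self._current["price"])))
                self._current = {}

parser = PriceParser()
parser.feed(SAMPLE_HTML)
print(parser.rows)  # [('Widget A', 9.99), ('Widget B', 14.5)]
```

Real pages are messier, so scrapers in practice often layer dedicated parsing libraries on top of this idea, but the principle is the same: the page's repeated markup pattern is what turns free-form web content into tabular data.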

Article

Since the 2010s, auto/biography studies have engaged in productive explorations of its intersections with theories of posthumanism. In unsettling concepts of the human, the agential speaking subject seen as central to autobiographical acts, posthumanism challenges core concerns of auto/biography (and humanism), including identity, agency, ethics, and relationality, and traditional expectations of auto/biographical narrative as focused on a (human) life, often singular and exceptional, chronicling a narrative of progress over time—the figure and product of the liberal humanist subject that posthumanism and autobiography studies have both critiqued. In its place, the posthuman autobiographical subject holds distributed, relativized agency as a member of a network through which it is co-constituted, a network that includes humans and non-humans in unhierarchized relations. Posthuman theories of autobiography examine how such webs of relation might shift understanding of the production and reception of an autobiographer and text. In digital posthuman autobiography, the auto/biographer is working in multimodal ways, across platforms, shaping and shaped by the affordances of these sites, continually in the process of becoming through dynamic engagement and interaction with the rest of the network. The human-machinic interface of such digital texts and spaces illustrates the rethinking required to account for the relational, networked subjectivity and texts that are evolving within digital platforms and practices. 
The role of algorithms and datafication—the process through which experiences, knowledge, and lives are turned into data—as corporate, non-consensual co-authors of online auto/biographical texts particularly raises questions about the limits and agency of the human and the auto/biographical, with software not only coaxing, coercing, and coaching certain kinds of self-representation, but also, through the aggregating process of big data, creating its own versions of subjects for its own purposes. Data portraits, data mining, and data doubles are representations based on auto/biographical source texts, but not ones the original subject or their communities have imagined for themselves. However, the affordances and collaborations created by participation in the digital web also foster a networked agency through which individuals-in-relation can testify to and document experience in collective ways, working within and beyond the norms imagined by the corporate and machinic. The potential for posthuman testimony and the proliferation of autobiographical moments or “small data” suggest the potential of digital autobiographical practices to articulate what it means to be a human-in-relation, to be alive in a network.

Article

Sean B. Eom

A decision support system is an interactive human–computer decision-making system that supports decision makers rather than replaces them, utilizing data and models. It solves unstructured and semistructured problems with a focus on effectiveness rather than efficiency in decision processes. In the early 1970s, scholars in this field began to recognize the important roles that decision support systems (DSS) play in supporting managers in their semistructured or unstructured decision-making activities. Over the past five decades, DSS has made progress toward becoming a solid academic field. Nevertheless, since the mid-1990s, the inability of DSS to fully satisfy a wide range of information needs of practitioners provided an impetus for a new breed of DSS, business intelligence systems (BIS). The academic discipline of DSS has undergone numerous changes in technological environments, including the adoption of data warehouses. Until the late 1990s, most textbooks referred to “decision support systems.” Nowadays, many of them have replaced “decision support systems” with “business intelligence.” While DSS/BIS began in academia and were quickly adopted in business, in recent years these tools have moved into government and the academic field of public administration. In addition, modern political campaigns, especially at the national level, are based on data analytics and the use of big data analytics. The first section of this article reviews the development of DSS as an academic discipline. The second section discusses BIS and their components (the data warehousing environment and the analytical environment). The final section introduces two emerging topics in DSS/BIS: big data analytics and cloud computing analytics. Before the era of big data, most data collected by business organizations could easily be managed by traditional relational database management systems with a serial processing system. Social networks, e-business networks, the Internet of Things (IoT), and many other wireless sensor networks are generating huge volumes of data every day. The challenge of big data has demanded a new business intelligence infrastructure with new tools (the Hadoop cluster, the data warehousing environment, and the business analytical environment).

Article

David Bawden and Lyn Robinson

For almost as long as there has been recorded information, there has been a perception that humanity has been overloaded by it. Concerns about “too much to read” have been expressed for many centuries, and made more urgent since the arrival of ubiquitous digital information in the late 20th century. The historical perspective is a necessary corrective to the often, and wrongly, held view that overload is associated solely with the modern digital information environment and with social media in particular. However, as society fully experiences Floridi’s Fourth Revolution, and moves into hyper-history (with society dependent on, and defined by, information and communication technologies) and the infosphere (an information environment distinguished by a seamless blend of online and offline information activity), individuals and societies are dependent on and formed by information in an unprecedented way, and information overload needs to be taken more seriously than ever. Overload has been claimed to be both the major issue of our time and a complete nonissue. It has been cited as an important factor in a wide range of areas, from politics and governance to business and literature. The information overload phenomenon has been known by many different names, including: information overabundance, infobesity, infoglut, data smog, information pollution, information fatigue, social media fatigue, social media overload, information anxiety, library anxiety, infostress, infoxication, reading overload, communication overload, cognitive overload, information violence, and information assault. There is no single generally accepted definition, but it can best be understood as the situation that arises when there is so much relevant and potentially useful information available that it becomes a hindrance rather than a help. Its essential nature has not changed with evolving technology, although its causes and proposed solutions have changed significantly. The best ways of avoiding overload, individually and socially, appear to lie in a variety of coping strategies, such as filtering, withdrawing, queuing, and “satisficing.” Better design of information systems, effective personal information management, and the promotion of digital and media literacies also have a part to play. Overload may perhaps best be overcome by seeking a mindful balance in consuming information and in finding understanding.

Article

Political economy of the media includes several domains including journalism, broadcasting, advertising, and information and communication technology. A political economy approach analyzes the power relationships between politics, mediation, and economics. First, there is a need to identify the intellectual history of the field, focusing on the establishment and growth of the political economy of media as an academic field. Second is the discussion of the epistemology of the field by emphasizing several major characteristics that differentiate it from other approaches within media and communication research. Third, there needs to be an understanding of the regulations affecting information and communication technologies (ICTs) and/or the digital media-driven communication environment, especially charting the beginnings of political economy studies of media within the culture industry. In particular, it is important to examine the ways political economists develop and use political economy in digital media and the new media milieu driven by platform technologies, across three new areas: digital platforms, big data, and digital labor. These areas are crucial for analysis not only because they are intricately connected, but also because they have become massive, major parts of modern capitalism.