1-20 of 60 Results for Keywords: risk

Article

Changing Disaster Vulnerability and Capability in Aging Populations  

Jennifer Whytlaw and Nicole S. Hutton

The aging population, also referred to as elderly or seniors, represents a demographic of growing significance for disaster management. The population pyramid, an important indicator of population growth, stability, and decline, has shifted globally from the typical pyramid shape into more of a dome shape. While these demographic shifts in age structure are unique to individual countries, adjustments in disaster management are needed to reduce the risk of aging populations increasingly affected by hazards. Risk is especially evident when considering where aging populations live, as proximity to environmental hazards such as flooding, tropical storm surge, fires, and extreme weather resulting in heat and cold increases their risk. Aging populations may live alone or together in retirement communities and senior living facilities, where the respective isolation or high density of older adults presents specific risks. There is particular concern in areas with high economic productivity, often described as post-industrial, where the population increasingly consists of older adults and there are fewer younger people to meet the labor needs of the market and, more specifically, to support and engage aging populations. This disparity becomes even more prominent in specific sectors such as healthcare, including senior living assistance. In developing economies, the young are increasingly leaving traditionally intergenerational households to seek greater economic opportunities in cities, leaving many seniors on their own. Thus, risk reduction strategies must be conscious of the needs and contributions of seniors as well as the capacity of the workforce to implement them. The integration of aging populations within disaster management through accommodation and consultation varies across the globe. Provision of services for, and personal agency among, senior populations can mitigate vulnerabilities associated with age, as well as compounding factors such as medical fragility, societal interaction, and income. Experience, mobility, and socioeconomic capabilities affect the decision making and outcomes of aging populations in hazardous settings. Therefore, the means of involvement in disaster planning should be adapted to accommodate the sociocultural, economic, and environmental realities of aging populations.

Article

Fear Arousal and Health and Risk Messaging  

Jessica Gall Myrick and Robin L. Nabi

Fear is a negatively valenced discrete emotional state that is an inherent part of the human experience. With strong evolutionary roots, fear serves important functions, including alerting people to present threats and motivating action to avoid future threats. As such, fear is an emotion that frequently attracts the attention of scholars and message designers who hope to persuade audiences to change their behavior in light of potential threats to well-being and public safety. Several theories have aimed to describe the effects of fear-based appeals on audiences, focusing largely on the cognitive correlates of fear (i.e., severity and susceptibility) and their subsequent impacts on persuasive outcomes. However, more recent theorizing has returned to a focus on the influence that the emotion of fear itself has on attitude and behavior change. Given that many health-oriented fear appeals have been shown to evoke multiple emotions, including anger, disgust, and sadness, current theorizing has taken a mixed-emotions or emotional flow perspective to provide a deeper understanding of fear appeal effects. Further, individual differences have been considered to determine who is most likely to experience fear during and after message consumption. In addition to fear appeals that purposefully aim to scare audiences to motivate attitude and behavior change, recent work suggests that fear can be generated by other forms of messages (e.g., news accounts, social media posts, interpersonal conversations) that may influence receivers’ approaches to health issues. Moreover, research also suggests that fear may motivate social sharing of messages, which can in turn allow for more widespread influence of fear-based messages.

Article

Motivated Information Management and Other Approaches to Information Seeking  

Walid A. Afifi

The turn of the 21st century has seen an explosion of frameworks that account for individuals’ decisions to seek or avoid information related to health risks. The four dominant frameworks are the Risk Perception Attitude Framework, the Risk Information Seeking and Processing model, the Planned Risk Information Seeking Model, and the Theory of Motivated Information Management. A comparison of the constructs within each and an examination of the related empirical tests reveal important insights into (a) factors that have consistently been shown to shape these decisions across these approaches and (b) constructs in need of additional theorizing and empirical testing. Specifically, the analysis suggests that uncertainty, efficacy, affect, risk perceptions, and subjective norms all play crucial roles in accounting for decisions to seek or avoid risk-related information. However, inconsistencies in the direction of influence for uncertainty or information discrepancy, risk perceptions, and negative affect argue for the need for considerably more theoretical clarity and empirical rigor in investigations of the ways in which these experiences shape decision making in these contexts.

Article

Dynamic Water Pricing  

R. Quentin Grafton, Long Chu, and Paul Wyrwoll

Water insecurity poses threats to both human welfare and ecological systems. Global water abstractions (extractions) have increased threefold over the period 1960–2010, and an increasing trend in abstractions is expected to continue. Rising water use is placing significant pressure on water resources, leading to depletion of surface and underground water systems, and exposing up to 4 billion people to high levels of seasonal or persistent water insecurity. Climate change is deepening the risks of water scarcity by increasing rainfall variability. By the 2050s, the water–climate change challenge could cause an additional 620 million people to live with chronic water shortage and increase by 75% the proportion of cropland exposed to drought. While there is no single solution to water scarcity or water justice, increasing the benefits of water use through better planning and incentives can help. Pricing is an effective tool to regulate water consumption for irrigation, for residential uses, and especially in response to droughts. For a water allocation to be efficient, the water price paid by users should be equal to the marginal economic cost of water supply. Accounting for all costs of supply is important even though, in practice, water prices are typically set to meet a range of social and political objectives. Dynamic water pricing provides a tool for increasing allocative efficiency in short-term water allocation and the long-term planning of water resources. A dynamic relationship exists between water consumption at a point in time and water scarcity in the future. Thus, dynamic water pricing schemes may take into account both the benefit of consuming water at a given time and the value of keeping that water available should a drought occur in the future. Dynamic water pricing can be applied with the risk-adjusted user cost (RAUC), which measures the risk impact of current water consumption on the welfare of future water users.
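
To make the pricing logic concrete, the following is a stylized sketch; the notation and the exact form of the user-cost term are assumptions for illustration, not the authors' own formulation. Static allocative efficiency sets the price equal to the marginal economic cost of supply, and dynamic pricing adds the risk-adjusted user cost of drawing down water that future users may need in a drought.

```latex
% Stylized sketch; symbols are illustrative, not taken from the article.
% q_t: water consumed in period t, MC_t: marginal economic cost of supply,
% W_s: welfare of water users in period s, \beta: discount factor.
p_t = MC_t
  \quad\text{(static allocative efficiency)}
p_t = MC_t + \mathrm{RAUC}_t, \qquad
\mathrm{RAUC}_t = -\,\mathbb{E}_t\!\Big[\sum_{s>t}\beta^{\,s-t}\,
  \frac{\partial W_s}{\partial q_t}\Big]
  \quad\text{(dynamic pricing with the risk-adjusted user cost)}
```

Read this way, the user-cost term is positive whenever current consumption lowers expected future welfare, so a higher risk of drought raises the efficient price today.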

Article

Moral Panics  

Chas Critcher

The concept of moral panic was first developed in the United Kingdom in the early 1970s, principally by Stan Cohen, initially for the purpose of analyzing the definition of and social reaction to youth subcultures as a social problem. Cohen provided a “processual” model of how any new social problem would develop: who would promote it and why, whose support they would need for their definition to take hold, and the often-crucial role played by the mass media and institutions of social control. In the early 1990s, Erich Goode and Nachman Ben-Yehuda produced an “attributional” model that placed more emphasis on strict definition than cultural processes. The two models have subsequently been applied to a range of putative social problems which now can be recognized as falling into five principal clusters: street crime, drug and alcohol consumption, immigration, child abuse (including pedophilia), and media technologies. Most studies have been conducted in Anglophone and European countries, but gradually, the concept is increasing its geographical reach. As a consequence, we now know a good deal about how and why social problems come to be constructed as moral panics in democratic societies. This approach has nevertheless been criticized for its casual use of language, denial of agency to those promoting and supporting moral panics, and an oversimplified and outdated view of mass media, among other things. As proponents and opponents of moral panic analysis continue to debate the essentials, the theoretical context has shifted dramatically. Moral panic has an uncertain relationship to many recent developments in sociological and criminological thought. It threatens to be overwhelmed or sidelined by new insights from theories of moral regulation or risk, conceptualizations of the culture of fear, or the social psychology of collective emotion. Yet as an interdisciplinary project, it continues, despite its many flaws, to demand sustained attention from analysts of social problem construction.

Article

Anthropologies of Cancer  

Nickolas Surawy-Stepney and Carlo Caduff

Cancer is a relatively new subject for the discipline of anthropology, but scholarship on the topic has already yielded a distinct and important body of literature. In biomedical terms, cancer can be thought of as the wide range of conditions characterized by the uncontrolled (and ultimately pathological) proliferation of cells. It is a disease that is responsible for the deaths of millions of people worldwide each year. As such, it is the focus of a vast number of discourses and practices in multiple areas, ranging from scientific research and media discussion to health insurance and government regulation, to name just a few. Anthropologists concerned with cancer typically use the methodology that is a hallmark of the discipline, long-term ethnographic fieldwork, in order to investigate these discourses and practices. This involves conducting participant observation among doctors, patients, nurses, family members, scientists, politicians, policymakers, and pharmaceutical representatives. Cancer is examined as a lived experience, revealing the numerous ways that local, regional, national, and transnational histories and politics shape the embodied realities of disease. Anthropologists also investigate the regimes of risk and statistical analysis to which bodies are subjected and the technologies around cancer, such as methods of screening or vaccination that aim to prevent it, and the different ways in which these and other interventions and technologies fit into—or push uneasily against—the local worlds in which they are implemented. Anthropologists aim to look beyond the problem as simply one of biology and medicine, instead investigating cancer as pervasive within multiple dimensions of social, cultural, political, and economic life. Anthropological studies displace the prominent biomedical notion that cancers are the same in diverse locations and reveal the incoherence and intractability of cancer as an object. In paying close attention to this object in varied settings, anthropologists offer a critical account of discourses and practices that destabilize and decenter some of the assumptions on which global oncology is based.

Article

Flood Risk Analysis  

Bruno Merz

Floods affect more people worldwide than any other natural hazard. Flood risk results from the interplay of a range of processes. For river floods, these are the flood-triggering processes in the atmosphere, runoff generation in the catchment, flood waves traveling through the river network, possibly flood defense failure, and finally, inundation and damage processes in the flooded areas. In addition, ripple effects, such as regional or even global supply chain disruptions, may occur. Effective and efficient flood risk management requires understanding and quantifying the flood risk and its possible future developments. Hence, risk analysis is a key element of flood risk management. Risk assessments can be structured according to three questions: What can go wrong? How likely is it that it will happen? If it goes wrong, what are the consequences? Before answering these questions, the system boundaries, the processes to be included, and the detail of the analysis need to be carefully selected. One of the greatest challenges in flood risk analyses is the identification of the set of failure or damage scenarios. Often, extreme events beyond the experience of the analyst are missing from this set, which may bias the risk estimate. Another challenge is the estimation of probabilities. There are at most a few observed events where data on the flood situation, such as inundation extent, depth, and loss, are available. That means that even in the most optimistic situation there are only a few data points to validate the risk estimates. The situation is even more delicate when the risk has to be quantified for important infrastructure objects, such as breaching of a large dam or flooding of a nuclear power plant. Such events are practically unrepeatable. Hence, the estimation of probabilities needs to be based on all available evidence, using observations whenever possible, but also including theoretical knowledge, modeling, specific investigations, experience, or expert judgment. As a result, flood risk assessments are often associated with large uncertainties. Examples abound where authorities, people at risk, and disaster management have been taken by surprise due to unexpected failure scenarios. This is not only a consequence of the complexity of flood risk systems, but may also be attributed to cognitive biases, such as being overconfident in the risk assessment. Hence, it is essential to ask: How wrong can the risk analysis be and still guarantee that the outcome is acceptable?
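
The three questions above lend themselves to a simple scenario-table calculation. The sketch below is purely illustrative; the scenario names, annual probabilities, and damages are hypothetical placeholders, not values drawn from the article.

```python
# Minimal sketch: the three risk-analysis questions as a scenario table.
# All scenario names, probabilities, and damages are hypothetical.
from dataclasses import dataclass


@dataclass
class Scenario:
    name: str                  # What can go wrong?
    annual_probability: float  # How likely is it that it will happen?
    damage: float              # If it goes wrong, what are the consequences? (EUR)


scenarios = [
    Scenario("pluvial flooding",  0.05,  40e6),
    Scenario("levee overtopping", 0.01,  250e6),
    Scenario("levee breach",      0.002, 900e6),
]

# Expected annual damage: probability-weighted sum of consequences.
expected_annual_damage = sum(s.annual_probability * s.damage for s in scenarios)
print(f"Expected annual damage: {expected_annual_damage:,.0f} EUR")

# Inspect which scenarios dominate the estimate; rare, extreme scenarios
# missing from this table would bias the risk estimate downward.
for s in sorted(scenarios, key=lambda s: s.annual_probability * s.damage, reverse=True):
    print(f"{s.name}: {s.annual_probability * s.damage:,.0f} EUR/year")
```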

Article

Natural Hazards Governance in Mexico  

Elizabeth Mansilla

As a result of the earthquakes that occurred in September 1985 and their human and material consequences, disaster care in Mexico became institutionalized and acquired the rank of public policy when the first national civil protection law was published years later. More than 30 years after the creation of the National Civil Protection System, there have been some important advances; however, they have not been translated into higher levels of safety for populations exposed to risk. On the contrary, the evidence shows that the country’s risk, as well as the number of disasters and associated material losses, increases year by year. To a large extent, this stems from an approach based predominantly on post-disaster response, one that strengthens preparedness and emergency response capacities and creates financial mechanisms to address reconstruction processes, as opposed to broader approaches seeking to address the root causes of risk and disasters. Post-disaster actions and reconstruction processes have failed to achieve acceptable levels of efficiency, and disorganization and misuse of resources that should benefit disaster-affected populations still prevail.

Article

Natural Hazards Governance: An Overview of the Field  

Thomas A. Birkland

The field of natural hazards governance has changed substantially since the 1970s as the breadth and severity of natural hazards have grown. These changes have been driven by greater social scientific knowledge around natural hazards and disasters, and by changes in the structure of natural hazards governance. The governance of issues relating to natural hazards is challenging because of the considerable complexity inherent in preparing for, responding to, mitigating, or recovering from disasters.

Article

Linking Hazard Vulnerability, Risk Reduction, and Adaptation  

Jörn Birkmann and Joanna M. McMillan

The concepts of vulnerability, disaster risk reduction and climate change adaptation are interlinked. Risk reduction requires a focus not just on the hazards themselves or on the people and structures exposed to hazards but on the vulnerability of those exposed. Vulnerability helps with the identification of root causes that make people or structures susceptible to being affected by natural and climate-related hazards. It is therefore an essential component of reducing risk of disasters and of adapting to climate change. The need to better assess and acknowledge vulnerability has been recognized by several communities of thought and practice, including the Disaster Risk Reduction (DRR) and Climate Change Adaptation (CCA) communities. The concept of vulnerability was introduced during the 1980s as a way to better understand the differential consequences of similar hazard events and differential impacts of climate change on different societies or social groups and physical structures. Since then, the concept gradually became an integral part of discourses around disaster risk reduction and climate change adaptation. Although the history of the emergence of vulnerability concepts and the different perspectives of these communities mean the way they frame vulnerability differs, the academic discourse has reached wide agreement that risk—and actual harm and losses—are not just caused by physical events apparently out of human control but primarily by what is exposed and vulnerable to those events. In the international policy arena, vulnerability, risk, and adaptation concepts are now integrated into the global agenda on sustainable development, disaster risk reduction, and climate change. In the context of international development projects and financial aid, the terms and concepts are increasingly used and applied. However, there is still too little focus on addressing underlying vulnerabilities.
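
A common shorthand in the disaster risk reduction literature captures this argument that harm depends on exposure and vulnerability as well as on the hazard itself. The notation below is a stylized illustration of that shorthand, not the authors' own formula.

```latex
% Stylized shorthand, not the authors' formula: risk arises from the interaction
% of a hazard with what is exposed to it and how vulnerable the exposed are.
\text{Risk} = f(\text{Hazard},\ \text{Exposure},\ \text{Vulnerability}),
\qquad \text{often approximated as} \qquad
R \approx H \times E \times V
```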

Article

Adolescent Brain Development  

Jessica M. Black

Although it was once widely held that development through toddlerhood was the only significant time of tremendous brain growth, findings from neuroscience have identified adolescence as a second significant period of brain-based changes. Profound modification of brain structure, function, and connectivity, paired with heightened sensitivity to environment, places adolescence both as a heightened period of risk and importantly as a time of tremendous opportunity. These findings are of key relevance for social-work policy and practice, for they speak to the ways in which the adolescent brain both is vulnerable to adverse conditions and remains responsive to positive environmental input such as interventions that support recovery and resilience.

Article

Communications Research in Using Genomics for Health Promotion  

Jada G. Hamilton, Jennifer L. Hay, and Colleen M. McBride

It was expected that personalized risk information generated by genetic discovery would motivate risk-reducing behaviors. However, though research in this field is relatively limited, most studies have found no evidence of strong negative or positive psychological or behavioral influences of providing genetic information to improve individual health behaviors. As noted by systematic reviews and agenda-setting commentaries, these null findings may be due to numerous weaknesses in the research approaches taken to date. These include issues related to study samples and design, as well as the motivational potency of risk communications. Moreover, agenda-setting commentaries have suggested areas for improvement, calling for expanded consideration of health outcomes beyond health behaviors to include information exchange and information-seeking outcomes and to consider these influences at the interpersonal and population levels. A new generation of research is adopting these recommendations. For example, a growing number of studies are using communication theory to inform the selection of potential moderating factors and their effects on outcomes in understanding interpersonal effects of shared genetic risk. Researchers are taking advantage of natural social experiments to assess the general public’s understanding of genetics and inform approaches to improve their facility with the information. Additionally, there are examples of risk communication approaches addressing the complexity of genetic and environmental contributors to health outcomes. Although the pace of this translation research continues to lag behind genetic discovery research, there are numerous opportunities for future communications research to consider how emerging genomic discovery might be applied in the context of health promotion and disease prevention.

Article

Using Quotations in Health and Risk Message Design  

Rhonda Gibson

Quotations, the words that a person says or writes and that are then used by someone else in another setting, have long been a staple of news stories. Reporters use quotations—both direct and paraphrased—to document facts, opinions, and emotions from human and institutional sources. From a journalistic standpoint, quotations are beneficial because they add credibility to a news report and allow readers/viewers to consider the source of information when evaluating its usefulness. Quotations are also valued because they are seen as adding a “human” element to a news report by allowing sources to present information in their own words—thus providing an unfiltered first-person perspective that audiences may find more compelling and believable than a detached third-person summary. Research into the effects of news report quotations has documented what journalists long assumed: Quotations, especially direct quotes using the exact words of a speaker, draw the attention of news consumers and are often attended to in news stories more than statistical information. Studies show that the first-person perspective is considered both more vivid and more credible, a phenomenon that newspaper and website designers often capitalize on through the use of graphic elements such as the extracted quote. Quotations in news stories have also been found to serve as a powerful persuasive tool with the ability to influence perception of an issue even in the face of contradictory statistical information. This is especially true when the topic under consideration involves potential risk. Direct quotations from individuals who perceive high levels of risk in a situation can sway audience perceptions, regardless of whether the quoted risk assessments are supported by reality. The power of quotations remains strong in other forms of communication involving risk, such as public service, health-related, or promotional messages. The vivid, first-person nature of quotes draws the attention of audiences and makes the quoted information more likely to be remembered and to influence future judgments regarding the issue in question. This presents the message creator, whether it be a journalist or other type of communicator, with a powerful tool that should be constructed and deployed purposefully in an effort to leave audiences with an accurate perception of the topic under consideration.

Article

Message Convergence Framework Applied to Health and Risk Messaging  

Kathryn E. Anthony, Timothy L. Sellnow, Steven J. Venette, and Sean P. Fourney

Much current scholarship in the realm of information processing and decision making, particularly in the context of health risks, is derived from the logical-empiricist paradigm, involving a strong focus on cognition, routes of psychological processing of messages, and message heuristics. The message convergence framework (MCF), derived heavily from the writings of Perelman and Olbrechts-Tyteca, contributes to this body of literature by emphasizing the fact that people make decisions on health risks while being exposed to arguments from multiple sources on the same topic. The MCF offers an explanation for how people reconcile myriad messages to arrive at decisions. The MCF differs from other theories of message processing because of its distinct focus on arguments, messages, and the ways various arguments interact to create “convergence” in individuals’ minds. The MCF focuses on the ways that multiple messages converge to create meaning and influence in the minds of listeners. Convergence occurs when messages from multiple sources overlap in ways recognized by observers, creating perceptions of credibility and influencing their risk decisions. Perelman and Olbrechts-Tyteca explain that convergence occurs when “several distinct arguments lead to a single conclusion.” Individuals assess the strengths and weaknesses of the claims, and according to the scholars, the “strength” of the arguments “is almost always recognized.” Three key propositions focusing on message convergence articulate that audiences recognize message convergence, that they actively seek convergence in matters of concern, such as health risk, and that this convergence is potentially fleeting as new messages are introduced to the discussion. Conversely, Perelman and Olbrechts-Tyteca also discuss message divergence and the rationale for intentionally creating divergence among interacting arguments. Divergence is particularly appropriate in the realm of health and risk messages when scholars must challenge potentially harmful beliefs or correct misinformation. Some strategies for invoking divergence include dissociation, in which the speaker attempts to reframe the argument to create novel understandings; identification of the stock, hackneyed, and obsolete, where the speaker attempts to make existing claims appear commonplace or obsolete to the listener; refutation of fallacies, where the speaker points out the fallacious reasoning of the opponent; clash of interpretation, where the speaker publicly articulates that individuals have understood the convergence to mean different things; weakening through reaction, which involves the speaker’s attempting to incite a reactionary approach by the opponent; and finally, highlighting the consequence of invalid convergence, where the speaker describes the negative outcomes that may occur from following a false convergence based on incorrect information. For message design, environmental scanning enables scholars and practitioners to assess the messages in a particular health-risk context. This assessment can assist practitioners in emphasizing or building convergence among reputable sources and in introducing divergence in cases where misunderstanding or a lack of evidence has contributed to an unproductive perception of convergence.
Ultimately, the MCF can assist practitioners in scanning their health-risk environments for opportunities to establish or bolster convergence based on credible evidence and for introducing divergence to challenge inaccurate or misleading interpretations and evidence.

Article

Communicating about Biofuels and Climate Change  

Michael A. Cacciatore

Biofuels are produced from biomass, which is any organic matter that can be burned or otherwise used to produce heat or energy. While not a new technology—biofuels have been around for well over 100 years—they are experiencing something of a renaissance in the United States and other countries across the globe. Today, biofuels have become the single most common alternative energy source in the U.S. transportation sector, with billions of gallons of the fuel produced annually. The expansion of the bio-based economy in recent years has been intertwined with mounting concerns about environmental pollution and the accumulation of carbon dioxide (CO2) in the earth’s atmosphere. In the United States, for example, biofuels mandates have been championed as key to solving not only the country’s increasing energy demand problems and reliance on foreign oil, but also growing fears about global climate change. Of course, the use of biomass and biofuels to combat global climate change has been highly controversial. While proponents argue that biofuels burn cleaner than gasoline, research has suggested that any reductions in CO2 emissions are offset by land use considerations and the energy required in the biofuels-production process. How publics perceive climate change as a problem and the use of biomass and biofuels as potential solutions will go a long way toward determining the policies that governments implement to address this issue.

Article

Disproportionate Policy Response  

Moshe Maor

Disproportionate policy response is understood to be a lack of ‘fit’ or balance between the costs of a public policy and the benefits derived from this policy, and between policy ends and means. The study of this phenomenon and its two anchor concepts, namely, policy over- and underreaction, has been inspired by the insight that inefficiencies in the allocation of attention in policymaking lead policymakers to react disproportionately to information. This theory of information processing appears to be broadly accepted and has generated a large body of research on agenda setting. However, little attention has been devoted to actual policy over- and underreaction and how it affects the public. The latest developments are conceptual in nature and include a conceptualization and dimensionalization of policy over- and underreaction, as well as an early-stage development of a preference-driven approach to disproportionate policy response. These issues are fundamental to developing an understanding of the formulation, implementation, and evaluation of disproportionate policy response. They are also valuable to those who want to better understand the processes through which policy over- and underreaction occur and are of considerable interest to practitioners who want to understand how to manage disproportionate policy responses more effectively. Although disproportionate policy response poses methodological challenges because it is time-bound, context-sensitive, and has a problematic counterfactual (i.e., proportionate policy response), it deserves academic attention. This is because the insight of the punctuated equilibrium theory—that policy responses oscillate between periods of underreaction to the flow of information coming from the environment into the system and overreaction due to disproportionate information processing—implies that policy oscillation is the norm rather than the exception. To probe research questions related to the topic at hand, disproportionate policy response can be measured through individuals’ perceptions of the proportionality of policy. Alternatively, scholars may employ vignette survey experiments, sophisticated cost-benefit analysis, and a comparison of policy outcomes with (national or international) standards developed by experts. Scholars may also undertake experimental manipulation using risk unfolding over time, combined with varying types of warnings. The study of disproportionate policy response is a gateway to some of the most significant aspects of public policy. Global and domestic threats, coupled with publics that are relatively skeptical of politicians and political institutions and with rising negativity and populism in democratic politics, imply that policy overshooting is increasingly required for the public to perceive policy action as sufficient and politicians as competent, at least in the short term. Not only has disproportionate policy response been a focal point for political actors seeking decisive and swift policy change in times of real or manufactured crisis or no change at all, but such action has time and time again also made a dramatic impact upon the direction and the character of policy and politics. Classic examples are the U.S. response to 9/11 and the federal response to Hurricane Katrina.
So far the literature on policy change has not responded to the emergence of the stream of research aimed at fully understanding the complex phenomenon of disproportionate policy response, but a robust research agenda awaits those answering this article’s call for action.

Article

Prioritarianism  

Nils Holtug

Prioritarianism is a principle of distributive justice. Roughly, it states that we should give priority to the worse off in the distribution of advantages. This principle has received a great deal of attention in political theory since Derek Parfit first introduced the distinction between egalitarianism and prioritarianism in his Lindley Lecture, published in 1991. In the present article, prioritarianism is defined in terms of a number of structural features of the principle. These structural features are also used to distinguish between this principle and other distributive principles such as utilitarianism, egalitarianism, and leximin. Prioritarianism is mostly discussed as an axiological principle that orders outcomes with respect to their (moral) value, but it is also clarified how it can be incorporated in a criterion of right actions, choices, or policies. Furthermore, different aspects of the principle that need to be further specified to arrive at a full-fledged distributive theory are discussed, including the weights that give priority to the worse off, currency (what kind of advantages should be distributed), temporal unit (the temporal span in which one has to be worse off in order to be entitled to priority), scope (whether the principle applies globally or only domestically, and whether, for example, future generations and non-human animals are covered by the principle), and risk. For each aspect, different possible views are distinguished and discussed. Finally, it is discussed how prioritarianism may be justified, for example, by outlining and discussing the argument that, unlike certain other distribution-sensitive principles such as egalitarianism, prioritarianism is not vulnerable to the so-called “leveling down objection.”
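
The structural contrast with utilitarianism can be made explicit with a standard axiological formalization; the notation below is a common rendering used here for illustration and an assumption, not necessarily the article's own.

```latex
% w_i denotes the advantage (e.g., lifetime welfare) of individual i in outcome o.
\text{Utilitarianism:}\quad  V(o) = \sum_i w_i
\text{Prioritarianism:}\quad V(o) = \sum_i f(w_i),
  \qquad f \text{ strictly increasing and strictly concave}
% Because f is concave, a benefit counts for more the worse off its recipient is;
% because f is increasing, lowering anyone's welfare never improves the outcome,
% which is one way the principle is said to escape the leveling-down objection
% pressed against egalitarianism.
```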

Article

Genetically Modified Crops  

Glenn Davis Stone

In 1958, a Nobel laureate predicted that one day scientists would be able to use “biological engineering” to improve all species. Genetic modification of viruses and bacteria was performed in the early 1970s. Genetic modification of plants was announced in the early 1980s, followed by predictions of revolutionary improvements in agriculture. But nearly forty years later, the improvements brought by genetic modification are meager: few crops have been modified, and 87 percent of all area planted to genetically modified (GM) crops contains traits for herbicide tolerance (HT), which increases use of herbicide but not productivity. The only other widely used modification, which causes plants to produce insecticide, has improved agriculture in some areas but not others. Debate on why genetic modification has fallen so short of expectations has centered on three factors. Public resistance to GM crops and foods is blamed for slow progress by some. Excessive regulation is cited by others, especially those involved in the development of GM crops. But the main factor has been patent regimes that concentrate the development of marketable GM crops in the hands of a small number of companies that hold large patent portfolios and that can afford to enforce the patents. New technologies for genetic modification such as CRISPR-Cas9 are being heralded as offering revolutionary change in agriculture, much as genetic modification was in the 1980s.

Article

Climate Change Adaptation  

Philipp Schmidt-Thomé

Climate change adaptation is the ability of a society or a natural system to adjust to the (changing) conditions that support life in a certain climate region, including weather extremes in that region. The current discussion on climate change adaptation began in the 1990s, with the publication of the Assessment Reports of the Intergovernmental Panel on Climate Change (IPCC). Since the beginning of the 21st century, most countries, and many regions and municipalities, have started to develop and implement climate change adaptation strategies and plans. But since the implementation of adaptation measures must be planned and conducted at the local level, a major challenge is to actually implement adaptation to climate change in practice. One challenge is that scientific results are mainly published at international or national levels, and political guidelines are written at transnational (e.g., European Union), national, or regional levels—these scientific results must be downscaled, interpreted, and adapted to local municipal or community levels. Needless to say, the challenges for implementation are also rooted in a large number of uncertainties, from long time spans to matters of scale, as well as in economic, political, and social interests. From a human perspective, climate change impacts occur rather slowly, while local decision makers are engaged with daily business over much shorter time spans. Among the obstacles to implementing adaptation measures to climate change are three major groups of uncertainties: (a) the uncertainties surrounding the development of our future climate, which include the exact climate sensitivity of anthropogenic greenhouse gas emissions, the reliability of emission scenarios and underlying storylines, and inherent uncertainties in climate models; (b) uncertainties about anthropogenically induced climate change impacts (e.g., long-term sea level changes, changing weather patterns, and extreme events); and (c) uncertainties about the future development of socioeconomic and political structures as well as legislative frameworks. Besides slow changes, such as changing sea levels and vegetation zones, extreme events (natural hazards) are a factor of major importance. Many societies and their socioeconomic systems are not properly adapted to their current climate zones (e.g., intensive agriculture in dry zones) or to extreme events (e.g., housing built in flood-prone areas). Adaptation measures can be successful only by gaining common societal agreement on their necessity and overall benefit. Ideally, climate change adaptation measures are combined with disaster risk reduction measures to enhance resilience on short, medium, and long time scales. The role of uncertainties and time horizons is addressed by developing climate change adaptation measures at the community level and in close cooperation with local actors and stakeholders, focusing on strengthening resilience by addressing current and emerging vulnerability patterns. Successful adaptation measures are usually achieved by developing “no-regret” measures, in other words, measures that have at least one function of immediate social and/or economic benefit as well as long-term future benefits. To identify socially acceptable and financially viable adaptation measures successfully, it is useful to employ participatory tools that give all involved parties and decision makers the possibility to engage in the process of identifying adaptation measures that best fit collective needs.

Article

Water Resources Planning Under (Deep) Uncertainty  

Riddhi Singh

Public investments in water infrastructure continue to grow, with developed countries prioritizing investments in operation and maintenance while developing countries focus on infrastructure expansion. The returns from these investments are contingent on carefully assessed designs and operating strategies that consider the complexities inherent in water management problems. These complexities arise due to several factors, including, but not limited to, the presence of multiple stakeholders with potentially conflicting preferences, lack of knowledge about appropriate systems models or parameterizations, and large uncertainties regarding the evolution of future conditions that will confront these projects. The water resources planning literature has therefore developed a variety of approaches for a quantitative treatment of planning problems. Beginning in the mid-20th century, quantitative design evaluations were based on a stochastic treatment of uncertainty using probability distributions to determine expected costs or risk of failure. Several simulation–optimization frameworks were developed to identify optimal designs with techniques such as linear programming, dynamic programming, stochastic dynamic programming, and evolutionary algorithms. Uncertainty was incorporated within existing frameworks using probability theory, fuzzy theory to represent ambiguity, or scenario analysis to represent discrete possibilities for the future. As the effects of climate change became palpable and rapid socioeconomic transformations emerged as the norm, it became evident that existing techniques were not likely to yield reliable designs. The conditions under which an optimal design is developed and tested may differ significantly from those that it will face during its lifetime. These uncertainties, wherein the analyst cannot identify the distributional forms of parameters or the models and forcing variables, are termed “deep uncertainties.” The concept of “robustness” was introduced around the 1980s to identify designs that trade off optimality with reduced sensitivity to such assumptions. However, it was not until the 21st century that robustness analysis became mainstream in the water resource planning literature and robustness definitions were expanded to include preferences of multiple actors and sectors as well as their risk attitudes. Decision-analytical frameworks that focused on robustness evaluations included robust decision-making, decision scaling, multi-objective robust decision-making, info-gap theory, and so forth. A complementary set of approaches focused on dynamic planning that allowed designs to respond to new information over time. Examples included adaptive policymaking, dynamic adaptive policy pathways, and engineering options analysis, among others. These novel frameworks provide a posteriori decision support to planners, aiding in the design of water resources projects under deep uncertainties.
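
As a concrete, if simplified, illustration of the robustness logic described above, the sketch below screens candidate designs across plausible futures without assigning probabilities and compares them by worst-case regret. The designs, scenarios, and cost values are invented placeholders, not results from the literature, and minimax regret is only one of several robustness metrics used in these frameworks.

```python
# Simple robustness screening under deep uncertainty (illustrative values only).
designs = ["expand_reservoir", "demand_management", "conjunctive_use"]
scenarios = ["wet_future", "moderate_drying", "severe_drying"]

# Hypothetical total cost (capital plus shortage cost) of each design under each
# scenario; in a real study these values would come from a simulation model.
cost = {
    "expand_reservoir":  {"wet_future": 120, "moderate_drying": 130, "severe_drying": 150},
    "demand_management": {"wet_future":  90, "moderate_drying": 140, "severe_drying": 210},
    "conjunctive_use":   {"wet_future": 100, "moderate_drying": 120, "severe_drying": 170},
}

# Regret: how much worse a design performs in a scenario than the best design
# for that scenario. A minimax-regret (robust) choice minimizes worst-case regret,
# trading some optimality for reduced sensitivity to assumptions about the future.
regret = {
    d: {s: cost[d][s] - min(cost[dd][s] for dd in designs) for s in scenarios}
    for d in designs
}
worst_case_regret = {d: max(regret[d].values()) for d in designs}
robust_choice = min(worst_case_regret, key=worst_case_regret.get)

print(worst_case_regret)
print("Robust (minimax-regret) choice:", robust_choice)
```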