
Article

Courtney Plante, Johnie J. Allen, and Craig A. Anderson

Given the dire nature of many researchers’ predictions about the effects of global climate change (e.g., rising sea levels, droughts, more extreme weather), it comes as little surprise that less attention has been paid to the subtler, less direct outcomes of rapid climate change: psychological, sociological, political, and economic effects. In this chapter we explore one such outcome in particular: the effects of rapid climate change on aggression. We begin by exploring the potential for climate change to directly affect aggression in individuals, focusing on research showing the relationship between uncomfortably hot ambient temperature and aggression. Next, we review several lines of research illustrating ways that climate change can indirectly increase aggression in individuals. We then shift our focus from individuals to the effects of climate change on group-level aggression. We finish by addressing points of contention, including the challenge that the effects of climate change on aggression are too remote and too small to be considered relevant.

Article

Early research on the relationship between social media use and climate change opinion, knowledge, and behavior suggests several positive impacts. Social media encourages greater knowledge of climate change, mobilization of climate change activists, space for discussing the issue with others, and online discussions that frame climate change as a negative for society. Social media does, however, also provide space for framing climate change skeptically and for activating those with a skeptical perspective on climate change. Further examination of the relationship between social media use and climate change perceptions is warranted.

Article

For the general public, the news media are an important source of information about climate change. They have significant potential to influence public understanding and perceptions of the issue. Television news, because of its visual immediacy and authoritative presentation, is likely to be particularly influential. Numerous studies have shown that television news can affect public opinion directly and indirectly through processes such as agenda setting and framing. Moreover, even in a fragmented media environment largely dominated by online communication, television remains a prominent medium through which citizens follow news about science issues. Given this, scholars over the last several decades have endeavored to map the content of television news reporting on climate change and its effects on public opinion and knowledge. Results from this research suggest that journalists’ adherence to professional norms such as balance, novelty, dramatization, and personalization, along with economic pressures and sociopolitical influences, have produced inaccuracies and distortions in television news coverage of climate change. For example, content analyses have found that U.S. network television news stories tend to over-emphasize dramatic impacts and imagery, conflicts between political groups and personalities, and the uncertainty surrounding climate science and policy. At the same time, those skeptical of climate change have been able to exploit journalists’ norms of balance and objectivity to amplify their voices in television coverage of climate change. In particular, the increasingly opinionated 24-hour cable news networks have become a megaphone for ideological viewpoints on climate change. In the United States, a coordinated climate denial movement has used Fox News to effectively spread its message discrediting climate science. Coverage on Fox News is overwhelmingly dismissive of climate change and disparaging toward climate science and scientists. 
Coverage on CNN and MSNBC is more accepting of climate change; however, while MSNBC tends to vilify the conservative opposition to climate science and policy, and occasionally exaggerates the impacts of climate change, CNN sends more mixed signals. Survey and experimental analyses indicate that these trends in television news coverage of climate change have important effects on public opinion and may, in particular, fuel confusion and apathy among the general U.S. public and foster opinion extremity among strong partisans.

Article

As a unique and vast high-elevation plateau, the Tibetan Plateau (TP) is sensitive and vulnerable to global climate change, and its climate change tendencies and the corresponding impacts on regional ecosystems and water resources can provide an early warning of global and mid-latitude climate changes. Growing evidence suggests that the TP has experienced more significant warming than its surrounding areas during past decades, especially at elevations higher than 4 km. Greater warming at higher elevations than at lower elevations has been reported in several major mountainous regions on Earth, and this phenomenon is known as elevation-dependent climate change, or elevation-dependent warming (EDW). At the beginning of the 21st century, Chinese scholars first noticed that the TP had experienced significant warming since the mid-1950s, especially in winter, and that the latest warming period in the TP began earlier than the enhanced global warming of the 1970s. Chinese researchers were also the first to report that warming rates increase with elevation in the TP and its neighborhood, making the TP one of the areas most sensitive to global climate change. Later studies, using more and longer observations from meteorological stations and satellites, shed light on the detailed characteristics of EDW in terms of mean, minimum, and maximum temperatures and in different seasons. For example, the daily minimum temperature shows the most evident EDW in comparison to the mean and daily maximum temperatures, and EDW is more significant in winter than in other seasons. The mean daily minimum and maximum temperatures have also maintained increasing trends in the context of EDW. Despite the global warming hiatus at the turn of the 21st century, the TP exhibited persistent warming from 2001 to 2012.
Although EDW has been demonstrated by a growing number of observational and modeling studies, the underlying mechanisms are not entirely clear, owing to sparse, discontinuous, and insufficient observations of climate change processes. Based on limited observations and model simulations, several factors and their combinations have been proposed as responsible for EDW, including the snow-albedo feedback, cloud-radiation effects, water vapor and radiative fluxes, and aerosol forcing. At present, however, explanations of the mechanisms of EDW derive mainly from model-based research and lack solid observational evidence. To comprehensively understand the mechanisms of EDW, a more extensive, multi-perspective climate monitoring system is therefore urgently needed in the high-elevation, complex-terrain areas of the TP. High-elevation climate change may have resulted in a series of environmental consequences in mountainous areas, such as vegetation changes, permafrost melting, and glacier shrinkage. In particular, glacial retreat could alter the headwater environments on the TP and the hydrometeorological characteristics of several major rivers in Asia, threatening the water supply of people living in the adjacent countries. Given climate-model projections that the warming trend will continue over the TP in the coming decades, this region's climate change and its environmental consequences should be of great concern to both scientists and the general public.

Article

Maxwell Boykoff and Gesa Luedecke

During the past three decades, elite news media have become influential translators of climate change, linking science, policy, and the citizenry. Historical trends in public discourse—shaped in significant part by elite media—demonstrate news media's critical role in shaping public perception of, and the level of concern about, climate change. Media representations of climate change and global warming are embedded in social, cultural, political, and economic dimensions that influence individual-level processes such as everyday journalistic practices. Media have a strong influence on policy decision-making, attitudes, perspectives, intentions, and behavioral change, but those connections can be challenging to pinpoint; consequently, examinations of elite news coverage of climate change, particularly in recent decades, have sought to gain a stronger understanding of these complex and dynamic webs of interactions. In so doing, research has more effectively traced how media have taken on varied roles in the climate change debate, from watchdogs to lapdogs to guard dogs in the public sphere. Within these areas of research, psychological aspects of media influence have been relatively underemphasized. However, interdisciplinary and problem-focused research investigations of elite media coverage stand to advance considerations of public awareness, discourse, and engagement. Elite news media critically contribute to public discourse and policy priorities through their "mediating" and interpretative influences. Therefore, a review of examinations of these dynamics illuminates the bridging role of elite news coverage of climate change between formal science and policy, and everyday citizens in the public sphere.

Article

George Adamson

The El Niño Southern Oscillation is considered to be the most significant form of “natural” climate variability, although its definition and the scientific understanding of the phenomenon are continually evolving. Since its first recorded usage in 1891, the meaning of “El Niño” has morphed from a regular local current affecting coastal Peru, to an occasional Pacific-wide phenomenon that modifies weather patterns throughout the world, and finally to a diversity of weather patterns that share similarities in Pacific heating and changes in trade-wind intensity, but exhibit considerable variation in other ways. Since the 1960s El Niño has been associated with the Southern Oscillation, originally defined as a statistical relationship in pressure patterns across the Pacific by the British-Indian scientist Gilbert Walker. The first unified model for the El Niño-Southern Oscillation (ENSO) was developed by Jacob Bjerknes in 1969 and it has been updated several times since, but no simple model yet explains apparent diversity in El Niño events. ENSO forecasting is considered a success, but each event still displays surprising characteristics.

Article

Aristita Busuioc and Alexandru Dumitrescu

This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Climate Science. The concept of statistical downscaling, or empirical-statistical downscaling, became a distinct and important scientific approach in climate science in recent decades, as climate change and the assessment of its impacts on various social and natural systems became international challenges. Global climate models are the best tools for estimating future climate conditions. Even with improvements in state-of-the-art global climate models, in terms of spatial resolution and performance in simulating climate characteristics, these models are skillful only in reproducing large-scale features of climate variability, such as global mean temperature or various circulation patterns (e.g., the North Atlantic Oscillation). They are not able to provide reliable information on local climate characteristics (mean temperature, total precipitation), especially on extreme weather and climate events. The main reason for this failure is the influence of local geographical features on the local climate, as well as other factors related to surrounding large-scale conditions, which current global dynamical models cannot correctly take into consideration. Impact models, such as hydrological and crop models, need high-resolution information on various climate parameters at the scale of a river basin or a farm, scales that are not available from the usual global climate models. Downscaling techniques produce regional climate information on a finer scale from global climate change scenarios, based on the assumption that there is a systematic link between the large-scale and the local climate.
Two types of downscaling approach are known: (a) dynamical downscaling, based on regional climate models nested in a global climate model; and (b) statistical downscaling, based on developing statistical relationships between large-scale atmospheric variables (predictors), available from global climate models, and observed local-scale variables of interest (predictands). The various empirical-statistical downscaling approaches can be grouped approximately into linear and nonlinear methods. The empirical-statistical downscaling techniques discussed here focus on details of the nonlinear models (their validation, strengths, and weaknesses) in comparison to linear models, or to mixed models combining the linear and nonlinear approaches. Stochastic models can be applied to daily and sub-daily precipitation in Romania, with a comparison to dynamical downscaling; conditional stochastic models are generally specific to daily or sub-daily precipitation as the predictand. Important issues include a thorough validation of the nonlinear statistical downscaling models, the selection of large-scale predictors, the models' ability to reproduce historical trends and extreme events, and the uncertainty attached to future downscaled changes. A better estimation of the uncertainty of downscaled climate change projections can be achieved by using ensembles of several global climate models as drivers, taking into account their ability to simulate the input to the downscaling models. Comparing future statistical downscaled climate signals with those derived from dynamical downscaling driven by the same global model, including a thorough validation of the regional climate models, gives a measure of the reliability of downscaled regional climate changes.
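The core of the statistical downscaling idea, a transfer function calibrated between a large-scale predictor and a local predictand and then applied to model output, can be sketched in a few lines. This is an illustrative toy with synthetic data, not code from the article; the ordinary least-squares transfer function stands in for the simplest member of the linear model family mentioned above, and all variable names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Observed" calibration period: the local variable (e.g., station temperature)
# responds roughly linearly to a standardized large-scale predictor
# (e.g., a GCM grid-box circulation index), plus unexplained local noise.
predictor_obs = rng.normal(0.0, 1.0, 200)
predictand_obs = 2.0 * predictor_obs + 10.0 + rng.normal(0.0, 0.5, 200)

# Calibrate the transfer function by least squares (the "statistical model").
slope, intercept = np.polyfit(predictor_obs, predictand_obs, 1)

# Apply it to a scenario predictor series whose mean has shifted: the
# downscaled local series inherits the large-scale change signal through
# the fitted relationship (here a local shift of about slope * 0.5).
predictor_gcm = rng.normal(0.5, 1.0, 100)
downscaled = slope * predictor_gcm + intercept
local_signal = float(downscaled.mean() - predictand_obs.mean())
```

The key assumption made explicit by the sketch is stationarity: the calibrated predictor-predictand link is taken to hold under the changed climate, which is exactly the assumption the validation issues discussed above are meant to probe.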

Article

Jin-Song von Storch

Energetics considerations based on Lorenz's available potential energy (A) focus on the identification and quantification of processes capable of converting external energy sources into the kinetic energy of the atmospheric and oceanic general circulations. Generally, these considerations consist of (a) identifying the relevant energy compartments from which energy can be converted, against friction, into the kinetic energy of the motions of interest; (b) formulating for these energy compartments budget equations that describe all possible energy pathways; and (c) identifying the dominant energy pathways using realistic data. To obtain a more detailed description of the energy pathways, a partitioning of motions is used, for example into a “mean” and an “eddy” component, or into a diabatic and an adiabatic component. Since the budget equations do not always suggest the relative importance of all possible pathways, often not even their directions, data that describe the atmospheric and oceanic state with sufficient accuracy are needed to evaluate the energy pathways. Apart from the complication of different expressions for A, ranging from Lorenz's original 1955 definition to its approximations and to more generally defined forms, one has to balance the complexity of the budget equations, which determines how many possible energy pathways can be evaluated, against the quality of the available data, which determines how accurately those pathways can be estimated. With regard to the atmosphere, our knowledge, as inferred from the four-box Lorenz energy cycle, has consolidated over the last two decades, among other means through data assimilation products obtained by combining observations with realistic atmospheric general circulation models (AGCMs). The eddy kinetic energy, amounting to slightly less than 50% of the total kinetic energy, is supported against friction through a baroclinic pathway “fueled” by latitudinally dependent diabatic heating.
The mean kinetic energy is supported against friction by converting eddy kinetic energy via inverse cascades. For the ocean, our knowledge is still emerging. The description through the four-box Lorenz energy cycle is approximate and has so far been estimated only from a simulation of a 0.1° oceanic general circulation model (OGCM) realistically forced at the sea surface, rather than from a data assimilation product. The estimates obtained so far suggest that the oceanic eddy kinetic energy, amounting to almost 75% of the total oceanic kinetic energy, is supported against friction through a baroclinic pathway similar to that in the atmosphere. However, the oceanic baroclinic pathway is “fueled” to a considerable extent by converting mean kinetic energy supported by winds into mean available potential energy. Winds are also the direct source of the kinetic energy of the mean circulation, without noticeable inverse cascades from transients, at least not for the ocean as a whole. The energetics of the oceanic general circulation can also be examined by separating diabatic from adiabatic processes. Such a consideration is thought to be more appropriate for understanding the energetics of the oceanic meridional overturning circulation (MOC), since this circulation is sensitive to density changes induced by diabatic mixing. Further work is needed to quantify the respective energy pathways using realistic data.
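The four-box Lorenz energy cycle referred to above partitions the flow into mean and eddy available potential energy ($A_M$, $A_E$) and mean and eddy kinetic energy ($K_M$, $K_E$). In a standard textbook form (a schematic sketch; notation and sign conventions vary across the literature), the budget equations read:

$$
\begin{aligned}
\frac{dA_M}{dt} &= G_M - C(A_M, A_E) - C(A_M, K_M),\\
\frac{dA_E}{dt} &= G_E + C(A_M, A_E) - C(A_E, K_E),\\
\frac{dK_E}{dt} &= C(A_E, K_E) - C(K_E, K_M) - D_E,\\
\frac{dK_M}{dt} &= C(A_M, K_M) + C(K_E, K_M) - D_M,
\end{aligned}
$$

where $G$ and $D$ denote generation and dissipation and $C(X, Y)$ the conversion from compartment $X$ to $Y$. In this notation, the atmospheric baroclinic pathway described above corresponds to the chain $G_M \to A_M \to A_E \to K_E$, with $K_M$ maintained by $C(K_E, K_M)$ (the inverse cascade).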

Article

Stabilizing atmospheric greenhouse gases will require very large reductions in energy-related carbon dioxide emissions. This can be achieved only through continuous innovation, aggressive and ongoing. Fast-paced innovation, in turn, depends on rapid and widespread diffusion, adoption, and adaptation—in short, on technological learning. These processes are integrally linked, as virtuous circles, through feedback loops embedded in economic markets. The overall dynamics are fundamentally incremental. Pundits and policymakers, nonetheless, sometimes seem to hope that “breakthroughs” will emerge to sweep existing energy technologies aside. Such hopes are misplaced, for two reasons. First, if breakthroughs are construed as something “new under the sun,” they are rare and unpredictable, and policymakers have few tools to foster them. Energy technologies, after all, have been intensively explored over the past two centuries: the physical constraints are well understood and there are few reasons to expect research to lead to anything fundamentally new. Second, infant technologies tend to perform poorly and to be quite costly. Improvements come over time through technological learning. Inputs to this sort of learning range from field service experience to “just-in-time” research. Economic competition provides much of the driving force. The dynamics just sketched are broadly representative of the evolutionary paths traced by past energy technologies—wind and steam power, gas turbines, nuclear power, and solar photovoltaic (PV) cells and systems. Similar paths will be followed if prospective innovations such as carbon capture and storage, small nuclear reactors, or schemes for tapping the energy of the world’s oceans begin to mature and diffuse.
Over the next several decades, the world should expect to work with existing technologies in various stages of maturation that can and will—because this is inherent in the process of innovation—advance on technical measures of performance (e.g., energy conversion efficiency) and come down in costs (in most cases) through continuous improvement. This sort of innovation is first and foremost the work of profit-seeking businesses, enterprises that conceive, develop, introduce, and market new technologies. These firms exploit publicly funded R&D; just as important historically, government procurements have created initial markets, including those for the first PV cells and for the gas turbines that many utilities now buy for electric power generation, the early versions of which were based on designs for military aircraft. A major task for energy-climate policy is to create similarly viable market segments in which new and emerging technologies can gain a foothold, as a number of governments have done for battery-electric vehicles. Direct and indirect subsidies—financial preferences as provided in some countries for battery-electric vehicles, and market set-asides, as for biofuels in Europe, Brazil, and the United States—insulate firms from potential competition, creating opportunities to push forward technologically and to overcome the early handicaps, such as high costs and poor performance, associated with emerging technologies. The implication: effective innovation policies must provide powerful incentives for profit-seeking businesses. This is true worldwide, although mechanisms will differ from country to country.

Article

Images are a key part of the climate change communication process. The diverse and interdisciplinary literature on how people engage with visual representations of climate change is reviewed. Images hold particular power for engaging people, as they possess three qualities that differ from other communication devices (such as words or text): they are analogical, they lack an explicit propositional syntax, and they are indexical. These qualities are explored in relation to climate change imagery. A number of visual tropes common to climate change communication (identifiable people; climate change impacts; energy, emissions, and pollution; protest; scientific imagery) are examined, and the evidence for how each of these visual tropes engages particular audiences is reviewed. Two case studies, of polar bear imagery and the “hockey stick” graph, critically examine iconic imagery associated with climate change, and how and why these types of images may (dis)engage audiences. Six best-practice guidelines for visual climate change communication are presented, and three areas for further research in this nascent field are suggested.

Article

Although there is an abundance of social scientific research focused on public opinion and climate change, there remains much to learn about how individuals come to understand, feel, and behave relative to this issue. Efforts to understand these processes are commonly directed toward media depictions, because media represent a primary conduit through which people encounter information about climate change. The majority of research in this area has focused on news media portrayals of climate change. News media depictions, however, represent only a part of the media landscape, and a relatively small but growing body of work has focused on examining portrayals of climate change in entertainment media (i.e., films, television programs, etc.) and their implications. This article provides a comprehensive overview of this area of research, summarizing what is currently known about portrayals of climate change in entertainment media, the individual-level effects of these portrayals, and areas ripe for future research. Our overview suggests that the extant work has centered primarily on a small subset of high-profile climate change films. Examination of the content of these films has been mostly rhetorical and has often presumed negative audience effects. Studies that specifically set out to explore possible effects have often unearthed evidence suggesting short-term contributions to viewers’ perceptions of climate change, specifically in terms of heightened awareness, concern, and motivation. Improving the breadth and depth of research in this area, we contend, can stem from more robust theorizing, analyses that focus on a more diverse menu of entertainment media and the interactions among them, and increasingly complex analytical efforts to capture long-term effects.

Article

Over the last decade, scholars have devoted significant attention to making climate change communication more effective but less attention to ensuring that it is ethical. This neglect risks blurring the distinction between persuasion and manipulation, generating distrust among audiences, and obscuring the conceptual resources needed to guide communicators. Three prevailing approaches to moral philosophy can illuminate various ethical considerations involved in communicating climate change. Consequentialism, which evaluates actions as morally right or wrong according to their consequences, is the implicit moral framework shared by many social scientists and policymakers interested in climate change. While consequentialism rightly emphasizes the consequences of communication, its exclusive focus on the effectiveness of communication tends to obscure other moral considerations, such as what communicators owe to audiences as a matter of duty or respect. Deontology better captures these duties and provides grounds for communicating in ways that respect the rights of citizens to deliberate and decide how to act. But because deontology tends to cast ethics as an abstract set of universalizable principles, it often downplays the virtues of character needed to motivate action and apply principles across a variety of contexts. Virtue ethics seeks to overcome the limits of both consequentialism and deontology by focusing on the virtues that individuals and communities need to flourish. While virtue ethics is often criticized for failing to provide a concrete blueprint for action, its conception of moral development and thick vocabulary of virtues and vices offer a robust set of practical and conceptual resources for guiding the actions, attitudes, and relationships that characterize climate change communication. Ultimately, all three approaches highlight moral considerations that should inform the ethics of communicating climate change.

Article

The clean development mechanism of the Kyoto Protocol did not cover projects to reduce emissions from deforestation in developing countries. The reasons were in part technical (the difficulty of accounting for leakage) but mainly the result of fears of many Parties to the United Nations Framework Convention on Climate Change (UNFCCC) that this was a soft (and cheap) option that would discourage interventions for mitigation of emissions from fossil fuels. The alternative idea of a national, performance-based approach to reduced emissions from deforestation (RED) was first developed by research institutes in Brazil and proposed to the UNFCCC in a submission by Papua New Guinea and Costa Rica with technical support from the Environmental Defense Fund in 2005/2006. The idea was to reward countries financially for any decreases in annual rates of deforestation at a national level compared to a baseline that reflected historical rates of loss, through the sale of carbon credits, which as in the case of the Clean Development Mechanism (CDM) would be used as offsets by developed countries to meet their international obligations for emission reduction. REDD+ as it is now included in the Paris Agreement of 2015 (Article 5) has evolved from this rather simple concept into something much more complex and far-reaching. Degradation was added early on in the negotiation process (REDD) and very soon conservation, sustainable management of forests, and enhancement of forest carbon stocks were also included, hence the “+” in REDD+. The idea of “safeguards” (social, environmental) is now also firmly embedded, and the importance of non-carbon benefits is being underlined in official policy. In the absence of legally binding emission reduction targets in developed countries, the notion of a market approach and offsets is no longer the only or even the main route envisaged. 
Instead, countries are being encouraged to coordinate financial support from a range of public, private, bilateral, and multilateral sources. The mechanism is still, however, seen as a results-based instrument, although this may not be so clear in alternative policy approaches, such as “joint mitigation and adaptation,” also included in the Paris Agreement. Outside of the official policy negotiations, there has been a move away from operationalizing REDD+ as a purely forest-based mechanism toward developing a more holistic, landscape-based approach, given that many of the drivers of deforestation and degradation lie outside the forest itself. Countries in the vanguard of REDD+ implementation, such as Mexico, as well as several CGIAR organizations are visualizing REDD+ essentially as sustainable rural development. The central role of communities in the implementation of REDD+, and the importance of secure land tenure in this, have to a large extent been incorporated through the adoption of safeguards, but there remain a few lobbies of indigenous groups that are opposed to the whole nature of REDD+. The challenge of measurability, of both carbon and of non-carbon benefits, is addressed in this article.

Article

Classic paradigms describing meteorological phenomena and climate have changed dramatically over the last half-century. This is particularly true for the continent of Africa. Our understanding of its climate is today very different from that which prevailed as recently as the 1960s or 1970s. This article traces the development of relevant paradigms in five broad areas: climate and climate classification, tropical atmospheric circulation, tropical rain-bearing systems, climatic variability and change, and land surface processes and climate. One example is the definition of climate. Originally viewed as simple statistical averages, it is now recognized as an environmental variable with global linkages, multiple timescales of variability, and strong controls via earth surface processes. As a result of numerous field experiments, our understanding of tropical rainfall has morphed from the belief in the domination by local thunderstorms to recognition of vast systems on regional to global scales. Our understanding of the interrelationships with land surface processes has also changed markedly. The simple Charney hypothesis concerning albedo change and the related concept of desertification have given way to a broader view of land–atmosphere interaction. In summary, there has been a major evolution in the way we understand climate, climatic variability, tropical rainfall regimes and rain-bearing systems, and potential human impacts on African climate. Each of these areas has evolved in complexity and understanding, a result of an explosive growth in research and the availability of such investigative tools as satellites, computers, and numerical models.

Article

Joseph P. Reser and Graham L. Bradley

There is a strong view among climate change researchers and communicators that the persuasive tactic of arousing fear in order to promote precautionary motivation and behavior is neither effective nor appropriate in the context of climate change communication and engagement. Yet the modest research evidence that exists with respect to the use of fear appeals in communicating climate change does not offer adequate empirical evidence—either for or against the efficacy of fear appeals in this context—nor would such evidence adequately address the issue of the appropriateness of fear appeals in climate change communication. Extensive research literatures addressing preparedness, prevention, and behavior change in the areas of public health, marketing, and risk communication nonetheless provide consistent empirical support for the qualified effectiveness of fear appeals in persuasive social influence communications and campaigns. It is also noteworthy that the language of climate change communication is typically that of “communication and engagement,” with little explicit reference to targeted social influence or behavior change, although this is clearly implied. Hence the underlying and intertwined issues here are those of cogent arguments versus largely absent evidence, and of effectiveness as distinct from appropriateness. These matters are enmeshed within the broader contours of the contested political, social, and environmental issue status of climate change, which jostles for attention in a 24/7 media landscape of disturbing and frightening communications concerning the reality, nature, progression, and implications of global climate change.
All of this is clearly a challenge for evaluation research attempting to examine the nature and effectiveness of fear appeals in the context of climate change communication, and for determining the appropriateness of designed fear appeals in climate change communications intended to both engage and influence individuals, communities, and “publics” with respect to the ongoing threat and risks of climate change. There is an urgent need to clearly and effectively communicate the full nature and implications of climate change, in the face of this profound risk and rapidly unfolding reality. All such communications are, inherently, frightening warning messages, quite apart from any intentional fear appeals. How then should we put these arguments, evidence, and challenges “on the table” in our considerations and recommendations for enhancing climate change communication—and addressing the daunting and existential implications of climate change?

Article

Forecasting severe convective weather remains one of the most challenging tasks facing operational meteorology today, especially in the mid-latitudes, where severe convective storms occur most frequently and with the greatest impact. The forecast difficulties reflect, in part, the many different atmospheric processes of which severe thunderstorms are a by-product. These processes occur over a wide range of spatial and temporal scales, some of which are poorly understood and/or are inadequately sampled by observational networks. Therefore, anticipating the development and evolution of severe thunderstorms will likely remain an integral part of national and local forecasting efforts well into the future. Modern severe weather forecasting began in the 1940s, primarily employing the pattern recognition approach throughout the 1950s and 1960s. Substantial changes in forecast approaches did not come until much later, however, beginning in the 1980s. By the start of the new millennium, significant advances in the understanding of the physical mechanisms responsible for severe weather enabled forecasts of greater spatial and temporal detail. At the same time, technological advances made available model thermodynamic and wind profiles that supported probabilistic forecasts of severe weather threats. This article provides an updated overview of operational severe local storm forecasting, with emphasis on present-day understanding of the mesoscale processes responsible for severe convective storms, and the application of recent technological developments that have revolutionized some aspects of severe weather forecasting. 
The presentation, nevertheless, notes that increased understanding and enhanced computer sophistication are not a substitute for careful diagnosis of the current meteorological environment and an ingredients-based approach to anticipating changes in that environment; these techniques remain foundational to successful forecasts of tornadoes, large hail, damaging wind, and flash flooding.

Article

Cumulus clouds are pervasive on earth, and play important roles in the transfer of energy through the atmosphere. Under certain conditions, shallow, nonprecipitating cumuli may grow vertically to occupy a significant depth of the troposphere, and subsequently may evolve into convective storms. The qualifier “convective” implies that the storms have vertical accelerations that are driven primarily, though not exclusively, by buoyancy over a deep layer. Such buoyancy in the atmosphere arises from local density variations relative to some base state density; the base state is typically idealized as a horizontal average over a large area, which is also considered the environment. Quantifications of atmospheric buoyancy are typically expressed in terms of temperature and humidity, and allow for an assessment of the likelihood that convective clouds will form or initiate. Convection initiation is intimately linked to the existence of a mechanism by which air is vertically lifted to realize this buoyancy and thus these accelerations. Weather fronts and orography are the canonical lifting mechanisms. As modulated by an ambient or environmental distribution of temperature, humidity, and wind, weather fronts also facilitate the transition of convective clouds into storms with locally heavy rain, lightning, and other possible hazards. For example, in an environment characterized by winds that are weak and change little with distance above the ground, the storms tend to be short lived and benign. The structure of the vertical drafts and other internal storm processes under weak wind shear—i.e., a small change in the horizontal wind over some vertical distance—are distinct relative to those when the environmental wind shear is strong. In particular, strong wind shear in combination with large buoyancy favors the development of squall lines and supercells, both of which are highly coherent storm types. 
Besides having durations that may exceed a few hours, both of these storm types tend to be particularly hazardous: squall lines are most apt to generate swaths of damaging “straight-line” winds, and supercells spawn the most intense tornadoes and are responsible for the largest hail. Methods used to predict convective-storm hazards capitalize on this knowledge of storm formation and development.
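The buoyancy quantification described in this abstract can be sketched in its standard form; as an illustration (using the conventional virtual-temperature approximation, with overbars denoting the horizontally averaged base state and primes denoting parcel deviations from it):

```latex
% Buoyancy per unit mass of an air parcel, relative to the base state:
B \;=\; g\,\frac{\bar{\rho} - \rho}{\bar{\rho}} \;\approx\; g\,\frac{T_v'}{\bar{T}_v}
```

Here $\rho$ is the parcel density, $\bar{\rho}$ the base-state (environmental) density, $g$ the gravitational acceleration, and $T_v$ the virtual temperature, which folds the humidity contribution into an equivalent temperature; a positive $B$ yields an upward acceleration, which is why buoyancy assessments are expressed in terms of temperature and humidity.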

Article

Mike S. Schäfer and Saffron O'Neill

Framing—selecting certain aspects of a given issue and making them more salient in communication in order to “frame” the issue in a specific way—is a key concept in the study of communication. At the same time, it has been used very differently in scholarship, leading some to declare it a “fractured paradigm,” or an idea whose usefulness has expired. In studies of climate change communication, frame analyses have been used numerous times and in various ways, from formal framing approaches (e.g., episodic vs. thematic framing) to topical frames (both generic and issue-specific). Drawing on methodological approaches ranging from content analysis and discourse analysis to qualitative studies and experimental research, this research has brought valuable insights into media portrayals of climate change in different countries and their effects on audiences—even though it still has limitations that should be remedied in future research.

Article

Although future generations—starting with today’s youth—will bear the brunt of negative effects related to climate change, some research suggests that they have little concern about climate change and little intention to take action to mitigate its impacts. One common explanation for this indifference and inaction is lack of scientific knowledge. It is often said that youth do not understand the science; therefore, they are not concerned. Indeed, in science education research, numerous studies catalogue the many misunderstandings students have about climate science. However, this knowledge-deficit perspective is not particularly informative in charting a path forward for climate-change education. This path is important because climate science will be taught in more depth as states adopt the Next Generation Science Standards within the next few years. How do we go about creating the educational experiences that students need to be able to achieve climate-science literacy and feel as if they could take action? First, the literature base in communication, specifically about framing, must be considered, to identify potentially more effective ways to craft personally relevant and empowering messages for students within their classrooms.

Article

The warming of the global climate is expected to continue in the 21st century, although the magnitude of change depends on future anthropogenic greenhouse gas emissions and the sensitivity of climate to them. The regional characteristics and impacts of future climate change in the Baltic Sea countries have been explored since at least the 1990s. Later research has supported many findings from the early studies, but advances in understanding and improved modeling tools have made the picture gradually more comprehensive and more detailed. Nevertheless, many uncertainties still remain. In the Baltic Sea region, warming is likely to exceed its global average, particularly in winter and in the northern parts of the area. The warming will be accompanied by a general increase in winter precipitation, but in summer, precipitation may either increase or decrease, with a larger chance of drying in the southern than in the northern parts of the region. Despite the increase in winter precipitation, the amount of snow is generally expected to decrease, as a smaller fraction of the precipitation falls as snow and midwinter snowmelt episodes become more common. Changes in windiness are very uncertain, although most projections suggest a slight increase in average wind speed over the Baltic Sea. Climatic extremes are also projected to change, but some of the changes will differ from the corresponding change in mean climate. For example, the lowest winter temperatures are expected to warm even more than the winter mean temperature, and short-term summer precipitation extremes are likely to become more severe, even in the areas where the mean summer precipitation does not increase. The projected atmospheric changes will be accompanied by an increase in Baltic Sea water temperature, reduced ice cover, and, according to most studies, reduced salinity due to increased precipitation and river runoff. 
The seasonal cycle of runoff will be modified by changes in precipitation and earlier snowmelt. Global-scale sea level rise also will affect the Baltic Sea, but will be counteracted by glacial isostatic adjustment. According to most projections, in the northern parts of the Baltic Sea, the latter will still dominate, leading to a continued, although decelerated, decrease in relative sea level. The changes in the physical environment and climate will have a number of environmental impacts on, for example, atmospheric chemistry, freshwater and marine biogeochemistry, ecosystems, and coastal erosion. However, future environmental change in the region will be affected by several interrelated factors. Climate change is only one of them, and in many cases its effects may be exceeded by other anthropogenic changes.