Article
Junsoo Lee, James E. Payne, and Md. Towhidul Islam
The analysis of convergence behavior with respect to emissions and measures of environmental quality can be categorized into four types of tests: absolute and conditional β-convergence, σ-convergence, club convergence, and stochastic convergence. In the context of emissions, absolute β-convergence occurs when countries with high initial levels of emissions have a lower emission growth rate than countries with low initial levels of emissions. Conditional β-convergence allows for possible differences among countries through the inclusion of exogenous variables to capture country-specific effects. Because absolute and conditional β-convergence do not account for the dynamics of the growth process, which can potentially lead to dynamic panel data bias, σ-convergence instead evaluates the dynamics and intradistributional aspects of emissions to determine whether the cross-section variance of emissions decreases over time. The more recent club convergence approach tests for a decline in the cross-sectional variation in emissions among countries over time and for whether heterogeneous time-varying idiosyncratic components converge after controlling for a common growth component in emissions among countries. In essence, the club convergence approach evaluates both conditional σ- and β-convergence within a panel framework. Finally, stochastic convergence examines the time series behavior of a country’s emissions relative to another country or group of countries. Using univariate or panel unit root/stationarity tests, stochastic convergence is present if relative emissions, defined as the log of emissions for a particular country relative to another country or group of countries, are trend-stationary.
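As a concrete illustration of the stochastic convergence test just described, the sketch below applies an augmented Dickey–Fuller unit root test (with constant and trend, i.e., a test of trend-stationarity) to log relative emissions. The data file, column names, and country code are hypothetical; real applications would also consider structural breaks and panel-based tests.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller

def stochastic_convergence_test(emissions: pd.DataFrame, country: str) -> float:
    """Return the ADF p-value for log emissions of `country` relative to the group mean.

    `emissions` is a (years x countries) DataFrame of per-capita CO2 emissions.
    Rejecting the unit-root null in favor of trend-stationarity is evidence of
    stochastic convergence for this country relative to the group.
    """
    rel = np.log(emissions[country]) - np.log(emissions.mean(axis=1))
    adf_stat, p_value, *_ = adfuller(rel.dropna(), regression="ct")
    return p_value

# Hypothetical usage:
# data = pd.read_csv("co2_per_capita.csv", index_col="year")
# print(stochastic_convergence_test(data, "FRA"))
```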
The majority of the empirical literature analyzes carbon dioxide emissions and varies in terms of both the convergence tests deployed and the results. While results supportive of emissions convergence for large global country coverage are limited, empirical studies that focus on country groupings defined by income classification, geographic region, or institutional structure (e.g., the EU or OECD) are more likely to provide support for emissions convergence. The vast majority of studies have relied on tests of stochastic convergence, with tests of σ-convergence and of the distributional dynamics of emissions used less often. With respect to tests of stochastic convergence, an alternative testing procedure that accounts for structural breaks and cross-correlations simultaneously is presented. Using data for OECD countries, the results based on the inclusion of both structural breaks and cross-correlations through a factor structure provide less support for stochastic convergence than unit root tests that include structural breaks alone.
Future studies should devote more attention to other air pollutants, including greenhouse gas emissions and their components; expand the range of geographical regions analyzed; and undertake more robust analysis of the various types of convergence tests to render a more comprehensive view of convergence behavior. The examination of convergence through eco-efficiency indicators that capture both the environmental and economic effects of production may be more fruitful in contributing to the debate on mitigation strategies and allocation mechanisms.
Article
Ian Parry
The window of opportunity for containing risks of dangerous instability in the global climate system is closing rapidly. The response of the international community is embedded in the 2015 Paris Agreement, signed by 195 parties. Implementing the mitigation pledges parties submitted for the agreement is an important first step, although an additional mechanism to coordinate and scale up mitigation policy at the international level will likely be needed. Carbon taxation, or similar pricing, has a pivotal role, providing across-the-board incentives for reducing emissions and the critical price signal for redirecting investment, but pricing has proved difficult politically. Analytical literature on carbon taxation provides practical guidance on the role of taxation in implementing the Paris Agreement and enhancing its acceptability.
Shifting taxes off labor and capital and onto carbon or fossil fuels can produce a “double dividend” by reducing environmental harm and lowering the burden broader taxes impose on the economy. Broader taxes both discourage work effort and investment and promote tax-sheltering behavior (e.g., activity in the informal sector). For various technical and practical reasons, however, it may not make sense to set the carbon tax rate above levels warranted on environmental grounds.
The literature emphasizes the general importance of using carbon pricing revenues to benefit the economy, for example, lowering burdensome taxes or funding productive investments. These economic benefits are forgone if instead carbon pricing revenues are given to households as lump-sum dividends. Where higher energy prices are subject to public acceptability constraints, a package of regulations or their fiscal equivalents (known as “feebates”) has an important role in reinforcing carbon pricing. Carbon mitigation can also produce important domestic environmental co-benefits, such as reductions in local air pollution mortality. Unilateral action may be in many countries’ own interests before even counting the global climate benefits.
Recent studies have quantified the carbon prices implicit in countries’ Paris mitigation pledges. These implicit prices differ widely across countries with the stringency of pledges and the responsiveness of emissions to pricing, underscoring the potential efficiency gains from some degree of price coordination at the international level. In fact, an international carbon price floor arrangement could be strikingly effective to the extent that it promotes more mitigation in key emerging market economies, such as China and India. The price floor need only cover a handful of large emitters, could be designed equitably with higher requirements for advanced countries, and could be designed flexibly to accommodate different policy approaches at the national level.
Domestically, policymakers need to develop comprehensive mitigation strategies, ideally with carbon pricing as the key element. These strategies need to distribute burdens equitably, assist vulnerable groups, and include supporting measures for investment and pricing for broader sources of greenhouse gases.
Article
Xiaoyu Li and Sathya Gopalakrishnan
The convergence of geophysical and economic forces that continuously influence environmental quality in the coastal zone presents a grand challenge for resource and environmental economists. To inform climate adaptation policy and identify pathways to sustainability, economists must draw from different lines of inquiry, including nonmarket valuation, quasi-experimental analyses, common-pool resource theory, and spatial-dynamic modeling of coupled coastal-economic systems. Theoretical and empirical contributions in valuing coastal amenities and risks help examine the economic impact of climate change on coastal communities and provide a key input to inform policy analysis. Co-evolution of community demographics, adaptation decisions, and the physical coastline can result in unintended consequences, such as climate-induced migration that alters community composition after natural disasters. Positive and normative models of coupled coastline systems conceptualize the feedbacks between physical coastline dynamics and local community decisions as a dynamic geoeconomic resource management problem. There is a pressing need for interdisciplinary research across natural and social sciences to better understand climate adaptation and coastal resilience.
Article
Hao Liang and Luc Renneboog
Corporate social responsibility (CSR) refers to the incorporation of environmental, social, and governance (ESG) considerations into corporate management, financial decision-making, and investors’ portfolio decisions. Socially responsible firms are expected to internalize the externalities they create (e.g., pollution) and be accountable to shareholders and other stakeholders (employees, customers, suppliers, local communities, etc.). Rating agencies have developed firm-level measures of ESG performance that are widely used in the literature. However, these ratings show inconsistencies that result from the rating agencies’ preferences, weights of the constituting factors, and rating methodology.
CSR also deals with sustainable, responsible, and impact investing (SRI). The return implications of investing in the stocks of socially responsible firms include the search for an ESG factor and the performance of SRI funds. SRI funds apply negative screening (exclusion of “sin” industries), positive screening, and activism through engagement or proxy voting. In this context, one wonders whether responsible investors are willing to trade off financial returns for a “moral” dividend (the return given up in exchange for an increase in utility driven by the knowledge that an investment is ethical). Related to the analysis of externalities and the ethical dimension of corporate decisions is the literature on green financing (the financing of environmentally friendly investment projects by means of green bonds) and on how to foster economic decarbonization as climate change affects financial markets and investor behavior.
Article
Frederick van der Ploeg
The social rate of discount is a crucial driver of the social cost of carbon (SCC), that is, the expected present discounted value of marginal damages resulting from emitting one ton of carbon today. Policy makers should set carbon prices to the SCC using a carbon tax or a competitive permits market. The social discount rate is lower, and the SCC higher, if policy makers are more patient and if future generations are less affluent and policy makers care about intergenerational inequality. Uncertainty about the future rate of growth of the economy and emissions and the risk of macroeconomic disasters (tail risks) also depress the social discount rate and boost the SCC, provided intergenerational inequality aversion is high. Various reasons (e.g., autocorrelation in the economic growth rate or the idea that a decreasing certainty-equivalent discount rate results from a discount rate with a distribution that is constant over time) are discussed for why the social discount rate is likely to decline over time. A declining social discount rate also emerges if account is taken of the relative price effects resulting from different growth rates for ecosystem services and labor in efficiency units. The market-based asset pricing approach to carbon pricing is contrasted with a more ethical approach to policy making. Some suggestions for further research are offered.
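As a stylized illustration of the mechanisms described above (not a formulation taken from the article), the simple Ramsey rule links the social discount rate to patience, growth, and inequality aversion, and the SCC discounts marginal damages at that rate:

```latex
% Illustrative only: Ramsey discounting and the SCC as discounted marginal damages
\[
  r = \rho + \eta\, g,
  \qquad
  \mathrm{SCC}_0 = \sum_{t=0}^{\infty} \frac{\partial D_t}{\partial E_0}\, e^{-r t},
\]
```

where ρ is the rate of pure time preference (patience), η the degree of intergenerational inequality aversion, g the growth rate of per-capita consumption, and ∂D_t/∂E_0 the marginal damage in year t from an extra ton of carbon emitted today. A lower ρ or a lower g reduces r and raises the SCC, consistent with the discussion above.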
Article
Jennifer L. Castle and David F. Hendry
Shared features of economic and climate time series imply that tools for empirically modeling nonstationary economic outcomes are also appropriate for studying many aspects of observational climate-change data. Greenhouse gas emissions, such as carbon dioxide, nitrous oxide, and methane, are a major cause of climate change as they cumulate in the atmosphere and reradiate the sun’s energy. As these emissions are currently mainly due to economic activity, economic and climate time series have commonalities, including considerable inertia, stochastic trends, and distributional shifts, and hence the same econometric modeling approaches can be applied to analyze both phenomena. Moreover, both disciplines lack complete knowledge of their respective data-generating processes (DGPs), so model search retaining viable theory but allowing for shifting distributions is important. Reliable modeling of both climate and economic-related time series requires finding an unknown DGP (or a close approximation thereto) to represent multivariate evolving processes subject to abrupt shifts. Consequently, to ensure that the DGP is nested within a much larger set of candidate determinants, model formulations to search over should comprise all potentially relevant variables, their dynamics, indicators for perturbing outliers, shifts, trend breaks, and nonlinear functions, while retaining well-established theoretical insights.

Econometric modeling of climate-change data requires a sufficiently general model selection approach to handle all these aspects. Machine learning with multipath block searches, commencing from very general specifications that usually have more candidate explanatory variables than observations, offers a rigorous route to discovering well-specified and undominated models of the nonstationary processes under analysis. Doing so requires applying appropriate indicator saturation estimators (ISEs), a class that includes impulse indicators for outliers, step indicators for location shifts, multiplicative indicators for parameter changes, and trend indicators for trend breaks. All ISEs entail more candidate variables than observations, often by a large margin when implementing combinations, yet can detect the impacts of shifts and policy interventions, thereby avoiding nonconstant parameters in models as well as improving forecasts. To characterize nonstationary observational data, one must handle all substantively relevant features jointly: a failure to do so leads to nonconstant and mis-specified models and hence incorrect theory evaluation and policy analyses.
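To make the indicator saturation idea concrete, the following sketch implements a deliberately simplified split-sample step-indicator saturation (not the Autometrics algorithm used in this literature): more step indicators than can be estimated at once are added in blocks, insignificant ones are dropped, and the survivors are re-estimated jointly. The critical value, block scheme, and simulated data are assumptions for illustration only.

```python
import numpy as np

def ols_tstats(y, X):
    """OLS coefficients and t-statistics; the constant must already be a column of X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = len(y) - X.shape[1]
    sigma2 = resid @ resid / dof
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    return beta, beta / se

def step_indicator_saturation(y, t_crit=3.0, n_blocks=2):
    """Detect location shifts in y by saturating a constant-mean model with step indicators."""
    T = len(y)
    const = np.ones((T, 1))
    # Step indicator j is 0 before observation j+1 and 1 from observation j+1 onward
    # (the indicator that equals 1 everywhere is dropped as it duplicates the constant).
    steps = np.tril(np.ones((T, T)))[:, 1:]
    survivors = []
    for block in np.array_split(np.arange(steps.shape[1]), n_blocks):
        _, tstats = ols_tstats(y, np.hstack([const, steps[:, block]]))
        survivors += [j for j, t in zip(block, tstats[1:]) if abs(t) > t_crit]
    # Joint re-estimation of all surviving indicators, then a final significance cut.
    _, tstats = ols_tstats(y, np.hstack([const, steps[:, survivors]]))
    return [j for j, t in zip(survivors, tstats[1:]) if abs(t) > t_crit]

# Toy example: white noise with a large mean shift beginning at observation 60.
rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0.0, 1.0, 60), rng.normal(8.0, 1.0, 40)])
print(step_indicator_saturation(y))  # should include index 59, the step starting at observation 60
```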
Article
The economy of the territory that became the United States evolved dramatically from ca. 1000 CE to 1776. Before Europeans arrived, the spread of maize agriculture shifted economic practices in Indigenous communities. The arrival of Europeans, starting with the Spanish in the West Indies in 1492, brought wide-ranging change, including the spread of Old World infectious disease and the arrival of land- and resource-hungry migrants. Europeans, eager to extract material wealth, came to rely on the trade in enslaved Africans to produce profitable crops such as tobacco, rice, and sugar, and they maintained connections with Indigenous communities to sustain the fur trade. The declining number of Indigenous peoples, combined with growing numbers of those of European or African origin, altered the demographic profile of North America, particularly in the territory east of the Mississippi River. Over time, Europeans’ consumer choices expanded, though the wealth gap among white colonists grew, as did the economic gap between free colonists, on the one hand, and unfree Black and Native peoples, on the other.
Article
Fabrice Etilé
The rise in obesity and other food-related chronic diseases has prompted public-health officials of local communities, national governments, and international institutions to pay attention to the regulation of food supply and consumer behavior. A wide range of policy interventions has been proposed and tested since the early 21st century in various countries. The most prominent are food taxation, health education, nutritional labeling, behavioral interventions at the point of decision, advertising regulation, and regulations of food quality and trade. While the standard neoclassical approach to consumer rationality provides limited arguments in favor of public regulations, the recent development of behavioral economics research extends the scope of regulation to many marketing practices of the food industry. In addition, behavioral economics provides arguments in favor of taxation, easy-to-use front-of-pack labels, and the use of nudges for altering consumer choices. A selective but careful review of the empirical literature on taxation, labeling, and nudges suggests that a policy mixing these tools may produce some health benefits. More specifically, soft-drink taxation, front-of-pack labeling policies, regulations of marketing practices, and eating nudges based on affect or behavior manipulations are often effective methods for reducing unhealthy eating.
The economic research in this area faces important challenges. First, the lack of proper control groups and of exogenous sources of variation in policy variables makes evaluation very difficult. Identification is also challenging, with data covering short time periods over which markets are observed around slowly moving equilibria. In addition, truly exogenous supply or demand shocks are rare events. Second, structural models of consumer choices cannot provide accurate assessments of the welfare benefits of public policies because they consider perfectly rational agents and often ignore the dynamic aspects of food decisions, especially consumer concerns over health. Obtaining better welfare evaluations of policies is a priority. Third, there is a lack of research on the food industry’s response to public policies. Some studies implement empirical industrial organization models to infer the industry’s strategic reactions from market data. A fruitful avenue is to extend this approach to analyze other key dimensions of industrial strategy, especially decisions regarding the nutritional quality of food. Finally, the implementation of nutritional policies yields systemic consequences that may be underestimated. These policies give rise to conflicts between public health and trade objectives and alter the business models of the food sector. This may greatly limit the external validity of ex ante empirical approaches. Future work may benefit from household-, firm-, and product-level data collected in rapidly developing economies, where food markets are characterized by rapid transitions, the supply is often more volatile, and exogenous shocks occur more frequently.
Article
Gerard J. van den Berg and Maarten Lindeboom
Modern-day famines are caused by unusual impediments or interventions in society, effectively imposing severe market restrictions and preventing the free movement of people and goods. Long-run health effects of exposure to famine are commonly studied to obtain insights into the long-run effects of malnutrition at early ages. This line of research has faced major methodological and data challenges. Recent research in various disciplines, such as economics, epidemiology, and demography, has made great progress in dealing with these issues. Malnutrition around birth affects a range of later-life individual outcomes, including health, educational, and economic outcomes.
Article
Katarzyna Maciejowska, Bartosz Uniejewski, and Rafal Weron
Forecasting electricity prices has been a challenging task and an active area of research since the 1990s and the deregulation of the traditionally monopolistic and government-controlled power sectors. It is interdisciplinary by nature and requires expertise in econometrics, statistics, or machine learning for developing well-performing predictive models; in finance for understanding market mechanics; and in electrical engineering for comprehension of the fundamentals driving electricity prices.
Although electricity price forecasting aims at predicting both spot and forward prices, the vast majority of research focuses on short-term horizons, which exhibit dynamics unlike those in any other market. The reason is that power system stability calls for a constant balance between production and consumption, while both supply and demand depend on the weather and demand also depends on business activity. Recent market innovations do not help in this respect: the rapid expansion of intermittent renewable energy sources is not offset by the costly increase of electricity storage capacities and modernization of the grid infrastructure.
On the methodological side, this leads to three visible trends in electricity price forecasting research. First, there is a slow but increasingly noticeable tendency to consider not only point but also probabilistic (interval, density) or even path (also called ensemble) forecasts. Second, there is a clear shift from relatively parsimonious econometric (or statistical) models toward more complex, harder-to-comprehend, but more versatile and ultimately more accurate statistical and machine learning approaches. Third, statistical error measures are regarded as only the first evaluation step. Because they may not reflect the economic value of reducing prediction errors, recent publications tend to complement them with case studies comparing the profits from scheduling or trading strategies based on price forecasts obtained from different models.
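As one concrete example of the move beyond point forecasts noted above, probabilistic day-ahead price forecasts are often scored with the pinball (quantile) loss. The sketch below is illustrative only; the quantile grid and toy prices are made-up numbers rather than anything taken from the article.

```python
import numpy as np

def pinball_loss(y_true, quantile_forecasts):
    """Average pinball loss of quantile forecasts {level: price} against the realized price."""
    losses = [
        tau * (y_true - q) if y_true >= q else (1.0 - tau) * (q - y_true)
        for tau, q in quantile_forecasts.items()
    ]
    return float(np.mean(losses))

# Toy example: quantile forecasts for one delivery hour of the day-ahead price (EUR/MWh).
forecast = {0.1: 38.0, 0.5: 45.0, 0.9: 55.0}
print(pinball_loss(42.5, forecast))  # lower values indicate better-calibrated, sharper forecasts
```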
Article
Valentina Bosetti
To guide climate change policymaking, we need to understand how technologies and behaviors should be transformed to avoid dangerous levels of global warming and what the implications of failing to bring forward such transformation might be. Integrated assessment models (IAMs) are computational tools developed by engineers, earth and natural scientists, and economists to provide projections of interconnected human and natural systems under various conditions.
These models help researchers understand the possible implications of climate inaction. They are used to evaluate the effects of national and international policies on global emissions and to devise optimal emissions trajectories consistent with long-term temperature targets, along with their implications for infrastructure, investment, and behavior. This research highlights the deep interconnection between climate policies and other sustainable development objectives.
Evolving to focus on one or more of these key policy questions, the large family of IAMs includes a wide array of tools that incorporate multiple dimensions and advances from a range of scientific fields.
Article
Jevan Cherniwchan and M. Scott Taylor
Considerable progress has been made in understanding the relationship between international trade and the environment since Gene Grossman and Alan Krueger published their now-seminal working paper examining the potential environmental effects of the North American Free Trade Agreement in 1991. Their work articulated a simple framework through which international trade and economic growth could affect the environment by altering the scale of economic activity (the scale effect), the composition of production across industries (the composition effect), or the emission intensity of individual industries (the technique effect). Grossman and Krueger provided preliminary evidence of the relative magnitudes of the scale, composition, and technique effects and reached a striking conclusion: international trade would not necessarily harm the environment.
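In the notation commonly used to formalize this decomposition (presented here for illustration, not reproduced from the article), aggregate emissions can be written as

```latex
% Illustrative scale--composition--technique decomposition of aggregate emissions
\[
  Z = S \sum_{i} \theta_i\, e_i ,
\]
```

where S is the scale of economic activity, θ_i is industry i's share of output, and e_i is that industry's emission intensity; changes in Z operating through S, the θ_i, and the e_i correspond to the scale, composition, and technique effects, respectively.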
Much of the subsequent literature examining the effects of international trade on the environment has adopted Grossman and Krueger’s simple framework and builds directly on their initial foray into the area. We now have better empirical evidence of the relationship between economic growth and environmental quality, of how environmental regulations affect international trade and investment flows, and of the relative magnitudes of the scale, composition, and technique effects.
Yet the need for further progress remains along three key fronts. First, despite significant advances in our understanding of how economic growth affects environmental quality, evidence of the interaction between international trade, economic growth, and environmental outcomes remains scarce. Second, while a growing body of evidence suggests that environmental regulations significantly alter trade flows, it is still unclear whether these policies have a larger or smaller effect than traditional determinants of comparative advantage. Third, although it is clear the technique effect is the primary driver of changes in pollution, evidence as to how trade has contributed to the technique effect is limited. Addressing these three remaining challenges is necessary for assessing whether Grossman and Krueger’s conclusion that international trade need not harm the environment still holds today.
Article
Integrated assessment models (IAMs) of the climate and economy seek to analyze the impact and efficacy of policies designed to control climate change, such as carbon taxes and subsidies. A major characteristic of IAMs is that their geophysical sector determines the mean surface temperature increase over the preindustrial level, which in turn determines damages via the damage function. Most existing IAMs assume that all future information is known. However, there are significant uncertainties in the climate and economic system, including parameter uncertainty, model uncertainty, climate tipping risks, and economic risks. For example, climate sensitivity, a well-known parameter measuring how much the equilibrium temperature will change if the atmospheric carbon concentration doubles, ranges from below 1 to more than 10 degrees Celsius in the literature. Climate damages are also uncertain: some researchers assume that climate damages are proportional to instantaneous output, while others assume that climate damages have a more persistent impact on economic growth. The spatial distribution of climate damages is uncertain as well. Climate tipping risks represent (nearly) irreversible climate events that may lead to significant changes in the climate system, such as a collapse of the Greenland ice sheet, and their conditions, probability of tipping, duration, and associated damage are also uncertain. Technological progress in carbon capture and storage, adaptation, renewable energy, and energy efficiency is uncertain as well. Future international cooperation and the implementation of international agreements to control climate change may vary over time, possibly due to economic risks, natural disasters, or social conflict. In the face of these uncertainties, policy makers have to make decisions that weigh important factors such as risk aversion, inequality aversion, and the sustainability of the economy and ecosystem. Solving this problem may require richer and more realistic models than standard IAMs as well as advanced computational methods. The recent literature has shown that these uncertainties can be incorporated into IAMs and may change optimal climate policies significantly.
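To illustrate just one of the uncertainties listed above, the sketch below propagates a distribution for climate sensitivity into equilibrium warming for an assumed future CO2 concentration. The lognormal parameters and the concentration figure are illustrative assumptions, not values from the article or from any particular IAM.

```python
import numpy as np

rng = np.random.default_rng(42)

# Climate sensitivity: equilibrium warming (deg C) per doubling of atmospheric CO2.
# The lognormal parameters below are assumptions chosen only for illustration.
sensitivity = rng.lognormal(mean=np.log(3.0), sigma=0.4, size=100_000)

co2_ratio = 700.0 / 280.0  # assumed future vs. preindustrial concentration (ppm)
warming = sensitivity * np.log2(co2_ratio)  # equilibrium warming implied by each draw

print(f"median: {np.median(warming):.1f} C, 95th percentile: {np.percentile(warming, 95):.1f} C")
```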
Article
Robert Mendelsohn
Greenhouse gas emissions are predicted to cause the climate to change. The additional radiation trapped by these gases gradually warms the oceans, which leads to warmer climates. How much future climates will change depends on the cumulative emissions of greenhouse gases, which in turn depend on the magnitude of future economic growth. The global warming caused by human-made emissions will likely affect many phenomena across the planet. The future damage from climate change is the net damage that these changes will cause to mankind.
Oceans are expected to expand with warmer temperatures, and glaciers and ice sheets are expected to melt, leading to sea level rise over time (a damage). Crops tend to have a hill-shaped relationship with temperature, implying that some farms will be hurt by warming and some farms will gain, depending on their initial temperature. Cooling expenditures are expected to increase (a damage), whereas heating expenditures are expected to fall (a benefit). Water is likely to become scarcer as the demand for water increases with temperature (a damage). Warming is expected to cause ecosystems to migrate poleward. Carbon fertilization is expected to cause forest ecosystems to become more productive, but forest fires are expected to be more frequent so that it is uncertain whether forest biomass will increase or decrease. The expected net effect of all these forest changes is an increase in timber supply (a benefit). It is not known how ecosystem changes will alter overall enjoyment of ecosystems. Warmer summer temperatures will cause health effects from heat waves (a damage), but even larger reductions in health effects from winter cold (a benefit). Large tropical cyclones are expected to get stronger, which will cause more damage from floods and high winds. Winter recreation based on snow will be harmed, but summer outdoor recreation will enjoy a longer season, leading to a net benefit.
The net effect of historic climate change over the last century has been beneficial. The beneficial effects of climate change have outweighed the harmful effects across the planet. However, the effects have not been evenly distributed across the planet, with more benefits in the mid to high latitudes and more damage in the low latitudes. The net effect of future climate is expected to turn harmful as benefits will shrink and damages will become more pervasive. A large proportion of the damage from climate change will happen in the low latitudes, where temperatures will be the highest.
Measurements of the economic impact of climate change have changed over time. Early studies focused only on the harmful consequences of climate change; including climate effects that are beneficial has reduced net damage. Early studies also assumed no adaptation to climate change; including adaptation has reduced the net harm from climate change. Catastrophe has been assumed to be a major motivation for near-term mitigation. However, massive sea level rise, ecosystem collapse, and high climate sensitivity are all slow-moving phenomena that take many centuries to unfold, suggesting a modest present value.
Article
Lori DiPrete Brown
Local governance is a key focal point for achieving the United Nations Sustainable Development Goals (SDGs). National and global initiatives encourage SDG governance by promoting the overall SDG framework, targets, and indicators and by providing data, rankings, and visualizations of the performance of nations, states, and selected cities. Soon after the SDGs were adopted in 2015, efforts turned toward localization—that is, a focus on local governance as the engine for progress and innovation, which engendered many efforts to develop indicators to measure sustainability. In addition to this emphasis on measurement strategies, the use of the SDGs as a holistic and integrated framework that is essential for improvement, implementation, and innovation began to emerge. Despite challenges to SDG-based local governance, promising strategies that exemplify “SDG 360 Thinking” have emerged. These approaches reflect practical insights related to political incentives, local relevance, and simplicity or feasibility. They address key aspects of the planning and implementation cycle and echo evidence-based approaches derived from systems thinking and implementation science. SDG 360 Thinking uses a holistic, systematic approach to focus on the identification of co-benefits; the reduction of harm, waste, and error; and equity trade-offs. The clarity of purpose, systematic approach, and revelatory power of SDG 360 Thinking, combined with a practical, inclusive, and robust economics, offer the promise of enabling local governments to realize the potential of the SDGs.
Article
David Wolf and H. Allen Klaiber
The value of a differentiated product is simply the sum of its parts. This concept is easily observed in housing markets, where the price of a home is determined by the underlying bundle of attributes that define it and by the price households are willing to pay for each attribute. These prices are referred to as implicit prices because their value is revealed indirectly through the price of another product (typically a home), and they are of interest because they reveal the value of goods, such as nearby public amenities, that are not traded directly and whose value would otherwise remain unknown.
This concept was first formalized into a tractable theoretical framework by Rosen and is known as the hedonic pricing method. The two-stage hedonic method requires the researcher to map housing attributes into housing prices using an equilibrium price function. Information recovered from the first stage is then used to estimate inverse demand functions for nonmarket goods in the second stage, which are required for nonmarginal welfare evaluation. Researchers have rarely implemented the second stage, however, due to limited data availability, specification concerns, and the inability to correct for simultaneity bias between price and quality. As policies increasingly seek to deliver large, nonmarginal changes in public goods, the need to estimate the hedonic second stage is becoming more pressing. Greater effort therefore needs to be made to establish a set of best practices for the second stage, many of which can be developed using methods established in the extensive first-stage literature.
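As a minimal sketch of the first stage only (the variable names, functional form, and data file below are hypothetical, and the specification omits the spatial controls and fixed effects a real application would include), a log-linear hedonic price function can be estimated and used to read off each home's marginal implicit price for a nearby amenity:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical transactions data: one row per home sale.
homes = pd.read_csv("home_sales.csv")
homes["log_price"] = np.log(homes["sale_price"])

# First stage: estimate the equilibrium hedonic price function (log-linear form assumed).
first_stage = smf.ols(
    "log_price ~ sqft + bedrooms + age + dist_to_park", data=homes
).fit()

# In a log-linear specification, the marginal implicit price of the amenity for a
# given home is the estimated coefficient multiplied by that home's sale price.
beta_park = first_stage.params["dist_to_park"]
homes["implicit_price_park"] = beta_park * homes["sale_price"]

# A second stage would regress these implicit prices on the amenity level and
# demand shifters (income, demographics) to recover inverse demand functions.
print(first_stage.summary())
```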