Article

The development of a simple framework with optimizing agents and nominal rigidities is the point of departure for the analysis of three questions about fiscal and monetary policies in an open economy. The first question concerns the optimal monetary policy targets in a world with trade and financial links. In the baseline model, the optimal cooperative monetary policy is fully inward-looking and seeks to stabilize a combination of domestic inflation and the output gap. The equivalence with the closed economy case, however, ends if countries do not cooperate, if firms price goods in the currency of the market of destination, and if international financial markets are incomplete. In these cases, external variables that capture international misalignments relative to the first best become relevant policy targets. The second question is about the empirical evidence on the international transmission of government spending shocks. In response to a positive innovation, the real exchange rate depreciates and the trade balance deteriorates. Standard open economy models struggle to match this evidence. Non-standard consumption preferences and a detailed fiscal adjustment process constitute two ways to address the puzzle. The third question deals with the trade-offs associated with an active use of fiscal policy for stabilization purposes in a currency union. The optimal policy assignment mandates the monetary authority to stabilize union-wide aggregates and the national fiscal authorities to respond to country-specific shocks. Permanent changes in government debt allow the distortionary effects of volatile taxes to be smoothed. Clear and credible fiscal rules may be able to strike the appropriate balance between stabilization objectives and moral hazard issues.

Article

While traditional economic literature often sees nominal variables as irrelevant for the real economy, there is a vast body of analytical and empirical economic work recognizing that, to the extent they exert a critical influence on the macroeconomic environment through a multiplicity of channels, exchange rate policies (ERP) have important consequences for development. ERP influences economic development in various ways: through its incidence on real variables such as investment and growth (and growth volatility) and on nominal aspects such as relative prices or financial depth that, in turn, affect output growth or income distribution, among other development goals. Additionally, through the expected distribution of the real exchange rate, ERP indirectly influences dimensions such as trade or financial fragility and explains, at least partially, the adoption of the euro—an extreme case of a fixed exchange rate arrangement—or the preference for floating exchange rates in the absence of financial dollarization. Importantly, exchange rate pegs have been (and, in many countries, still are) widely used as a nominal anchor to contain inflation in economies where nominal volatility induces agents to use the exchange rate as an implicit unit of account. All of these channels have been reflected to varying degrees in the choice of exchange rate regimes in recent history. The empirical literature on the consequences of ERP has been plagued by definitional and measurement problems. Whereas few economists would contest the textbook definition of canonical exchange rate regimes (fixed regimes involve a commitment to keep the nominal exchange rate at a given level; floating regimes imply no market intervention by the monetary authorities), reality is more nuanced: Pure floats are hard to find, and the empirical distinction between alternative flexible regimes is not always clear. Moreover, there are many different degrees of exchange rate commitments as well as many alternative anchors, sometimes undisclosed. Finally, it is not unusual for a country that officially pegs its currency to realign its parity if it finds the constraints on monetary policy or economic activity too taxing. By the same token, a country that commits to a float may choose to intervene in the foreign exchange market to dampen exchange rate fluctuations. The regime of choice depends critically on the situation of each country at a given point in time as much as on the evolution of the global environment. Because both the ERP debate and real-life choices incorporate national and time-specific aspects, the focus of the debate has evolved over time. In the post-World War II years, under the Bretton Woods agreement, most countries pegged their currencies to the U.S. dollar, which in turn was kept convertible to gold.
In the post-Bretton Woods years, after August 1971 when the United States unilaterally abandoned the convertibility of the dollar, thus bringing the Bretton Woods system to an end, individual choices of ERP were intimately related to the global and local historical contexts, according to whether policy prioritized the use of the exchange rate as a nominal anchor (in favor of pegged or superfixed exchange rates, with dollarization or the launch of the euro as two extreme examples), as a tool to enhance price competitiveness (as in export-oriented developing countries like China in the 2000s), or as a countercyclical buffer (in favor of floating regimes with limited intervention, the prevalent view in the developed world). Similarly, the declining degree of financial dollarization, combined with the improved quality of monetary institutions, explains the growing popularity of inflation targeting with floating exchange rates in emerging economies. Finally, prudential leaning-against-the-wind intervention to counter mean-reverting global financial cycles and exchange rate swings motivates a more active—and increasingly mainstream—ERP in the late 2000s. The fact that most medium and large developing economies (and virtually all industrial ones) revealed in the 2000s a preference for exchange rate flexibility simply reflects this evolution. Is the combination of inflation targeting (IT) and countercyclical exchange rate intervention a new paradigm? It is still too early to judge. On the one hand, pegs still account for more than half of IMF reporting countries—particularly small ones—indicating that exchange rate anchors are still favored by small open economies that give priority to the trade dividend of stable exchange rates and find the conduct of an autonomous monetary policy too costly, due to lack of human capital, scale, or an important non-tradable sector. On the other hand, the work and the empirical evidence on the subject, particularly after the recession of 2008–2009, highlight a number of developments in the way advanced and emerging economies think of the impossible trinity that, in a context of deepening financial integration, cast doubt on the IT paradigm, place the dilemma between nominal and real stability back at the forefront, and postulate an IT 2.0, which includes selective exchange rate interventions as a workable compromise. At any rate, the exchange rate debate is still alive and open.

Article

Sushant Acharya and Paolo Pesenti

Global policy spillovers can be defined as the effect of policy changes in one country on economic outcomes in other countries. The literature has mainly focused on monetary policy interdependencies and has identified three channels through which policy spillovers can materialize. The first is the expenditure-shifting channel—a monetary expansion in one country depreciates its currency, making its goods cheaper relative to those in other countries and shifting global demand toward domestic tradable goods. The second is the expenditure-changing channel—expansionary monetary policy in one country raises both domestic and foreign expenditure. The third is the financial spillovers channel—expansionary monetary policy in one country eases financial conditions in other economies. The literature generally finds that the net transmission effect is positive but small. However, estimated spillovers vary widely across countries and over time. In the aftermath of the Great Recession, the policy debate has devoted special attention to the possibility that the magnitude and sign of international spillovers might have changed in an environment of low interest rates worldwide, as the expenditure-shifting channel becomes more relevant when the effective lower bound reduces the effectiveness of conventional monetary policies.

Article

Trade policy is one determining factor of 19th-century globalization, alongside transport and communication innovations and broader institutional changes that made worldwide commodity and factor flows possible. Four broad periods, or trade policy regimes, can be discerned at the European level. The first starts at the end of the French Revolutionary and Napoleonic wars, which had led to many disruptions in trade relations. Governments tried to recover from the financial impact of the wars and to mitigate the adjustment shocks to domestic producers that came with the end of the wars. Very restrictive trade policies were thus adopted in most places and only slowly dismantled over the following decades as some of the welfare costs of, for example, agricultural protection became evident. The second period ran from the mid-1840s, which saw the liberalization of protective grain tariffs in many European countries, to the mid-1870s, when trade liberalization reached its maximum. This period witnessed unilateral trade liberalizations, but is most famous for the spread of a network of bilateral trade agreements across Europe in the wake of the Cobden–Chevalier treaty between France and the United Kingdom in 1860. From the 1870s, industrial and commercial crises and falling prices in agriculture due to global market integration led governments to search for solutions to these policy challenges. Many European countries thus increased protection for agriculture and for manufactured goods in which domestic import-competing producers struggled. At the same time, demands for renegotiations threatened the treaty network, and lapsing agreements were only provisionally prolonged. From the late 1880s, the struggle between protection for import-competing producers and market access abroad for export-oriented producers led to internal and external conflicts over trade policy in many countries, including trade (or tariff) “wars.” A renewed network of less ambitious trade treaties than those of the 1860s restored a fragile equilibrium from the early 1890s, to be renewed and renegotiated roughly every 12 years as treaties approached their expiration date. At the country and commodity level, it is readily apparent that these broadly common European shifts were more pronounced in some countries than in others. For example, the United Kingdom, the Netherlands, Switzerland, and Belgium shifted more decisively to free trade and remained there, while liberalization was much less pronounced and more decisively undone in Portugal, Spain, Russia, and the Habsburg monarchy. The experiences of the Scandinavian countries, Germany, and France lie somewhere in between. Turkey and the countries that gained independence from the Ottoman Empire in the 19th century started as (forced) free traders and from the 1880s increased their duties, in part to meet growing fiscal demands. At the commodity level, tariffs on raw materials remained generally low and did not follow the protectionist backlash that affected foodstuffs. One exception was (initially) “tropical” goods such as sugar, coffee, tea, and tobacco, on which many countries levied high tariffs to extract fiscal revenue. For manufactured goods, liberalization and protectionist backlash were milder than in agriculture, although there are many exceptions to this rule.

Article

Integrated assessment models (IAMs) of the climate and economy analyze the impact and efficacy of policies designed to control climate change, such as carbon taxes and subsidies. A major characteristic of IAMs is that their geophysical sector determines the mean surface temperature increase over the preindustrial level, which in turn determines climate damages via a damage function. Most existing IAMs assume that all future information is known. However, there are significant uncertainties in the climate and economic system, including parameter uncertainty, model uncertainty, climate tipping risks, and economic risks. For example, climate sensitivity, a well-known parameter that measures how much the equilibrium temperature will change if the atmospheric carbon concentration doubles, ranges from below 1°C to more than 10°C in the literature. Climate damages are also uncertain: some researchers assume that climate damages are proportional to instantaneous output, while others assume that climate damages have a more persistent impact on economic growth. The spatial distribution of climate damages is uncertain as well. Climate tipping risks represent (nearly) irreversible climate events that may lead to significant changes in the climate system, such as the collapse of the Greenland ice sheet, whose conditions, probability of tipping, duration, and associated damage are also uncertain. Technological progress in carbon capture and storage, adaptation, renewable energy, and energy efficiency is uncertain as well. Future international cooperation and implementation of international agreements in controlling climate change may vary over time, possibly due to economic risks, natural disasters, or social conflict. In the face of these uncertainties, policy makers have to make decisions that weigh important factors such as risk aversion, inequality aversion, and sustainability of the economy and ecosystem. Solving this problem may require richer and more realistic models than standard IAMs, as well as advanced computational methods. The recent literature has shown that these uncertainties can be incorporated into IAMs and may change optimal climate policies significantly.
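To fix ideas on how the two objects named above interact, here is a minimal sketch in standard notation (the symbols are illustrative and not drawn from any particular IAM): equilibrium warming responds logarithmically to the atmospheric carbon concentration, and a damage function maps warming into an output loss.

```latex
% Equilibrium temperature increase over the preindustrial level, given
% climate sensitivity S and carbon concentration C relative to the
% preindustrial level C_0 (a doubling, C = 2C_0, yields \Delta T = S):
\Delta T = S \,\log_2\!\left(\frac{C}{C_0}\right)
% A damage function D maps warming into the fraction of output lost,
% e.g., a common quadratic form with illustrative parameters \theta_1, \theta_2:
Y^{\mathrm{net}} = \bigl(1 - D(\Delta T)\bigr)\,Y,
\qquad
D(\Delta T) = \theta_1 \,\Delta T + \theta_2 \,\Delta T^2 .
```

On this reading, the wide range of estimates for S translates directly into a wide range for ΔT, and hence for damages, which is one reason parameter uncertainty matters so much for optimal policy.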

Article

The house price boom that has been present in most Chinese cities since the early 2000s has triggered substantial interest in the role that China’s housing policy plays in its housing market and macroeconomy, with an extensive literature employing both empirical and theoretical perspectives developed over the past decade. This research finds that the privatization of China’s housing market, which encouraged households living in state-owned housing to purchase their homes at prices far below their market value, contributed to a rapid increase in homeownership beginning in the mid-1990s. Housing market privatization has also led to a significant increase in both housing and nonhousing consumption, but these benefits are unevenly distributed across households. With the policy goal of making homeownership affordable for the average household, the Housing Provident Fund contributes positively to homeownership rates. By contrast, the effectiveness of housing policies to make housing affordable for low-income households has been weaker in recent years. Moreover, a large body of empirical research shows that the unintended consequence of housing market privatization has been a persistent increase in housing prices since the early 2000s, accompanied by soaring land prices, high vacancy rates, and high price-to-income and price-to-rent ratios. The literature has differing views regarding the sustainability of China’s housing boom. On a theoretical front, economists find that rising housing demand, due to both consumption and investment purposes, is important to understanding China’s prolonged housing boom, and that land-use policy, which influences the supply side of the housing market, lies at the center of China’s housing boom. However, regulatory policies, such as housing purchase restrictions and property taxes, have had mixed effects on the housing market in different cities. In addition to China’s housing policy and its direct effects on the nation’s housing market, research finds that China’s housing policy impacts its macroeconomy via the transmission of house price dynamics into the household and corporate sectors. High housing prices have a heterogeneous impact on the consumption and savings of different types of households but tend to discourage household labor supply. Meanwhile, rising house prices encourage housing investment by non–real-estate firms, which crowds out nonhousing investment, lowers the availability of noncollateralized business loans, and reduces productive efficiency via the misallocation of capital and managerial talent.

Article

Chao Gu, Han Han, and Randall Wright

This article provides an introduction to New Monetarist Economics. This branch of macro and monetary theory emphasizes imperfect commitment, information problems, and sometimes (endogenous) spatial separation as key frictions in the economy in order to derive institutions like monetary exchange or financial intermediation endogenously. We present three generations of models in the development of New Monetarism. The first studies an environment in which agents meet bilaterally and lack commitment, which allows money to be valued endogenously as a means of payment. In this setup, both goods and money are indivisible to keep things tractable. Second-generation models relax the assumption of indivisible goods and use bargaining theory (or related mechanisms) to endogenize prices. Variations of these models are applied to financial asset markets and intermediation. Assets and goods are both divisible in third-generation models, which makes them better suited to policy analysis and empirical work. This framework can also be used to help understand financial markets and liquidity.

Article

Chao Gu, Han Han, and Randall Wright

The effects of news (i.e., information innovations) are studied in dynamic general equilibrium models where liquidity matters. As a leading example, news can be announcements about monetary policy directions. In three standard theoretical environments—an overlapping generations model of fiat currency, a new monetarist model accommodating multiple payment methods, and a model of unsecured credit—transition paths are constructed between an announcement and the date at which events are realized. Although the economics is different in each case, news about monetary policy can induce volatility in financial and other markets, with transitions displaying booms, crashes, and cycles in prices, quantities, and welfare. This is not the same as volatility based on self-fulfilling prophecies (e.g., cyclic or sunspot equilibria) studied elsewhere. Instead, the focus is on the unique equilibrium that is stationary when parameters are constant but still delivers complicated dynamics in simple environments due to information and liquidity effects. This is true even for classically neutral policy changes. The induced volatility can be bad or good for welfare, but using policy to exploit this in practice seems difficult because outcomes are very sensitive to timing and parameters. The approach can be extended to include news about real factors, as illustrated through examples.

Article

Daniel Eisenberg and Ramesh Raghavan

One of the most important unanswered questions for any society is how best to invest in children’s mental health. Childhood is a sensitive and opportune period in which to invest in programs and services that can mitigate a range of downstream risks for health and mental health conditions. Investing in such programs and services will require a shift from focusing solely on reducing deficits to also enhancing the child’s skills and other assets. Economic evaluation is crucial for determining which programs and services represent optimal investments. Several registries curate lists of programs with strong evidence of effectiveness; many of these programs also have evidence of positive benefit-cost differentials, although the economic evidence is typically limited and uncertain. Even the programs with the strongest evidence currently reach only a small fraction of the young people who would potentially benefit. Thus, it is important to understand and address the factors that impede or facilitate the implementation of best practices. One example of a program that represents a promising investment is home visiting, in which health workers visit the homes of new parents to advise on parenting skills, child needs, and the home environment. Another example is social emotional learning programs delivered in schools, where children are taught to regulate emotions, manage behaviors, and enhance relationships with peers. Investing in these and other programs with a strong evidence base, and ensuring their faithful implementation in practice settings, can produce improvements in a range of mental health, academic, and social outcomes for children, extending into their lives as adults.

Article

Alessandro Rebucci and Chang Ma

This paper reviews selected post–Global Financial Crisis theoretical and empirical contributions on capital controls and identifies three theoretical motives for their use: pecuniary externalities in models of financial crises, aggregate demand externalities in New Keynesian models of the business cycle, and terms-of-trade manipulation in open-economy models with pricing power. Pecuniary and demand externalities offer the most compelling case for the adoption of capital controls, but macroprudential policy can address the same distortions, so capital controls are generally not the only instrument that can do the job. Evaluated through the lens of the new theories, the empirical evidence reviewed suggests that capital controls can have the intended effects, even though the extant literature is inconclusive as to whether the effects documented amount to a net gain or loss in welfare terms. Terms-of-trade manipulation also provides a clear-cut theoretical case for the use of capital controls, but this motive is less compelling because of the spillover and coordination issues inherent in the use of controls on capital flows for this purpose. Perhaps not surprisingly, only a handful of countries have used capital controls in a countercyclical manner, while many have adopted macroprudential policies. This suggests that capital control policy might entail costs beyond increased financing costs, such as signaling poor future policy quality, leakages, and spillovers.

Article

Michael Drummond, Rosanna Tarricone, and Aleksandra Torbica

There are a number of challenges in the economic evaluation of medical devices (MDs). They are typically less regulated than pharmaceuticals, and the clinical evidence requirements for market authorization are generally lower. There are also specific characteristics of MDs, such as the device–user interaction (learning curve), the incremental nature of innovation, the dynamic nature of pricing, and the broader organizational impact. Therefore, a number of initiatives need to be taken in order to facilitate the economic evaluation of MDs. First, the regulatory processes for MDs need to be strengthened and more closely aligned to the needs of economic evaluation. Second, the methods of economic evaluation need to be enhanced by improving the analysis of the available clinical data, establishing high-quality clinical registries, and better recognizing MDs’ specific characteristics. Third, the market entry and diffusion of MDs need to be better managed by understanding the key influences on MD diffusion and linking diffusion with cost-effectiveness evidence through the use of performance-based risk-sharing arrangements.

Article

Structural vector autoregressions (SVARs) represent a prominent class of time series models used for macroeconomic analysis. The model consists of a set of multivariate linear autoregressive equations characterizing the joint dynamics of economic variables. The residuals of these equations are combinations of the underlying structural economic shocks, which are assumed to be orthogonal to each other. With a minimal set of restrictions, these relations can be estimated—so-called shock identification—and the variables can be expressed as linear functions of current and past structural shocks. The coefficients of these functions, called impulse response functions, represent the dynamic response of model variables to shocks. Several ways of identifying structural shocks have been proposed in the literature: short-run restrictions, long-run restrictions, and sign restrictions, to mention a few. SVAR models have been extensively employed to study the transmission mechanisms of macroeconomic shocks and to test economic theories. Special attention has been paid to monetary and fiscal policy shocks, as well as to nonpolicy shocks like technology and financial shocks. In recent years, many advances have been made in terms of both theory and empirical strategies. Several works have extended the standard model to incorporate new features like large information sets, nonlinearities, and time-varying coefficients. New strategies to identify structural shocks have been designed, and new inference methods have been introduced.
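As a minimal sketch of this pipeline under short-run (recursive/Cholesky) identification—the lag length, horizon, and function name are illustrative assumptions, not a reference implementation:

```python
import numpy as np

def var_irf(y, p=2, horizon=12):
    """Estimate a VAR(p) by OLS and return orthogonalized impulse responses.

    y : (T, n) array of time series; p : lag order; horizon : IRF length.
    Identification uses a Cholesky factorization of the residual covariance,
    i.e., short-run (recursive) restrictions.
    """
    T, n = y.shape
    # Regressor matrix [1, y_{t-1}, ..., y_{t-p}] for t = p, ..., T-1.
    X = np.hstack([np.ones((T - p, 1))] +
                  [y[p - k - 1:T - k - 1] for k in range(p)])
    Y = y[p:]
    B = np.linalg.lstsq(X, Y, rcond=None)[0]            # OLS coefficients
    U = Y - X @ B                                        # reduced-form residuals
    Sigma = U.T @ U / (U.shape[0] - X.shape[1])          # residual covariance
    A = [B[1 + k * n:1 + (k + 1) * n].T for k in range(p)]  # lag matrices A_1..A_p
    P = np.linalg.cholesky(Sigma)                        # impact matrix: u_t = P e_t
    # MA coefficients Psi_h via the recursion Psi_h = sum_k A_k Psi_{h-k};
    # the orthogonalized IRF at horizon h is Psi_h @ P.
    Psi = [np.eye(n)]
    for h in range(1, horizon + 1):
        Psi.append(sum(A[k] @ Psi[h - k - 1] for k in range(min(p, h))))
    return np.array([Ph @ P for Ph in Psi])              # shape (horizon+1, n, n)
```

Under this recursive scheme, `irfs[h, i, j]` is the response of variable i at horizon h to a one-standard-deviation structural shock j, so the ordering of the columns of `y` embodies the identifying restrictions.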

Article

Mostafa Beshkar and Eric Bond

International trade agreements have played a significant role in the reduction of trade barriers that has taken place since the end of World War II. One objective of the theoretical literature on trade agreements is to address the question of why bilateral and multilateral trade agreements, rather than simple unilateral actions by individual countries, have been required to reduce trade barriers. The predominant explanation has been the terms-of-trade theory, which argues that unilateral tariff policies lead to a prisoner’s dilemma due to the negative effect of a country’s tariffs on its trading partners. Reciprocal tariff reductions through a trade agreement are required to obtain tariff reductions that improve on the noncooperative equilibrium. An alternative explanation, the commitment theory of trade agreements, focuses on the use of external enforcement under a trade agreement to discipline domestic politics. A second objective of the theoretical literature has been to understand the design of trade agreements. Insights from contract theory are used to study the various flexibility mechanisms embodied in trade agreements, including contingent protection measures such as safeguards and antidumping, and unilateral flexibility through tariff overhang. The literature also addresses the enforcement of agreements in the absence of an external enforcement mechanism. Theories of the dispute settlement process of the WTO portray it as an institution with an informational role that facilitates coordination among parties with incomplete information about the state of the world and the nature of the actions taken by each signatory. Finally, the literature examines whether the ability to form preferential trade agreements serves as a stumbling block or a building block to multilateral liberalization.

Article

Financial protection is claimed to be an important objective of health policy. Yet there is a lack of clarity about what it is and no consensus on how to measure it. This impedes the design of efficient and equitable health financing. Arguably, the objective of financial protection is to shield nonmedical consumption from the cost of healthcare. The instruments are formal health insurance and public finances, as well as informal and self-insurance mechanisms that do not impair earnings potential. There are four main approaches to the measurement of financial protection: the extent of consumption smoothing over health shocks, the risk premium (willingness to pay in excess of a fair premium) to cover uninsured medical expenses, catastrophic healthcare payments, and impoverishing healthcare payments. The first of these does not restrict attention to medical expenses, which limits its relevance to health financing policy. The second rests on assumptions about risk preferences. No measure treats medical expenses that are financed through informal insurance and self-insurance instruments in an entirely satisfactory way. By ignoring these sources of imperfect insurance, the catastrophic payments measure overstates the impact of out-of-pocket medical expenses on living standards, while the impoverishment measure does not credibly identify poverty caused by them. It is better thought of as a correction to the measurement of poverty.
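As a minimal sketch of the last two measures (the 10% threshold, poverty line, and variable names are illustrative assumptions, not taken from the article):

```python
import numpy as np

def catastrophic_incidence(oop, consumption, threshold=0.10):
    """Share of households whose out-of-pocket (OOP) health payments
    exceed a threshold share of total consumption."""
    return np.mean(oop / consumption > threshold)

def impoverishing_incidence(oop, consumption, poverty_line):
    """Share of households pushed below the poverty line by OOP payments:
    non-poor on gross consumption, poor once OOP spending is netted out."""
    gross_poor = consumption < poverty_line
    net_poor = (consumption - oop) < poverty_line
    return np.mean(~gross_poor & net_poor)

# Illustrative data: 5 households (consumption and OOP in the same currency).
cons = np.array([120.0, 80.0, 60.0, 200.0, 95.0])
oop = np.array([2.0, 15.0, 5.0, 10.0, 30.0])
print(catastrophic_incidence(oop, cons, threshold=0.10))    # 0.4
print(impoverishing_incidence(oop, cons, poverty_line=90))  # 0.2
```

The critique in the abstract can be read directly off these formulas: both measures treat observed OOP spending as the welfare loss, so they ignore whether that spending was financed by informal insurance or self-insurance rather than by cutting nonmedical consumption.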

Article

While machine learning (ML) methods have received a lot of attention in recent years, these methods are primarily designed for prediction. Empirical researchers conducting policy evaluations are, on the other hand, preoccupied with causal problems, trying to answer counterfactual questions: what would have happened in the absence of a policy? Because these counterfactuals can never be directly observed (the “fundamental problem of causal inference”), prediction tools from the ML literature cannot be readily used for causal inference. In the last decade, major innovations have taken place incorporating supervised ML tools into estimators for causal parameters such as the average treatment effect (ATE). This holds the promise of attenuating model misspecification issues and increasing transparency in model selection. One particularly mature strand of the literature includes approaches that incorporate supervised ML in the estimation of the ATE of a binary treatment, under the unconfoundedness and positivity assumptions (also known as the exchangeability and overlap assumptions). This article begins by reviewing popular supervised machine learning algorithms, including tree-based methods and the lasso, as well as ensembles, with a focus on the Super Learner. Then, some specific uses of machine learning for treatment effect estimation are introduced and illustrated, namely (1) creating balance among treated and control groups, (2) estimating so-called nuisance models (e.g., the propensity score, or conditional expectations of the outcome) in semi-parametric estimators that target causal parameters (e.g., targeted maximum likelihood estimation or the double ML estimator), and (3) using machine learning for variable selection in situations with a high number of covariates. Since there is no universal best estimator, whether parametric or data-adaptive, it is best practice to adopt a semi-automated approach that can select the models best supported by the observed data, thus attenuating the reliance on subjective choices.
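To make use (2) concrete, here is a minimal sketch of a cross-fitted, doubly robust (AIPW-style) ATE estimator in the spirit of the double ML approach; the scikit-learn learners, clipping constant, and variable names are illustrative assumptions rather than the article’s specification:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

def aipw_ate(X, a, y, n_splits=5, eps=0.01):
    """Cross-fitted augmented inverse-probability-weighted (AIPW) estimate
    of the ATE of a binary treatment a on outcome y given covariates X."""
    psi = np.zeros(len(y))
    for train, test in KFold(n_splits, shuffle=True, random_state=0).split(X):
        # Nuisance 1: propensity score e(x) = P(A = 1 | X = x),
        # clipped away from 0 and 1 to enforce the positivity assumption.
        e = LogisticRegression(max_iter=1000).fit(X[train], a[train])
        e_hat = np.clip(e.predict_proba(X[test])[:, 1], eps, 1 - eps)
        # Nuisance 2: outcome regressions m1(x), m0(x), fit on each arm.
        m1 = RandomForestRegressor(random_state=0).fit(
            X[train][a[train] == 1], y[train][a[train] == 1])
        m0 = RandomForestRegressor(random_state=0).fit(
            X[train][a[train] == 0], y[train][a[train] == 0])
        m1_hat, m0_hat = m1.predict(X[test]), m0.predict(X[test])
        # Efficient influence function evaluated on the held-out fold.
        psi[test] = (m1_hat - m0_hat
                     + a[test] * (y[test] - m1_hat) / e_hat
                     - (1 - a[test]) * (y[test] - m0_hat) / (1 - e_hat))
    return psi.mean()
```

Cross-fitting (estimating the nuisance models on one fold and evaluating on another) is what allows flexible, data-adaptive learners to be plugged in without invalidating inference on the causal parameter, and double robustness means the estimate remains consistent if either the propensity model or the outcome model is correctly specified.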

Article

While economists overwhelmingly favor free trade, even unilateral free trade, because of the gains realizable from specialization and the exploitation of comparative advantage, in fact international trading relations are structured by a complex body of multilateral and preferential trade agreements. The article outlines the case for multilateral trade agreements and the non-discrimination principle that they embody, in the form of both the Most Favored Nation principle and the National Treatment principle; non-discrimination has been widely advocated as supporting both geopolitical goals (reducing economic factionalism) and economic goals (ensuring the full play of theories of comparative advantage undistorted by discriminatory trade treatment). Despite the virtues of multilateral trade agreements, preferential trade agreements (PTAs), authorized from the outset under the GATT, have proliferated in recent years, even though they are inherently discriminatory between members and non-members, provoking vigorous debates as to whether (a) PTAs are trade-creating or trade-diverting; (b) they increase transaction costs in international trade; and (c) they undermine the future course of multilateral trade liberalization. A further and similarly contentious derogation from the principle of non-discrimination under the multilateral system is Special and Differential Treatment for developing countries: since the mid-1950s developing countries have been given much greater latitude than developed countries to engage in trade protectionism on the import side in order to promote infant industries, and since the mid-1960s they have benefited on the export side from non-reciprocal trade concessions by developed countries on products of actual or potential export interest to them. Beyond debates over the strengths and weaknesses of multilateral trade agreements and the two major derogations therefrom, further debates surround the appropriate scope of trade agreements, in particular the expansion of their scope in recent decades to address divergences or incompatibilities across a wide range of domestic regulatory and related policies that arguably create frictions in cross-border trade and investment and hence constitute an impediment to it. The article goes on to consider contemporary fair trade versus free trade debates, including concerns over trade deficits, currency manipulation, export subsidies, misappropriation of intellectual property rights, and lax labor or environmental standards. The article concludes with a consideration of the case for a larger scope for plurilateral trade agreements internationally, and for a larger scope for active labor market policies domestically to mitigate transition costs from trade.

Article

“Antitrust” or “competition law,” a set of policies now existing in most market economies, largely consists of two or three specific rules applied in more or less the same way in most nations. The law prohibits (1) multilateral agreements, (2) unilateral conduct, and (3) mergers or acquisitions, whenever any of them is judged to interfere unduly with the functioning of healthy markets. Most jurisdictions now apply, or purport to apply, these rules in the service of some notion of economic “efficiency,” more or less as defined in contemporary microeconomic theory. The law has ancient roots, however, and over time it has varied a great deal in its details. Moreover, even in its modern form, the policy and its goals remain controversial. In some sense most modern controversy arises from, or is in reaction to, the major intellectual reconceptualization of the law and its purposes that began in the 1960s. Specifically, academic critics in the United States urged revision of the law’s goals, such that it should serve only a narrowly defined microeconomic goal of allocative efficiency, whereas it had traditionally also sought to prevent accumulations of political power and to protect small firms, entrepreneurs, and individual liberty. While those critics enjoyed significant success in the United States, and to a somewhat lesser degree in Europe and elsewhere, the results remain contested. Specific disputes continue over the law’s general purpose, whether it poses net benefits, how a series of specific doctrines should be fashioned, how it should be enforced, and whether it really is appropriate for developing and small-market economies.

Article

To guide climate change policymaking, we need to understand how technologies and behaviors should be transformed to avoid dangerous levels of global warming, and what the implications of failing to bring about such transformation might be. Integrated assessment models (IAMs) are computational tools developed by engineers, earth and natural scientists, and economists to provide projections of interconnected human and natural systems under various conditions. These models help researchers understand the possible implications of climate inaction. They evaluate the effects of national and international policies on global emissions, and they devise optimal emissions trajectories in line with long-term temperature targets, along with their implications for infrastructure, investment, and behavior. This research highlights the deep interconnection between climate policies and other sustainable development objectives. Having evolved around one or more of these key policy questions, the large family of IAMs includes a wide array of tools that incorporate multiple dimensions and advances from a range of scientific fields.

Article

The rise in obesity and other food-related chronic diseases has prompted public-health officials of local communities, national governments, and international institutions to pay attention to the regulation of food supply and consumer behavior. A wide range of policy interventions has been proposed and tested since the early 21st century in various countries. The most prominent are food taxation, health education, nutritional labeling, behavioral interventions at the point of decision, advertising regulation, and regulations of food quality and trade. While the standard neoclassical approach to consumer rationality provides limited arguments in favor of public regulation, the recent development of behavioral economics research extends the scope of regulation to many marketing practices of the food industry. In addition, behavioral economics provides arguments in favor of taxation, easy-to-use front-of-pack labels, and the use of nudges for altering consumer choices. A selective but careful review of the empirical literature on taxation, labeling, and nudges suggests that a policy mixing these tools may produce some health benefits. More specifically, soft-drink taxation, front-of-pack labeling policies, regulations of marketing practices, and eating nudges based on affect or behavior manipulations are often effective methods for reducing unhealthy eating. The economic research faces important challenges. First, the lack of proper control groups and of exogenous sources of variation in policy variables makes evaluation very difficult. Identification is challenging as well, with data covering short time periods over which markets are observed around slowly moving equilibria; truly exogenous supply or demand shocks are rare events. Second, structural models of consumer choice cannot provide accurate assessments of the welfare benefits of public policies because they assume perfectly rational agents and often ignore the dynamic aspects of food decisions, especially consumer concerns over health. Obtaining better welfare evaluations of policies is a priority. Third, there is a lack of research on the food industry’s response to public policies. Some studies implement empirical industrial organization models to infer the industry’s strategic reactions from market data. A fruitful avenue is to extend this approach to analyze other key dimensions of industrial strategy, especially decisions regarding the nutritional quality of food. Finally, the implementation of nutritional policies yields systemic consequences that may be underestimated: they give rise to conflicts between public health and trade objectives and alter the business models of the food sector. This may greatly limit the external validity of ex-ante empirical approaches. Future work may benefit from household-, firm-, and product-level data collected in rapidly developing economies where food markets are characterized by rapid transitions, supply is often more volatile, and exogenous shocks occur more frequently.

Article

George W. Evans and Bruce McGough

While rational expectations (RE) remains the benchmark paradigm in macroeconomic modeling, bounded rationality, especially in the form of adaptive learning, has become a mainstream alternative. Under the adaptive learning (AL) approach, economic agents in dynamic, stochastic environments are modeled as adaptive learners forming expectations and making decisions based on forecasting rules that are updated in real time as new data become available. Their decisions are then coordinated each period via the economy’s markets and other relevant institutional architecture, resulting in a time-path of economic aggregates. In this way, the AL approach introduces additional dynamics into the model—dynamics that can be used to address myriad macroeconomic issues and concerns, including, for example, empirical fit and the plausibility of specific rational expectations equilibria. AL can be implemented as reduced-form learning, that is, learning at the aggregate level, or alternatively as agent-level learning, which includes pre-aggregation analysis of boundedly rational decision making and is discussed in a companion contribution to this encyclopedia by Evans and McGough. Typically, learning agents are assumed to use estimated linear forecast models, and a central formulation of AL is least-squares learning, in which agents recursively update their estimated model as new data become available. A key question is whether AL will converge over time to a specified RE equilibrium (REE), in which case we say the REE is stable under AL; it is then also of interest to examine what type of learning dynamics are observed en route. When multiple REE exist, stability under AL can act as a selection criterion, and global dynamics can involve switching between local basins of attraction. In models with indeterminacy, AL can be used to assess whether agents can learn to coordinate their expectations on sunspots. The key analytical concepts and tools are the E-stability principle, together with the E-stability differential equations, and the theory of stochastic recursive algorithms (SRAs). While analysis of SRAs is in general quite technical, application of the E-stability principle is often straightforward. In addition to equilibrium analysis in macroeconomic models, AL has many applications. In particular, AL has strong implications for the conduct of monetary and fiscal policy, has been used to explain asset price dynamics, has been shown to improve the fit of estimated dynamic stochastic general equilibrium (DSGE) models, and has proven useful in explaining experimental outcomes.
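As a minimal sketch of the least-squares formulation (the scalar setup and symbols are illustrative, not drawn from the article): suppose agents’ perceived law of motion (PLM) is y_t = a + b y_{t-1} + ε_t, with coefficient vector φ = (a, b)′ and regressors z_{t-1} = (1, y_{t-1})′. Recursive least squares then updates the estimates as new data arrive, and E-stability governs convergence to an REE.

```latex
% Recursive least-squares (RLS) updating with decreasing gain 1/t:
\phi_t = \phi_{t-1} + t^{-1} R_t^{-1} z_{t-1}\bigl(y_t - \phi_{t-1}' z_{t-1}\bigr),
\qquad
R_t = R_{t-1} + t^{-1}\bigl(z_{t-1} z_{t-1}' - R_{t-1}\bigr).
% E-stability: let T(\phi) map the PLM into the implied actual law of motion.
% An REE \bar\phi = T(\bar\phi) is E-stable (and, under standard conditions,
% locally stable under RLS learning) if it is a locally asymptotically stable
% rest point of the differential equation
\frac{d\phi}{d\tau} = T(\phi) - \phi .
```

The connection between the discrete-time algorithm and the continuous-time E-stability equation is exactly what the theory of stochastic recursive algorithms mentioned above delivers, which is why checking E-stability is usually the practical route to a stability verdict.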