Hendrik Schmitz and Svenja Winkler
The terms information and risk aversion play central roles in healthcare economics. While risk aversion is among the main reasons for the existence of health insurance, information asymmetries between the insured individual and the insurance company potentially lead to moral hazard or adverse selection. This has implications for the optimal design of health insurance contracts, but whether there is indeed moral hazard or adverse selection is ultimately an empirical question. Recently, there has even been a debate over whether the opposite of adverse selection—advantageous selection—prevails. Private information on risk aversion might outweigh information asymmetries regarding risk type and lead to more insurance coverage among healthy individuals (instead of less insurance coverage, as under adverse selection).
Information and risk preferences are important not only in health insurance but more generally in health economics. For instance, they affect health behavior and, consequently, health outcomes. The degree of risk aversion, the ability to perceive risks, and the availability of information about risks partly explain why some individuals engage in unhealthy behavior while others refrain from smoking, drinking, or the like.
Information has several dimensions. Apart from information on one’s personal health status, risk preferences, or health risks, consumer information on provider quality or health insurance supply is central in the economics of healthcare. Even though healthcare systems are necessarily highly regulated throughout the world, all systems at least allow for some market elements. These typically include the possibility of consumer choice, for instance, regarding health insurance coverage or choice of medical provider. An important question is whether consumer choice elements work in the healthcare sector—that is, whether consumers actually make rational or optimal decisions—and whether more information can improve decision quality.
Stephen F. Diamond
Insider trading is not widely understood. Insiders of corporations can, in fact, buy and sell shares of those corporations. But, over time, Congress, the courts, and the Securities and Exchange Commission (SEC) have imposed significant limits on such trading. The limits are not always clearly marked, and the principles underlying them are not always consistent. The core principle is that it is illegal to trade while in possession of material, nonpublic information. But the rationality of this principle has been challenged by successive generations of law and economics scholars, most notably Manne, Easterbrook, Epstein, and Bainbridge. Their “economic” analysis of this contested area of the law provides, arguably, at least a more consistent basis upon which to decide when trades by insiders should, in fact, be disallowed. A return to genuine “first principles” generated by the nature of capitalism, however, allows for more powerful insights into the phenomenon and could lead to more effective regulation.
In contrast with the existing cross-country literature on institutions and development, the overview in this article focuses instead on case studies of institutions at the disaggregated level that help or hinder productivity growth. It also shows how, along with rule-based systems, institutional systems based on social relations, networks, and community organizations can resolve some issues of collective action in development. At the level of the state, our discussion focuses on incentive issues in the internal organization of government and how the nature of accountability structures at different levels of government can help or hinder development. In view of the breadth of the relevant literature, we have deliberately confined ourselves to the available empirical case studies in only the two largest developing countries, China and India.
Eduardo A. Cavallo
Sudden stops in capital flows are a form of financial whiplash that creates instability and crises in the affected economies. Sudden stops in net capital flows trigger current account reversals as countries that were borrowing on net from the rest of the world before the stop can no longer finance current account deficits. Sudden stops in gross capital flows are associated with financial instability, especially when the gross flows are dominated by volatile cross-border banking flows. Sudden stops in gross and net capital flows are episodes with an external trigger. This implies that the spark that ignites sudden stops originates outside the affected country: more specifically, in the supply of foreign financing that can halt for reasons that may be unrelated to the affected country’s domestic conditions. Yet a spark cannot generate a fire unless combustible materials are around. The literature has established that a set of domestic macroeconomic fundamentals is the combustible material that makes some countries more vulnerable than others. Higher fiscal deficits, larger current account deficits, and higher levels of foreign currency debts in the domestic financial system are manifestations of weak fundamentals that increase vulnerability. Those same factors increase the costs in terms of output losses when the crisis materializes. On the flip side, international reserves provide buffers that can help countries offset the risks. Holding foreign currency reserves hedges the fiscal position of the government, providing it with more resources to respond to the crisis. While it may be impossible for countries to completely insulate themselves from the volatility of capital inflows, the choice of antidotes to prevent that volatility from forcing potentially costly external adjustments is in their own hands.
The global financial architecture can be improved to support those efforts if countries could agree on and fund a more powerful international lender of last resort that resembles, at the global scale, the role of the Federal Reserve Bank in promoting financial stability in the United States.
Ehsan U. Choudhri
Exchange rates often display sudden and large changes. There is considerable interest in examining how these changes affect prices, especially import and consumer prices. Exchange rate pass-through measures the responsiveness of the price of a basket of goods to changes in the exchange rate and is defined as the elasticity of the price of the basket (expressed in home currency) with respect to the exchange rate (defined as the price of foreign currency). The pass-through estimates vary across product groups, countries, and time periods, but a general finding is that pass-through tends to be significantly less than one, which implies that prices do not fully respond to a foreign currency appreciation. Pass-through to export prices tends to be smaller than pass-through to import prices. Pass-through to consumer prices is lower than both import and export price pass-through and is generally very small. One explanation of pass-through evidence focuses on the role of nominal rigidities (infrequent changes in prices set in home or foreign currency). Another explanation emphasizes the importance of markup variation in response to exchange rate changes.
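The pass-through elasticity described above is commonly estimated as the slope coefficient in a log-log regression of an import (or consumer) price index on the exchange rate. The following is a minimal illustrative sketch on simulated data, not an estimate from the article; the value 0.5 is an assumed "incomplete pass-through" coefficient chosen for the simulation:

```python
import numpy as np

# Hypothetical simulation: prices respond to the exchange rate with an
# assumed elasticity of 0.5 (i.e., incomplete pass-through, < 1).
rng = np.random.default_rng(0)
n = 500
true_pass_through = 0.5

log_e = rng.normal(0.0, 0.1, n)   # log exchange rate (home price of foreign currency)
log_p = 2.0 + true_pass_through * log_e + rng.normal(0.0, 0.01, n)  # log import price

# OLS slope in the log-log regression = estimated pass-through elasticity
beta = np.cov(log_e, log_p)[0, 1] / np.var(log_e, ddof=1)
print(f"estimated pass-through elasticity: {beta:.2f}")
```

In practice, pass-through regressions control for costs and demand conditions and often include lags of the exchange rate, so this two-variable version only conveys the definition of the elasticity being estimated.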
In models with nominal rigidities, one important issue is whether exporting firms set prices in their own country’s currency (producer’s currency) or the importing country’s currency (consumer’s currency). If prices are sticky in producer’s currency, flexible exchange rates are preferable as they allow for desirable relative price adjustment. On the other hand, if prices are sticky in consumer’s currency, exchange rate flexibility is not as helpful in adjusting prices and fixed exchange rates are superior. The standard model, in which the markup is constant and all firms (at home and abroad) use either producer or consumer currency pricing, is not consistent with typical estimates of pass-through to import and export prices. To explain this evidence, the standard model needs to be modified to allow for variable markup and/or a hybrid model of currency choice where some firms set prices in producer’s and others in consumer’s currency. In the case of the hybrid model, the welfare difference between fixed and flexible exchange rates is not as stark as in the pure cases of currency choice and is likely to be small. Another issue of much interest is whether the inflationary environment can affect pass-through, especially to consumer prices. The inflationary environment can influence pass-through to import and consumer prices through several channels, such as persistence of costs and frequency of price change. Empirical evidence shows that pass-through to consumer prices is related to the level and variability of inflation across countries and time periods and is lower in an environment with low and stable inflation. This evidence suggests that a monetary policy regime that targets low inflation will produce a low pass-through environment, which would dampen the price effects of exchange rate changes.
Long-distance international trade for hundreds of years stemmed primarily from differences in climate. Generally free-trade policy and reduced transport costs superimposed another pattern by 1914: one of greater international specialization based upon land and labor abundance or scarcity. The broadly open trading world of 1914 broke down first under the impact of war and then of the Great Depression. By 1945 the United States had emerged as the most powerful nation, committed to establishing a world order that would not repeat the mistakes of the preceding decades. The promotion of more liberalized trade among the wealthier nations over the following decades hugely expanded the volume of trade. Trade in manufactures—based on skill endowments and preference diversity—came to dominate that in primary products. Services strongly increased in importance, especially with the rise of e-commerce. Oil displaced coal as the world’s principal fuel, redistributing income to those countries with substantial oil deposits. The greatest threat to the continuing expansion of world incomes and trade came from the Great Recession of 2008–2009, but the World Trade Organization regime discouraged the mutually destructive trade wars of the earlier period. However, the WTO was less successful 10 years later in restraining the damaging United States–China trade conflict.
The links of international reserves, exchange rates, and monetary policy can be understood through the lens of a modern incarnation of the “impossible trinity” (aka the “trilemma”), based on Mundell and Fleming’s hypothesis that a country may simultaneously choose any two, but not all, of the following three policy goals: monetary independence, exchange rate stability, and financial integration. The original economic trilemma was framed in the 1960s, during the Bretton Woods regime, as a binary choice of two out of the possible three policy goals. However, in the 1990s and 2000s, emerging markets and developing countries found that deeper financial integration comes with growing exposure to financial instability and the increased risk of “sudden stop” of capital inflows and capital flight crises. These crises have been characterized by exchange rate instability triggered by countries’ balance sheet exposure to external hard currency debt—exposures that have propagated banking instabilities and crises. Such events have frequently morphed into deep internal and external debt crises, ending with bailouts of systemic banks and powerful macro players. The resultant domestic debt overhang led to fiscal dominance and a reduction of the scope of monetary policy. With varying lags, these crises induced economic and political changes, in which a growing share of emerging markets and developing countries converged to “in-between” regimes in the trilemma middle range—that is, managed exchange rate flexibility, controlled financial integration, and limited but viable monetary autonomy. Emerging research has validated a modern version of the trilemma: that is, countries face a continuous trilemma trade-off in which a higher trilemma policy goal is “traded off” with a drop in the weighted average of the other two trilemma policy goals. 
The concerns associated with exposure to financial instability have been addressed by varying configurations of managing public buffers (international reserves, sovereign wealth funds), as well as growing application of macro-prudential measures aimed at inducing systemic players to internalize the impact of their balance sheet exposure on a country’s financial stability. Consequently, the original trilemma has morphed into a quadrilemma, wherein financial stability has been added to the trilemma’s original policy goals. Size does matter, and there is no way for smaller countries to insulate themselves fully from exposure to global cycles and shocks. Yet successful navigation of the open-economy quadrilemma helps in reducing the transmission of external shock to the domestic economy, as well as the costs of domestic shocks. These observations explain the relative resilience of emerging markets—especially in countries with more mature institutions—as they have been buffered by deeper precautionary management of reserves, and greater fiscal and monetary space.
We close the discussion noting that the global financial crisis, and the subsequent Eurozone crisis, have shown that no country is immune from exposure to financial instability and from the modern quadrilemma. However, countries with mature institutions, deeper fiscal capabilities, and more fiscal space may substitute the reliance on costly precautionary buffers with bilateral swap lines coordinated among their central banks. While the benefits of such arrangements are clear, they may hinge on the presence and credibility of their fiscal backstop mechanisms, and on curbing the resultant moral hazard. Time will test this credibility, and the degree to which risk-pooling arrangements can be extended to cover the growing share of emerging markets and developing countries.
Daniel Eisenberg and Ramesh Raghavan
One of the most important unanswered questions for any society is how best to invest in children’s mental health. Childhood is a sensitive and opportune period in which to invest in programs and services that can mitigate a range of downstream risks for health and mental health conditions. Investing in such programs and services will require a shift from focusing solely on reducing deficits to also enhancing the child’s skills and other assets. Economic evaluation is crucial for determining which programs and services represent optimal investments. Several registries curate lists of programs with high evidence of effectiveness; many of these programs also have evidence of positive benefit-cost differentials, although the economic evidence is typically limited and uncertain. Even the programs with the strongest evidence are currently reaching only a small fraction of young people who would potentially benefit. Thus, it is important to understand and address factors that impede or facilitate the implementation of best practices. One example of a program that represents a promising investment is home visiting, in which health workers visit the homes of new parents to advise on parenting skills, child needs, and the home environment. Another example is social emotional learning programs delivered in schools, where children are taught to regulate emotions, manage behaviors, and enhance relationships with peers. Investing in these and other programs with a strong evidence base, and assuring their faithful implementation in practice settings, can produce improvements on a range of mental health, academic, and social outcomes for children, extending into their lives as adults.
Joni Hersch and Blair Druhan Bullock
The labor market is governed by a panoply of laws, regulating virtually all aspects of the employment relation, including hiring, firing, information exchange, privacy, workplace safety, work hours, minimum wages, and access to courts for redress of violations of rights. Antidiscrimination laws, especially Title VII, notably prohibit employment discrimination on the basis of race, color, religion, sex, and national origin. Court decisions and legislation have led to the extension of protection to a far wider range of classes and types of workplace behavior than Title VII originally covered.
The workplace of the early 21st century is very different from the workplace when the major employment discrimination statutes were enacted, as these laws were conceived as regulating an employer–employee relationship in a predominantly white male labor market. Prior emphasis on employment discrimination on the basis of race and sex has been superseded by enhanced attention to sexual harassment and discrimination on the basis of disability, sexual orientation, gender identity, and religion. Concerns over the equity or efficiency of the employment-at-will doctrine recede in a workforce in which workers are increasingly categorized as independent contractors who are not covered by most equal employment laws. As the workplace has changed, the scholarship on the law and economics of employment law has been slow to follow.
While economists overwhelmingly favor free trade, even unilateral free trade, because of the gains realizable from specialization and the exploitation of comparative advantage, in fact international trading relations are structured by a complex body of multilateral and preferential trade agreements. The article outlines the case for multilateral trade agreements and the non-discrimination principle that they embody, in the form of both the Most Favored Nation principle and the National Treatment principle, where non-discrimination has been widely advocated as supporting both geopolitical goals (reducing economic factionalism) and economic goals (ensuring the full play of theories of comparative advantage undistorted by discriminatory trade treatment).
Despite the virtues of multilateral trade agreements, preferential trade agreements (PTAs), authorized from the outset under GATT, have proliferated in recent years, even though they are inherently discriminatory between members and non-members, provoking vigorous debates as to whether (a) PTAs are trade-creating or trade-diverting; (b) whether they increase transaction costs in international trade; and (c) whether they undermine the future course of multilateral trade liberalization.
A further and similarly contentious derogation from the principle of non-discrimination under the multilateral system is Special and Differential Treatment for developing countries, where since the mid-1950s developing countries have been given much greater latitude than developed countries to engage in trade protectionism on the import side in order to promote infant industries, and since the mid-1960s on the export side have benefited from non-reciprocal trade concessions by developed countries on products of actual or potential export interest to developing countries.
Beyond debates over the strengths and weaknesses of multilateral trade agreements and the two major derogations therefrom, further debates surround the appropriate scope of trade agreements, and in particular the expansion of their scope in recent decades to address divergences or incompatibilities across a wide range of domestic regulatory and related policies that arguably create frictions in cross-border trade and investment and hence constitute an impediment to it.
The article goes on to consider contemporary fair trade versus free trade debates, including concerns over trade deficits, currency manipulation, export subsidies, misappropriation of intellectual property rights, and lax labor or environmental standards. The article concludes with a consideration of the case for a larger scope for plurilateral trade agreements internationally, and for a larger scope for active labor market policies domestically to mitigate transition costs from trade.
Law and economics is an important, growing field of specialization for both legal scholars and economists. It applies efficiency analysis to property, contracts, torts, procedure, and many other areas of the law. The use of economics as a methodology for understanding law is not immune to criticism. The rationality assumption and the efficiency principle have been intensively debated. Overall, the field has advanced in recent years by incorporating insights from psychology and other social sciences. In that respect, many questions concerning the efficiency of legal rules and norms are still open and turn on a multifaceted balance among diverse costs and benefits. The role of courts in explaining economic performance is a more specific area of analysis that emerged in the late 1990s. The relationship between law and economic growth is complex and debatable. An important literature has pointed to significant differences at the macro-level between the Anglo-American common law family and the civil law families. Although these initial results have been heavily scrutinized, other important subjects have surfaced, such as convergence of legal systems, transplants, infrastructure of legal systems, and rule of law and development.
C. Knick Harley
The highly integrated world economy at the outbreak of World War I emerged from discoveries and technological change in previous centuries. Territories unknown to the economy of Eurasia offered profitable opportunities if capital and labor could be mobilized to cheaply produce products that could bear the high cost of transportation that prevailed before industrialization. In the 16th century, American monetary metals mined using European technology and local labor, and sold worldwide, had major repercussions, including increasing trade between Europe and Asia. From the mid-17th century, sugar and tobacco in the Americas, developed on the backs of imported African slaves, produced an Atlantic economy that included the mainland colonies of British America. In the 19th century, technological innovation became the main driving force. First, it cheapened textile production in Britain, creating a massive demand for raw cotton. Then technology radically reduced the cost of transportation on both land and sea. Lower transportation costs spurred greater international specialization and, equally importantly, brought frontiers in continental interiors into the world economy. During the later 19th century, commercial and financial institutions arose that supported increased global economic integration.
Traditional historiography has overestimated the significance of long-distance trade in the medieval economy. However, it could be argued that, because of its dynamic nature, long-distance trade played a more important role in economic development than its relative size would suggest. The term commercial revolution was introduced in the 1950s to refer to the rapid growth of European trade from about the 10th century. Long-distance trade then expanded, with the commercial integration of the two economic poles in the Mediterranean and in Flanders and the contiguous areas. It has been quantitatively shown that the integration of European markets began in the late medieval period, with rapid advancement beginning in the 16th century.
The expansion of medieval trade has been attributed to advanced business techniques, such as the appearance of new forms of partnerships and novel financial and insurance systems. Many economic historians have also emphasized merchants’ relations, especially the establishment of networks to organize trade. More recently, major contributions to institutional economic history have focused on various economic institutions that reduced the uncertainties inherent in premodern economies.
The early reputation-based institutions identified in the literature, such as the systems of the Maghribis in the Mediterranean, Champagne fairs in France, and the Italian city-states, were not optimal for changing conditions that accompanied expansion of trade, as the number of merchants increased and the relations among them became more anonymous, as generally happened during the Middle Ages. An intercommunal conciliation mechanism evolved in medieval northern Europe that supported trade among a large number of distant communities. This institution encouraged merchants to travel to distant towns and establish relations, even with persons they did not already know.
Long memory models are statistical models that describe strong correlation or dependence across time series data. This kind of phenomenon is often referred to as “long memory” or “long-range dependence.” It refers to persisting correlation between distant observations in a time series. For scalar time series observed at equal intervals of time that are covariance stationary, so that the mean, variance, and autocovariances (between observations separated by a lag j) do not vary over time, it typically implies that the autocovariances decay so slowly, as j increases, as not to be absolutely summable. However, it can also refer to certain nonstationary time series, including ones with an autoregressive unit root, that exhibit even stronger correlation at long lags. Evidence of long memory has often been found in economic and financial time series, where the noted extension to possible nonstationarity can cover many macroeconomic time series, as well as in such fields as astronomy, agriculture, geophysics, and chemistry.
As long memory is now a technically well developed topic, formal definitions are needed. But by way of partial motivation, long memory models can be thought of as complementary to the very well known and widely applied stationary and invertible autoregressive and moving average (ARMA) models, whose autocovariances are not only summable but decay exponentially fast as a function of lag j. Such models are often referred to as “short memory” models, because there is negligible correlation across distant time intervals. These models are often combined with the most basic long memory ones, however, because together they offer the ability to describe both short and long memory features in many time series.
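The summability distinction drawn above can be made concrete with a small numerical sketch (illustrative parameter values, not from the article): the autocovariances of a short memory AR(1) process decay exponentially, phi^j, while a canonical long memory process has autocovariances decaying hyperbolically, roughly j^(2d-1) for a memory parameter 0 < d < 1/2, which are not absolutely summable:

```python
import numpy as np

# Assumed parameters for illustration: AR(1) coefficient phi = 0.6,
# long-memory parameter d = 0.3 (so autocovariances decay like j^(-0.4)).
phi, d = 0.6, 0.3
lags = np.arange(1, 10_001)

acov_ar1 = phi ** lags           # exponential decay: absolutely summable
acov_long = lags ** (2 * d - 1)  # hyperbolic decay: partial sums diverge

partial_ar1 = np.cumsum(np.abs(acov_ar1))
partial_long = np.cumsum(np.abs(acov_long))

# The AR(1) partial sums converge (to phi / (1 - phi) = 1.5); the
# long-memory partial sums keep growing without bound as the lag increases.
print(f"AR(1) partial sums at lag 100 vs. 10000: "
      f"{partial_ar1[99]:.3f} vs. {partial_ar1[-1]:.3f}")
print(f"long-memory partial sums at lag 100 vs. 10000: "
      f"{partial_long[99]:.1f} vs. {partial_long[-1]:.1f}")
```

The contrast in the printed partial sums is the practical content of the "not absolutely summable" condition: the short memory sum has effectively converged by lag 100, while the long memory sum continues to grow by an order of magnitude over the next 9,900 lags.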
Dimitris Korobilis and Davide Pettenuzzo
Bayesian inference in economics is primarily perceived as a methodology for cases where the data are short, that is, not informative enough to obtain reliable econometric estimates of quantities of interest. In these cases, prior beliefs, such as the experience of the decision-maker or results from economic theory, can be explicitly incorporated into the econometric estimation problem and enhance the desired solution.
In contrast, in fields such as computing science and signal processing, Bayesian inference and computation have long been used for tackling challenges associated with ultra high-dimensional data. Such fields have developed several novel Bayesian algorithms that have gradually been established in mainstream statistics, and they now have a prominent position in machine learning applications in numerous disciplines.
While traditional Bayesian algorithms are powerful enough to allow for estimation of very complex problems (for instance, nonlinear dynamic stochastic general equilibrium models), they are not able to cope computationally with the demands of rapidly increasing economic data sets. Bayesian machine learning algorithms are able to provide rigorous and computationally feasible solutions to various high-dimensional econometric problems, thus supporting modern decision-making in a timely manner.
Noémi Kreif and Karla DiazOrdaz
While machine learning (ML) methods have received a lot of attention in recent years, these methods are primarily for prediction. Empirical researchers conducting policy evaluations are, on the other hand, preoccupied with causal problems, trying to answer counterfactual questions: what would have happened in the absence of a policy? Because these counterfactuals can never be directly observed (described as the “fundamental problem of causal inference”), prediction tools from the ML literature cannot be readily used for causal inference. In the last decade, major innovations have taken place incorporating supervised ML tools into estimators for causal parameters such as the average treatment effect (ATE). This holds the promise of attenuating model misspecification issues and increasing transparency in model selection. One particularly mature strand of the literature includes approaches that incorporate supervised ML approaches in the estimation of the ATE of a binary treatment, under the unconfoundedness and positivity assumptions (also known as exchangeability and overlap assumptions).
This article begins by reviewing popular supervised machine learning algorithms, including tree-based methods and the lasso, as well as ensembles, with a focus on the Super Learner. Then, some specific uses of machine learning for treatment effect estimation are introduced and illustrated, namely (1) to create balance among treated and control groups, (2) to estimate so-called nuisance models (e.g., the propensity score, or conditional expectations of the outcome) in semi-parametric estimators that target causal parameters (e.g., targeted maximum likelihood estimation or the double ML estimator), and (3) the use of machine learning for variable selection in situations with a high number of covariates.
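The role of the nuisance models in use (2) can be sketched with an augmented inverse-probability-weighted (AIPW, or "doubly robust") estimator of the ATE. The sketch below uses a hypothetical simulated data set and fits the two nuisance models (the propensity score and the outcome regressions) with simple parametric methods rather than the ML learners discussed in the article; in the semi-parametric estimators reviewed there, those fits would be replaced by data-adaptive ones such as the Super Learner:

```python
import numpy as np

# Hypothetical simulation under unconfoundedness: one observed confounder x,
# binary treatment a, outcome y with a true ATE of 2.0.
rng = np.random.default_rng(1)
n = 20_000
x = rng.normal(size=n)
p_true = 1 / (1 + np.exp(-0.5 * x))                # true propensity score
a = rng.binomial(1, p_true)                        # binary treatment
y = 1.0 + 2.0 * a + 1.5 * x + rng.normal(size=n)   # true ATE = 2.0

X = np.column_stack([np.ones(n), x])

# Nuisance model 1: propensity score via logistic regression (Newton steps).
beta = np.zeros(2)
for _ in range(20):
    p = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (a - p)                           # score of the log-likelihood
    hess = -(X * (p * (1 - p))[:, None]).T @ X     # Hessian (negative definite)
    beta -= np.linalg.solve(hess, grad)
p_hat = 1 / (1 + np.exp(-X @ beta))

# Nuisance model 2: outcome regressions E[Y | A = a, X] via OLS within arms.
def ols_predict(mask):
    b, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
    return X @ b

mu1, mu0 = ols_predict(a == 1), ols_predict(a == 0)

# AIPW estimator: outcome-model contrast plus an IPW correction of residuals.
ate = np.mean(mu1 - mu0
              + a * (y - mu1) / p_hat
              - (1 - a) * (y - mu0) / (1 - p_hat))
print(f"estimated ATE: {ate:.2f}")
```

The estimator is "doubly robust" in that it remains consistent if either the propensity score model or the outcome model is correctly specified, which is precisely what makes it a natural host for flexible ML fits of the two nuisance functions.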
Since there is no universal best estimator, whether parametric or data-adaptive, it is best practice to incorporate a semi-automated approach that can select the models best supported by the observed data, thus attenuating the reliance on subjective choices.
Charles Ka Yui Leung and Cho Yiu Joe Ng
This article summarizes research on the macroeconomic aspects of the housing market. In terms of the macroeconomic stylized facts, this article demonstrates that with respect to business cycle frequency, there was a general decrease in the association between macroeconomic variables (MV), such as the real GDP and inflation rate, and housing market variables (HMV), such as the housing price and the vacancy rate, following the global financial crisis (GFC). However, there are macro-finance variables, such as different interest rate spreads, that exhibited a strong association with the HMV following the GFC. For the medium-term business cycle frequency, some but not all patterns prevail. These “new stylized facts” suggest that a reconsideration and refinement of existing “macro-housing” theories would be appropriate. This article also provides a review of the corresponding academic literature, which may enhance our understanding of the evolving macro-housing–finance linkage.
While it is a long-standing idea in international macroeconomic theory that flexible nominal exchange rates have the potential to facilitate adjustment in international relative prices, a monetary union necessarily forgoes this mechanism for facilitating macroeconomic adjustment among its regions. Twenty years of experience in the eurozone monetary union, including the eurozone crisis, have spurred new macroeconomic research on the costs of giving up nominal exchange rates as a tool of adjustment, and the possibility of alternative policies to promote macroeconomic adjustment. Empirical evidence paints a mixed picture regarding the usefulness of nominal exchange rate flexibility: In many historical settings, flexible nominal exchange rates have tended to create more relative price distortions than they have helped resolve; yet, in some contexts exchange rate devaluations can serve as a useful correction to severe relative price misalignments.
Theoretical advances in studying open economy models either support the usefulness of exchange rate movements or find them irrelevant, depending on the specific characteristics of the model economy, including the particular specification of nominal rigidities, international openness in goods markets, and international financial integration. Yet in models that embody certain key aspects of the countries suffering the brunt of the eurozone crisis, such as over-borrowing and persistently high wages, it is found that nominal devaluation can be useful in preventing the type of excessive rise in unemployment that was observed.
This theoretical research also raises alternative policies and mechanisms to substitute for nominal exchange rate adjustment. These policies include the standard fiscal tools of optimal currency area theory but also extend to a broader set of tools, including import tariffs, export subsidies, and prudential taxes on capital flows. Certain combinations of these policies, labeled a "fiscal devaluation," have been found in theory to replicate the effects of a currency devaluation in the context of a monetary union such as the eurozone. These theoretical developments are helpful for understanding the history of experiences in the eurozone, such as the eurozone crisis. They are also helpful for thinking about options for preventing such crises in the future.
Marriage and labor market outcomes are deeply related, particularly for women. A large literature finds that the labor supply decisions of married women respond to their husbands’ employment status, wages, and job characteristics. There is also evidence that the effects of spouse characteristics on labor market outcomes operate not just through standard neoclassical cross-wage and income effects but also through household bargaining and gender norm effects, in which the relative incomes of husband and wife affect the distribution of marital surplus, marital satisfaction, and marital stability.
Marriage market characteristics affect marital status and spouse characteristics, as well as the outside option, and therefore bargaining power, within marriage. Marriage market characteristics can therefore affect premarital investments, which ultimately shape labor market outcomes within marriage, and they can also affect labor supply decisions within marriage conditional on these premarital investments.
Simon van Norden
Most applied researchers in macroeconomics who work with official macroeconomic statistics (such as those found in the National Accounts, the Balance of Payments, national government budgets, labor force statistics, etc.) treat the data as immutable rather than as estimates subject to measurement error and revision. Some of this error may be caused by disagreement or confusion about what should be measured. Some may be due to the practical challenges of producing timely, accurate, and precise estimates. The economic importance of measurement error may be accentuated by simple arithmetic transformations of the data, or by more complex but still common transformations to remove seasonal or other fluctuations. As a result, measurement error is seemingly omnipresent in macroeconomics.
Even the most widely used measures, such as Gross Domestic Product (GDP), are acknowledged to be poor measures of aggregate welfare, as they omit leisure and non-market production activity and fail to consider intertemporal issues related to the sustainability of economic activity. Yet even modest attempts to improve GDP estimates can generate considerable controversy in practice. Common statistical approaches to allowing for measurement error, including most factor models, rely on assumptions that are at odds with standard economic assumptions, which imply that measurement errors in published aggregate series should behave much like forecast errors. Fortunately, recent research has shown how multiple data releases may be combined in a flexible way to give improved estimates of the underlying quantities.
Increasingly, the challenge for macroeconomists is to recognize the impact that measurement error may have on their analysis and to condition their policy advice on a realistic assessment of the quality of their available information.