Article

Nuno Garoupa

Law and economics is an important, growing field of specialization for both legal scholars and economists. It applies efficiency analysis to property, contracts, torts, procedure, and many other areas of the law. The use of economics as a methodology for understanding law is not immune to criticism. The rationality assumption and the efficiency principle have been intensively debated. Overall, the field has advanced in recent years by incorporating insights from psychology and other social sciences. In that respect, many questions concerning the efficiency of legal rules and norms remain open and depend on a multifaceted balance among diverse costs and benefits. The role of courts in explaining economic performance is a more specific area of analysis that emerged in the late 1990s. The relationship between law and economic growth is complex and debatable. An important literature has pointed to significant differences at the macro level between the Anglo-American common law family and the civil law families. Although these initial results have been heavily scrutinized, other important subjects have surfaced, such as the convergence of legal systems, legal transplants, the infrastructure of legal systems, and the relationship between the rule of law and development.

Article

Life-cycle choices and outcomes over financial (e.g., savings, portfolio, work) and health-related variables (e.g., medical spending, habits, sickness, and mortality) are complex and intertwined. Indeed, labor/leisure choices can both affect and be conditioned by health outcomes, and precautionary savings are determined by exposure to sickness and longevity risks, both of which can be altered through preventive medical and leisure decisions. Moreover, inevitable aging induces changes in the incentives and in the constraints for investing in one’s own health and saving resources for old age. Understanding these pathways poses numerous challenges for economic models. Life-cycle data indicate continuous declines in health status and associated increases in exposure to morbidity, medical expenses, and mortality risks, with accelerating post-retirement dynamics. Theory suggests that risk-averse and forward-looking agents should rely on available instruments to insure against these risks. Indeed, market- and state-provided health insurance (e.g., Medicare) covers curative medical expenses. High end-of-life home and nursing-home expenses can be hedged through privately or publicly provided (e.g., Medicaid) long-term care insurance. The risk of outliving one’s financial resources can be hedged through annuities. The risk of not living long enough can be insured through life insurance. In practice, however, the recourse to these hedging instruments remains less than predicted by theory. The slow wealth drawdown observed after retirement is not explained by bequest motives and suggests precautionary motives against health-related expenses. The excessive reliance on public pensions (e.g., Social Security) and the post-retirement drop in consumption not related to work or health are both indicative of insufficient financial preparedness and run counter to consumption-smoothing objectives. Moreover, the capacity to self-insure through preventive care and healthy habits is limited once aging is factored in. In conclusion, the observed health and financial life-cycle dynamics remain challenging for economic theory.
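
A stylized statement of the kind of life-cycle problem described above may help fix ideas. This is a minimal sketch in generic notation; the symbols and functional forms are illustrative assumptions, not taken from the article:

```latex
% Stylized life-cycle problem with health, medical spending, and mortality risk
\begin{aligned}
V_t(w_t, h_t) &= \max_{c_t,\, m_t}\; u(c_t, h_t) \;+\; \beta\, s(h_t)\,
                 \mathbb{E}_t\!\left[ V_{t+1}(w_{t+1}, h_{t+1}) \right], \\
w_{t+1} &= (1+r)\bigl(w_t + y_t(h_t) - c_t - p_m m_t\bigr), \\
h_{t+1} &= (1-\delta_t)\, h_t + g(m_t) + \varepsilon_{t+1},
\end{aligned}
```

where $c_t$ is consumption, $m_t$ medical and preventive spending, $h_t$ health, $s(h_t)$ the survival probability, and $\delta_t$ an age-increasing depreciation rate; insurance, annuities, and bequests would enter through the budget constraint and the continuation value.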

Article

The highly integrated world economy at the outbreak of World War I emerged from discoveries and technological change in previous centuries. Territories unknown to the economy of Eurasia offered profitable opportunities if capital and labor could be mobilized to cheaply produce products that could bear the high cost of transportation that prevailed before industrialization. In the 16th century, American monetary metals mined using European technology and local labor, and sold worldwide, had major repercussions, including increasing trade between Europe and Asia. From the mid-17th century, sugar and tobacco in the Americas, developed on the backs of imported African slaves, produced an Atlantic economy that included the mainland colonies of British America. In the 19th century, technological innovation became the main driving force. First, it cheapened textile production in Britain, creating a massive demand for raw cotton. Then technology radically reduced the cost of transportation on both land and sea. Lower transportation costs spurred greater international specialization and, equally importantly, brought frontiers in continental interiors into the world economy. During the later 19th century, commercial and financial institutions arose that supported increased global economic integration.

Article

Traditional historiography has overestimated the significance of long-distance trade in the medieval economy. However, it could be argued that, because of its dynamic nature, long-distance trade played a more important role in economic development than its relative size would suggest. The term commercial revolution was introduced in the 1950s to refer to the rapid growth of European trade from about the 10th century. Long-distance trade then expanded, with the commercial integration of the two economic poles in the Mediterranean and in Flanders and the contiguous areas. It has been quantitatively shown that the integration of European markets began in the late medieval period, with rapid advancement beginning in the 16th century. The expansion of medieval trade has been attributed to advanced business techniques, such as the appearance of new forms of partnerships and novel financial and insurance systems. Many economic historians have also emphasized merchants’ relations, especially the establishment of networks to organize trade. More recently, major contributions to institutional economic history have focused on various economic institutions that reduced the uncertainties inherent in premodern economies. The early reputation-based institutions identified in the literature, such as the systems of the Maghribis in the Mediterranean, the Champagne fairs in France, and the Italian city-states, were not optimal for the changing conditions that accompanied the expansion of trade, as the number of merchants increased and relations among them became more anonymous over the course of the Middle Ages. An intercommunal conciliation mechanism evolved in medieval northern Europe that supported trade among a large number of distant communities. This institution encouraged merchants to travel to distant towns and establish relations, even with persons they did not already know.

Article

Peter Robinson

Long memory models are statistical models that describe strong correlation or dependence across time series data. This kind of phenomenon is often referred to as “long memory” or “long-range dependence.” It refers to persisting correlation between distant observations in a time series. For scalar time series observed at equal intervals of time that are covariance stationary, so that the mean, variance, and autocovariances (between observations separated by a lag j) do not vary over time, it typically implies that the autocovariances decay so slowly, as j increases, as not to be absolutely summable. However, it can also refer to certain nonstationary time series, including ones with an autoregressive unit root, that exhibit even stronger correlation at long lags. Evidence of long memory has often been found in economic and financial time series, where the noted extension to possible nonstationarity can cover many macroeconomic time series, as well as in such fields as astronomy, agriculture, geophysics, and chemistry. As long memory is now a technically well developed topic, formal definitions are needed. But by way of partial motivation, long memory models can be thought of as complementary to the very well known and widely applied stationary and invertible autoregressive and moving average (ARMA) models, whose autocovariances are not only summable but decay exponentially fast as a function of lag j. Such models are often referred to as “short memory” models, because there is negligible correlation across distant time intervals. These models are often combined with the most basic long memory ones, however, because together they offer the ability to describe both short and long memory features in many time series.
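
As a concrete illustration of the contrast drawn above between short and long memory, the following sketch (purely illustrative, not code from the article) simulates an AR(1) process and a fractionally integrated ARFIMA(0, d, 0) process and compares how quickly their sample autocorrelations decay:

```python
# Illustrative sketch: short-memory AR(1) versus long-memory ARFIMA(0, d, 0) with d = 0.4,
# the latter simulated via a truncated MA(infinity) representation.
import numpy as np

rng = np.random.default_rng(0)
n, d, phi = 20_000, 0.4, 0.5

# Short memory: AR(1) with coefficient phi -> autocorrelations decay like phi**j.
eps = rng.standard_normal(n)
ar1 = np.zeros(n)
for t in range(1, n):
    ar1[t] = phi * ar1[t - 1] + eps[t]

# Long memory: x_t = sum_k psi_k * eps_{t-k}, with psi_k = psi_{k-1} * (k - 1 + d) / k.
k = np.arange(1, 2_000)
psi = np.concatenate(([1.0], np.cumprod((k - 1 + d) / k)))
frac = np.convolve(rng.standard_normal(n + len(psi)), psi, mode="valid")[:n]

def acf(x, lags):
    x = x - x.mean()
    return np.array([np.dot(x[:-j], x[j:]) / np.dot(x, x) for j in lags])

lags = [1, 5, 10, 50, 100]
print("AR(1) acf:     ", np.round(acf(ar1, lags), 3))   # essentially zero well before lag 50
print("ARFIMA(0,d,0): ", np.round(acf(frac, lags), 3))  # decays slowly, roughly like j**(2d - 1)
```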

Article

Dimitris Korobilis and Davide Pettenuzzo

Bayesian inference in economics is primarily perceived as a methodology for cases where the data are short, that is, not informative enough to obtain reliable econometric estimates of quantities of interest. In these cases, prior beliefs, such as the experience of the decision-maker or results from economic theory, can be explicitly incorporated into the econometric estimation problem and enhance the desired solution. In contrast, in fields such as computing science and signal processing, Bayesian inference and computation have long been used for tackling challenges associated with ultra-high-dimensional data. Such fields have developed several novel Bayesian algorithms that have gradually been established in mainstream statistics, and they now have a prominent position in machine learning applications in numerous disciplines. While traditional Bayesian algorithms are powerful enough to allow for estimation of very complex problems (for instance, nonlinear dynamic stochastic general equilibrium models), they are not able to cope computationally with the demands of rapidly growing economic data sets. Bayesian machine learning algorithms are able to provide rigorous and computationally feasible solutions to various high-dimensional econometric problems, thus supporting modern decision-making in a timely manner.
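
A minimal sketch of the kind of Bayesian shrinkage estimation alluded to above, applied to a regression with more predictors than observations; the simulated data and the use of scikit-learn's BayesianRidge are illustrative assumptions, not the article's methods:

```python
# Illustrative sketch: Bayesian shrinkage in a high-dimensional regression using
# BayesianRidge, which places hierarchical Gaussian/Gamma priors on the coefficients.
import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(1)
n_obs, n_pred = 100, 200                  # "short" data: more predictors than observations
X = rng.standard_normal((n_obs, n_pred))
beta = np.zeros(n_pred)
beta[:5] = [2.0, -1.5, 1.0, 0.5, -0.5]    # only a few predictors actually matter
y = X @ beta + rng.standard_normal(n_obs)

model = BayesianRidge().fit(X, y)
mean_pred, std_pred = model.predict(X[:5], return_std=True)  # posterior predictive mean and sd
print("shrunken coefficients:", np.round(model.coef_[:8], 2))
print("predictive sd (first 5 obs):", np.round(std_pred, 2))
```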

Article

While machine learning (ML) methods have received a lot of attention in recent years, these methods are primarily designed for prediction. Empirical researchers conducting policy evaluations are, on the other hand, preoccupied with causal problems, trying to answer counterfactual questions: What would have happened in the absence of a policy? Because these counterfactuals can never be directly observed (described as the “fundamental problem of causal inference”), prediction tools from the ML literature cannot be readily used for causal inference. In the last decade, major innovations have taken place incorporating supervised ML tools into estimators for causal parameters such as the average treatment effect (ATE). This holds the promise of attenuating model misspecification issues and of increasing transparency in model selection. One particularly mature strand of the literature includes approaches that incorporate supervised ML approaches in the estimation of the ATE of a binary treatment, under the unconfoundedness and positivity assumptions (also known as the exchangeability and overlap assumptions). This article begins by reviewing popular supervised machine learning algorithms, including tree-based methods and the lasso, as well as ensembles, with a focus on the Super Learner. Then, some specific uses of machine learning for treatment effect estimation are introduced and illustrated, namely (1) to create balance among treated and control groups, (2) to estimate so-called nuisance models (e.g., the propensity score, or conditional expectations of the outcome) in semi-parametric estimators that target causal parameters (e.g., targeted maximum likelihood estimation or the double ML estimator), and (3) to select variables in situations with a high number of covariates. Since there is no universal best estimator, whether parametric or data-adaptive, it is best practice to incorporate a semi-automated approach that can select the models best supported by the observed data, thus attenuating the reliance on subjective choices.
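
The following sketch illustrates one of the estimators mentioned in point (2): a cross-fitted doubly robust (AIPW, double-ML-style) estimator of the ATE under unconfoundedness. The simulated data and the random-forest nuisance models are hypothetical choices for illustration only:

```python
# Illustrative cross-fitted AIPW estimator of the ATE of a binary treatment.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n = 2_000
X = rng.standard_normal((n, 5))
propensity = 1 / (1 + np.exp(-X[:, 0]))                          # true propensity score
A = rng.binomial(1, propensity)                                   # binary treatment
Y = 1.0 * A + X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(n)    # true ATE = 1

scores = np.zeros(n)
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    # Nuisance models fitted on the training fold only (cross-fitting).
    ps_model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[train], A[train])
    out1 = RandomForestRegressor(n_estimators=200, random_state=0).fit(
        X[train][A[train] == 1], Y[train][A[train] == 1])
    out0 = RandomForestRegressor(n_estimators=200, random_state=0).fit(
        X[train][A[train] == 0], Y[train][A[train] == 0])

    e = np.clip(ps_model.predict_proba(X[test])[:, 1], 0.01, 0.99)  # trim to enforce overlap
    mu1, mu0 = out1.predict(X[test]), out0.predict(X[test])
    a, y = A[test], Y[test]
    # Doubly robust (AIPW) score for each unit in the held-out fold.
    scores[test] = (mu1 - mu0
                    + a * (y - mu1) / e
                    - (1 - a) * (y - mu0) / (1 - e))

ate = scores.mean()
se = scores.std(ddof=1) / np.sqrt(n)
print(f"ATE estimate: {ate:.2f} (SE {se:.2f})")   # should be close to the true value of 1
```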

Article

Charles Ka Yui Leung and Cho Yiu Joe Ng

This article summarizes research on the macroeconomic aspects of the housing market. In terms of the macroeconomic stylized facts, this article demonstrates that, at the business cycle frequency, there was a general decrease in the association between macroeconomic variables (MV), such as the real GDP and inflation rate, and housing market variables (HMV), such as the housing price and the vacancy rate, following the global financial crisis (GFC). However, there are macro-finance variables, such as different interest rate spreads, that exhibited a strong association with the HMV following the GFC. At the medium-term business cycle frequency, some but not all of these patterns prevail. These “new stylized facts” suggest that a reconsideration and refinement of existing “macro-housing” theories would be appropriate. This article also provides a review of the corresponding academic literature, which may enhance our understanding of the evolving macro-housing–finance linkage.

Article

While it is a long-standing idea in international macroeconomic theory that flexible nominal exchange rates have the potential to facilitate adjustment in international relative prices, a monetary union necessarily forgoes this mechanism for facilitating macroeconomic adjustment among its regions. Twenty years of experience in the eurozone monetary union, including the eurozone crisis, have spurred new macroeconomic research on the costs of giving up nominal exchange rates as a tool of adjustment, and the possibility of alternative policies to promote macroeconomic adjustment. Empirical evidence paints a mixed picture regarding the usefulness of nominal exchange rate flexibility: In many historical settings, flexible nominal exchange rates tend to create more relative price distortions than they have helped resolve; yet, in some contexts exchange rate devaluations can serve as a useful correction to severe relative price misalignments. Theoretical advances in studying open economy models either support the usefulness of exchange rate movements or find them irrelevant, depending on the specific characteristics of the model economy, including the particular specification of nominal rigidities, international openness in goods markets, and international financial integration. Yet in models that embody certain key aspects of the countries suffering the brunt of the eurozone crisis, such as over-borrowing and persistently high wages, it is found that nominal devaluation can be useful to prevent the type of excessive rise in unemployment observed. This theoretical research also raises alternative policies and mechanisms to substitute for nominal exchange rate adjustment. These policies include the standard fiscal tools of optimal currency area theory but also extend to a broader set of tools including import tariffs, export subsidies, and prudential taxes on capital flows. Certain combinations of these policies, labeled a “fiscal devaluation,” have been found in theory to replicate the effects of a currency devaluation in the context of a monetary union such as the eurozone. These theoretical developments are helpful for understanding the history of experiences in the eurozone, such as the eurozone crisis. They are also helpful for thinking about options for preventing such crises in the future.

Article

Marriage and labor market outcomes are deeply related, particularly for women. A large literature finds that the labor supply decisions of married women respond to their husbands’ employment status, wages, and job characteristics. There is also evidence that the effects of spouse characteristics on labor market outcomes operate not just through standard neoclassical cross-wage and income effects but also through household bargaining and gender norm effects, in which the relative incomes of husband and wife affect the distribution of marital surplus, marital satisfaction, and marital stability. Marriage market characteristics affect marital status and spouse characteristics, as well as the outside option, and therefore bargaining power, within marriage. They can therefore also affect premarital investments, which ultimately shape labor market outcomes within marriage, as well as labor supply decisions within marriage conditional on those premarital investments.

Article

Most applied researchers in macroeconomics who work with official macroeconomic statistics (such as those found in the National Accounts, the Balance of Payments, national government budgets, labor force statistics, etc.) treat data as immutable rather than subject to measurement error and revision. Some of this error may be caused by disagreement or confusion about what should be measured. Some may be due to the practical challenges of producing timely, accurate, and precise estimates. The economic importance of measurement error may be accentuated by simple arithmetic transformations of the data, or by more complex but still common transformations to remove seasonal or other fluctuations. As a result, measurement error is seemingly omnipresent in macroeconomics. Even the most widely used measures such as Gross Domestic Product (GDP) are acknowledged to be poor measures of aggregate welfare as they omit leisure and non-market production activity and fail to consider intertemporal issues related to the sustainability of economic activity. But even modest attempts to improve GDP estimates can generate considerable controversy in practice. Common statistical approaches to allow for measurement errors, including most factor models, rely on assumptions that are at odds with common economic assumptions, which imply that measurement errors in published aggregate series should behave much like forecast errors. Fortunately, recent research has shown how multiple data releases may be combined in a flexible way to give improved estimates of the underlying quantities. Increasingly, the challenge for macroeconomists is to recognize the impact that measurement error may have on their analysis and to condition their policy advice on a realistic assessment of the quality of the information available to them.
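
As a toy illustration of the idea that successive data releases can be combined to sharpen estimates of the underlying quantity, the following sketch weights two hypothetical releases by their assumed precisions; the numbers and error variances are invented for illustration, not drawn from the article:

```python
# Toy sketch: treat an early release and a revision as noisy measurements of a latent
# "true" growth rate and combine them with inverse-variance (precision) weights.
import numpy as np

rng = np.random.default_rng(2)
true_growth = 2.0
sigma_first, sigma_revised = 0.8, 0.3          # assumed measurement-error s.d. of each release
first = true_growth + sigma_first * rng.standard_normal()
revised = true_growth + sigma_revised * rng.standard_normal()

w_first = sigma_first**-2
w_revised = sigma_revised**-2
combined = (w_first * first + w_revised * revised) / (w_first + w_revised)
combined_sd = (w_first + w_revised) ** -0.5    # smaller than either release's own sd

print(f"first release {first:.2f}, revision {revised:.2f}, "
      f"combined {combined:.2f} (sd {combined_sd:.2f})")
```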

Article

José Luis Pinto-Prades, Arthur Attema, and Fernando Ignacio Sánchez-Martínez

Quality-adjusted life years (QALYs) are one of the main health outcome measures used to make health policy decisions. It is assumed that the objective of policymakers is to maximize QALYs. Since the QALY weights life years according to their health-related quality of life, it is necessary to calculate those weights (also called utilities) in order to estimate the number of QALYs produced by a medical treatment. The methodology most commonly used to estimate utilities is to present standard gamble (SG) or time trade-off (TTO) questions to a representative sample of the general population. It is assumed that, in this way, utilities reflect public preferences. Two different assumptions should hold for utilities to be a valid representation of public preferences. One is that the standard (linear) QALY model has to be a good model of how subjects value health. The second is that subjects should have consistent preferences over health states. The evidence suggests that most of the main assumptions of the popular linear QALY model do not hold, and that a modification of the linear model can be a tractable improvement. This implies that utilities elicited under the assumption that the linear QALY model holds may be biased. In addition, the second assumption, namely that subjects have consistent preferences that can be estimated by asking SG or TTO questions, does not seem to hold. Subjects are sensitive to features of the elicitation process (such as the order of questions or the type of task) that should not matter for the estimation of utilities. The evidence suggests that the questions (TTO, SG) that researchers ask members of the general population produce response patterns inconsistent with the assumption that subjects hold well-defined preferences over health states. Two approaches can deal with this problem. One is based on the assumption that subjects have true but biased preferences; true preferences can then be recovered from biased ones. This approach is valid as long as the theory used to debias is correct. The second approach is based on the idea that preferences are imprecise. In practice, national bodies use utilities elicited using TTO or SG under the assumptions that the linear QALY model is a good enough representation of public preferences and that subjects’ responses to preference elicitation methods are coherent.
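
For reference, the standard linear QALY model referred to above, and the way TTO and SG responses are mapped into utilities under it, can be written as follows (a standard textbook formulation, not reproduced from the article):

```latex
% Linear QALY model: the value of t years spent in health state q
V(q, t) = u(q)\, t, \qquad u(\text{full health}) = 1, \quad u(\text{dead}) = 0 .

% TTO: indifference between t years in state q and x <= t years in full health
u(q)\, t = 1 \cdot x \;\;\Rightarrow\;\; u(q) = x / t .

% SG: indifference between state q for certain and a gamble giving full health
% with probability p and death with probability 1 - p
u(q) = p \cdot 1 + (1 - p) \cdot 0 = p .
```

Both identifications rest on the linearity (and, for TTO, the absence of time discounting) assumptions whose validity the article questions.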

Article

Taxation and public spending are key policy levers the state has at its disposal to change the distribution of income, which is determined both by market forces and institutions and by the prevailing distribution of wealth and property. One of the most commonly used methods to measure the distributional impact of a country’s taxes and public spending is fiscal incidence analysis. Rooted in the field of public finance, fiscal incidence analysis is designed to measure who bears the burden of taxes and who receives the benefits of government spending, and who are the gainers and losers of particular tax reforms or changes to welfare programs. Fiscal incidence analysis can be used to assess the redistributive impact of a fiscal system as a whole or of changes to specific fiscal instruments. In particular, fiscal incidence analysis is used to address the following questions: Who bears the burden of taxation and who receives the benefits of public spending? How much income redistribution is being accomplished through taxation and public spending? What is the impact of taxation and public spending on poverty and the poor? How equalizing are specific taxes and government welfare programs? How progressive is spending on education and health? How effective are taxes and government spending in reducing inequality and poverty? Who are the winners and losers of tax and welfare program reforms? A sample of key indicators meant to address these questions is discussed here. Real-time analysis of winners and losers plays an important role in shaping the policy debate in a number of countries. In practice, fiscal incidence analysis is the method used to allocate taxes and public spending to households so that one can compare incomes before taxes and transfers with incomes after them. Standard fiscal incidence analysis just looks at what is paid and what is received without assessing the behavioral responses that taxes and public spending may trigger in individuals or households. This is often referred to as the “accounting approach.” Although the theory is quite straightforward, its application can be fraught with complications. The salient ones are discussed here. While ignoring behavioral responses and general equilibrium effects is a limitation of the accounting approach, the effects calculated with this method are considered a reasonable approximation of the short-run welfare impact. Fiscal incidence analysis, however, can be designed to include behavioral responses as well as general equilibrium and intertemporal effects. This article focuses on the implementation of fiscal incidence analysis using the accounting approach.
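
A minimal sketch of the accounting approach described above, with invented data and a single stylized tax and transfer, comparing inequality in incomes before and after the fiscal intervention (no behavioral responses are modeled):

```python
# Illustrative accounting-approach calculation: market income -> disposable income,
# with the redistributive effect summarized by the change in the Gini coefficient.
import numpy as np

def gini(income):
    """Gini coefficient of a nonnegative income vector."""
    x = np.sort(income)
    n = x.size
    return (2 * np.arange(1, n + 1) - n - 1) @ x / (n * x.sum())

rng = np.random.default_rng(3)
market_income = rng.lognormal(mean=10, sigma=0.8, size=10_000)   # hypothetical household incomes

# Stylized instruments: a 30% marginal tax above a threshold and a flat cash transfer below one.
direct_tax = np.where(market_income > 50_000, 0.30 * (market_income - 50_000), 0.0)
cash_transfer = np.where(market_income < 20_000, 5_000.0, 0.0)
disposable_income = market_income - direct_tax + cash_transfer

print(f"Gini, market income:     {gini(market_income):.3f}")
print(f"Gini, disposable income: {gini(disposable_income):.3f}")  # difference = redistributive effect
```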

Article

David A. Hyman and Charles Silver

Medical malpractice is the best studied aspect of the civil justice system. But the subject is complicated, and there are heated disputes about basic facts. For example, are premium spikes driven by factors that are internal (i.e., number of claims, payout per claim, and damage costs) or external to the system? How large (or small) is the impact of a damages cap? Do caps have a bigger impact on the number of cases that are brought or the payment in the cases that remain? Do blockbuster verdicts cause defendants to settle cases for more than they are worth? Do caps attract physicians? Do caps reduce healthcare spending—and by how much? How much does it cost to resolve the high percentage of cases in which no damages are recovered? What is the comparative impact of a cap on noneconomic damages versus a cap on total damages? Other disputes involve normative questions. Is there too much med mal litigation or not enough? Are damage caps fair? Is the real problem bad doctors or predatory lawyers—or some combination of both? This article summarizes the empirical research on the performance of the med mal system, and highlights some areas for future research.

Article

Despite the aggregate value of M&A market transactions amounting to several trillion dollars on an annual basis, acquiring firms often underperform relative to non-acquiring firms, especially in public takeovers. Although hundreds of academic studies have investigated the deal- and firm-level factors associated with M&A announcement returns, many factors that increase M&A performance in the short run fail to relate to sustained long-run returns. In order to understand value creation in M&As, it is key to identify the firm and deal characteristics that can reliably predict long-run performance. Broadly speaking, long-run underperformance in M&A deals results from poor acquirer governance (reflected by CEO overconfidence and a lack of (institutional) shareholder monitoring) as well as from poor merger execution and integration (as captured by the degree of acquirer-target relatedness in the post-merger integration process). Although many more dimensions affect immediate deal transaction success, their effect on long-run performance is non-existent, or mixed at best.

Article

Syed Abdul Hamid

Health microinsurance (HMI) has been used around the globe since the early 1990s for financial risk protection against health shocks in poverty-stricken rural populations in low-income countries. However, there is much debate in the literature on its impact on financial risk protection. There is also no clear answer to the critical policy question of whether HMI is a viable route to provide healthcare to the people of the informal economy, especially in rural areas. Findings show that HMI schemes are heavily concentrated in low-income countries, especially in South Asia (about 43%) and East Africa (about 25.4%). India accounts for 30% of HMI schemes, and Bangladesh and Kenya also host a substantial number of schemes. There is some evidence that HMI increases access to healthcare or utilization of healthcare. One set of the literature shows that HMI provides financial protection against the costs of illness to its enrollees by reducing out-of-pocket payments and/or catastrophic spending. By contrast, a large body of literature with strong methodological rigor shows that HMI fails to provide financial protection against health shocks to its clients. Some studies in the latter group even find that HMI contributes to a decline in financial risk protection. These findings seem logical, as most schemes involve high copayments and lack a continuum of care. The findings also show that scale and dependence on subsidy are major concerns. Low enrollment and low renewal are common concerns of the voluntary HMI schemes in South Asian countries. In addition, the declining trend of donor subsidies makes the HMI schemes supported by external donors more vulnerable. These challenges and constraints restrict the scale and profitability of HMI initiatives, especially those that are voluntary. Consequently, the existing organizations may cease HMI activities. Overall, although HMI can increase access to healthcare, it fails to provide financial risk protection against health shocks. The existing HMI practices in South Asia, especially in HMIs owned by nongovernmental organizations and microfinance institutions, are not a viable route to provide healthcare to the rural population of the informal economy. However, HMI schemes may play some supportive role in the implementation of a nationalized scheme, if there is one. There is also concern about the institutional viability of HMI organizations (e.g., ownership and management efficiency). Future research may address this issue.

Article

Martin D. D. Evans and Dagfinn Rime

An overview of research on the microstructure of foreign exchange (FX) markets is presented. We begin by summarizing the institutional features of FX trading and describe how they have evolved since the 1980s. We then explain how these features are represented in microstructure models of FX trading. Next, we describe the links between microstructure and traditional macro exchange-rate models and summarize how these links have been explored in recent empirical research. Finally, we provide a microstructure perspective on two recent areas of interest in exchange-rate economics: the behavior of returns on currency portfolios, and questions of competition and regulation.

Article

Eric Ghysels

The majority of econometric models ignore the fact that many economic time series are sampled at different frequencies. A burgeoning literature pertains to econometric methods explicitly designed to handle data sampled at different frequencies. Broadly speaking, these methods fall into two categories: (a) parameter driven, typically involving a state space representation, and (b) data driven, usually based on a mixed-data sampling (MIDAS)-type regression setting or related methods. The realm of applications of the class of mixed frequency models includes nowcasting—which is defined as the prediction of the present—as well as forecasting—typically the very near future—taking advantage of mixed frequency data structures. For multiple horizon forecasting, the topic of MIDAS regressions also relates to research regarding direct versus iterated forecasting.
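
A small illustration of a MIDAS-type regression as described above: a quarterly variable is regressed on twelve monthly lags of an indicator, with the lag weights restricted by an exponential Almon polynomial. The simulated data and parameterization are illustrative, not taken from the article:

```python
# Illustrative MIDAS regression with exponential Almon lag weights, fitted by
# nonlinear least squares.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n_q, n_lags = 200, 12
x_monthly = rng.standard_normal(3 * n_q + n_lags)

def almon_weights(theta1, theta2, k=n_lags):
    # Exponential Almon polynomial: a parsimonious, normalized weighting of k lags.
    j = np.arange(1, k + 1)
    w = np.exp(theta1 * j + theta2 * j**2)
    return w / w.sum()

# Each row: the 12 monthly observations relevant to quarter q, most recent first.
X_lags = np.array([x_monthly[3 * q : 3 * q + n_lags][::-1] for q in range(n_q)])
true_w = almon_weights(0.1, -0.05)
y = 0.5 + 2.0 * X_lags @ true_w + 0.3 * rng.standard_normal(n_q)

def ssr(params):
    const, slope, t1, t2 = params
    resid = y - const - slope * (X_lags @ almon_weights(t1, t2))
    return resid @ resid

fit = minimize(ssr, x0=np.array([0.0, 1.0, 0.0, 0.0]), method="Nelder-Mead")
print("estimated (const, slope, theta1, theta2):", np.round(fit.x, 2))
```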

Article

Pieter van Baal and Hendriek Boshuizen

In most countries, non-communicable diseases have overtaken infectious diseases as the most important causes of death. Many non-communicable diseases that were previously lethal have become chronic, and this has changed the healthcare landscape in terms of treatment and prevention options. Currently, a large part of healthcare spending is targeted at curing and caring for the elderly, who often have multiple chronic diseases. In this context prevention plays an important role, as there are many risk factors amenable to prevention policies that are related to multiple chronic diseases. This article discusses the use of simulation modeling to better understand the relations between chronic diseases and their risk factors, with the aim of informing health policy. Simulation modeling sheds light on important policy questions related to population aging and priority setting. The focus is on the modeling of multiple chronic diseases in the general population and how to consistently model the relations between chronic diseases and their risk factors by combining various data sources. Methodological issues in chronic disease modeling and how these relate to the availability of data are discussed. Here, a distinction is made between (a) issues related to the construction of the epidemiological simulation model and (b) issues related to linking outcomes of the epidemiological simulation model to economically relevant outcomes such as quality of life, healthcare spending, and labor market participation. Based on this distinction, several simulation models that link risk factors to multiple chronic diseases are discussed in order to explore how these issues are handled in practice. Recommendations for future research are provided.
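
A schematic example of the kind of simulation model discussed above: a three-state Markov cohort model (healthy, chronically ill, dead) in which a single risk factor raises disease incidence. All transition probabilities and the relative risk are hypothetical values chosen for illustration:

```python
# Illustrative three-state Markov cohort model linking a risk factor to chronic disease.
import numpy as np

def transition_matrix(smoker: bool) -> np.ndarray:
    p_incidence = 0.02 * (2.5 if smoker else 1.0)       # assumed relative risk of 2.5
    p_die_healthy, p_die_ill = 0.01, 0.05
    return np.array([
        [1 - p_incidence - p_die_healthy, p_incidence,   p_die_healthy],  # from healthy
        [0.0,                             1 - p_die_ill, p_die_ill],      # from chronically ill
        [0.0,                             0.0,           1.0],            # dead is absorbing
    ])

def simulate(smoker: bool, years: int = 40) -> np.ndarray:
    state = np.array([1.0, 0.0, 0.0])                    # cohort starts healthy
    P = transition_matrix(smoker)
    history = [state]
    for _ in range(years):
        state = state @ P                                # one annual transition
        history.append(state)
    return np.array(history)

for label, smoker in [("non-smoker", False), ("smoker", True)]:
    h = simulate(smoker)
    life_years = h[:, :2].sum()                          # expected years alive over the horizon
    print(f"{label}: share ill after 40 years = {h[-1, 1]:.2f}, expected life-years = {life_years:.1f}")
```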

Article

Audrey Laporte and Brian S. Ferguson

One of the implications of the human capital literature of the 1960s was that a great many decisions individuals make that have consequences not just for the point in time when the decision is being made but also for the future can be thought of as involving investments in certain types of capital. In health economics, this led Michael Grossman to propose the concept of health capital, which refers not just to the individual’s illness status at any point in time, but to the more fundamental factors that affect the likelihood that she will be ill at any point in her life and also affect her life expectancy at each age. In Grossman’s model, an individual purchased health-related commodities that acted through a health production function to improve her health. These commodities could be medical care, which could be seen as repair expenditures, or factors such as diet and exercise, which could be seen as ongoing additions to her health—the counterparts of adding savings to her financial capital on a regular basis. The individual was assumed to make decisions about her level of consumption of these commodities as part of an intertemporal utility-maximizing process that incorporated, through a budget constraint, the need to make tradeoffs between health-related goods and goods that had no health consequences. Pauline Ippolito showed that the same analytical techniques could be used to consider goods that were bad for health in the long run—bad diet and smoking, for example—still within the context of lifetime utility maximization. This raised the possibility that an individual might rationally take actions that were bad for her health in the long run. The logical extension of treating smoking as a harmful good was to recognize that smoking and other bad health habits are addictive. The notion of addictive commodities was already present in the literature on consumer behavior, but the consensus in that literature was that it was extremely difficult, if not impossible, to distinguish between a rational addict and a completely myopic consumer of addictive goods. Gary Becker and Kevin Murphy proposed an alternative approach to modeling a forward-looking, utility-maximizing consumer’s consumption of addictive commodities, based on the argument that an individual’s degree of addiction could be modeled as addiction capital, an approach that could be used to tackle the empirical problems that the consumer expenditure literature had experienced. That model has become the most widely used framework for empirical research by economists into the consumption of addictive goods, and, while the concept of rationality in addiction remains controversial, the Becker-Murphy framework also provides a basis for testing various alternative models of the consumption of addictive commodities, most notably those based on versions of time-inconsistent intertemporal decision making.
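
The two capital-accumulation structures at the center of this literature can be summarized compactly in standard textbook notation (a sketch, not reproduced from the article):

```latex
% Grossman-style health capital: health depreciates with age at rate delta_t and is
% rebuilt by gross investment I_t, produced from medical care M_t and own time TH_t.
H_{t+1} = (1 - \delta_t)\, H_t + I_t(M_t, TH_t)

% Becker-Murphy rational addiction: consumption a_t of the addictive good builds an
% addiction stock S_t that enters current utility alongside ordinary consumption c_t.
S_{t+1} = (1 - \delta)\, S_t + a_t, \qquad
\max_{\{c_t,\, a_t\}} \; \sum_{t=0}^{T} \beta^{t}\, u(c_t, a_t, S_t)
```

In both cases the stock links today's choices to future utility, which is what allows forward-looking, utility-maximizing behavior to rationalize investments in health as well as seemingly harmful habits.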