
Article

The current discontent with the dominant macroeconomic theory paradigm, known as Dynamic Stochastic General Equilibrium (DSGE) models, calls for an appraisal of the methods and strategies employed in studying and modeling macroeconomic phenomena using aggregate time series data. The appraisal pertains to the effectiveness of these methods and strategies in accomplishing the primary objective of empirical modeling: to learn from data about phenomena of interest. The co-occurring developments in macroeconomics and econometrics since the 1930s provide the backdrop for the appraisal, with the Keynes vs. Tinbergen controversy at center stage. The overall appraisal is that the DSGE paradigm gives rise to estimated structural models that are both statistically and substantively misspecified, yielding untrustworthy evidence that contributes very little, if anything, to real learning from data about macroeconomic phenomena. A primary contributor to the untrustworthiness of evidence is the traditional econometric perspective of viewing empirical modeling as curve-fitting (structural models), guided by impromptu error-term assumptions and evaluated on goodness-of-fit grounds. Regrettably, excellent fit is neither necessary nor sufficient for the reliability of inference and the trustworthiness of the ensuing evidence. Recommendations on how to improve the trustworthiness of empirical evidence revolve around a broader model-based (non-curve-fitting) modeling framework that attributes cardinal roles to both theory and data without undermining the credibility of either source of information. Two crucial distinctions hold the key to securing the trustworthiness of evidence. The first distinguishes between modeling (specification, misspecification testing, and respecification) and inference, and the second between a substantive (structural) model and a statistical model (the probabilistic assumptions imposed on the particular data). This enables one to establish statistical adequacy (the validity of these assumptions) before relating the statistical model to the structural model and posing questions of interest to the data. The greatest enemy of learning from data about macroeconomic phenomena is not the absence of an alternative and more coherent empirical modeling framework, but the illusion that foisting highly formal structural models on the data can give rise to such learning just because their construction and curve-fitting rely on seemingly sophisticated tools. Regrettably, applying sophisticated tools to a statistically and substantively misspecified DSGE model does nothing to restore the trustworthiness of the evidence stemming from it.

Article

Syed Abdul Hamid

Health microinsurance (HMI) has been used around the globe since the early 1990s for financial risk protection against health shocks in poverty-stricken rural populations in low-income countries. However, there is much debate in the literature on its impact on financial risk protection. There is also no clear answer to the critical policy question of whether HMI is a viable route to provide healthcare to people in the informal economy, especially in rural areas. Findings show that HMI schemes are concentrated in low-income countries, especially in South Asia (about 43%) and East Africa (about 25.4%). India accounts for 30% of HMI schemes, and Bangladesh and Kenya also host a substantial number of schemes. There is some evidence that HMI increases access to healthcare or utilization of healthcare. One strand of the literature shows that HMI provides financial protection against the costs of illness to its enrollees by reducing out-of-pocket payments and/or catastrophic spending. On the contrary, a large body of literature with strong methodological rigor shows that HMI fails to provide financial protection against health shocks to its clients; some studies in this latter group even find that HMI contributes to a decline in financial risk protection. These findings are plausible given the high copayments and the lack of a continuum of care in most cases. The findings also show that scale and dependence on subsidy are the major concerns. Low enrollment and low renewal are common concerns of the voluntary HMI schemes in South Asian countries. In addition, the declining trend of donor subsidies makes HMI schemes supported by external donors more vulnerable. These challenges and constraints restrict the scale and profitability of HMI initiatives, especially those that are voluntary; consequently, existing organizations may cease HMI activities. Overall, although HMI can increase access to healthcare, it fails to provide financial risk protection against health shocks. The existing HMI practices in South Asia, especially schemes owned by nongovernmental organizations and microfinance institutions, are not a viable route to provide healthcare to the rural population of the informal economy. However, HMI schemes may play a supportive role in the implementation of a nationalized scheme, if there is one. There is also concern about the institutional viability of HMI organizations (e.g., ownership and management efficiency), which future research may address.

Article

Martin D. D. Evans and Dagfinn Rime

An overview of research on the microstructure of foreign exchange (FX) markets is presented. We begin by summarizing the institutional features of FX trading and describe how they have evolved since the 1980s. We then explain how these features are represented in microstructure models of FX trading. Next, we describe the links between microstructure and traditional macro exchange-rate models and summarize how these links have been explored in recent empirical research. Finally, we provide a microstructure perspective on two recent areas of interest in exchange-rate economics: the behavior of returns on currency portfolios, and questions of competition and regulation.

Article

Eric Ghysels

The majority of econometric models ignore the fact that many economic time series are sampled at different frequencies. A burgeoning literature pertains to econometric methods explicitly designed to handle data sampled at different frequencies. Broadly speaking, these methods fall into two categories: (a) parameter driven, typically involving a state space representation, and (b) data driven, usually based on a mixed-data sampling (MIDAS)-type regression setting or related methods. The realm of applications of mixed-frequency models includes nowcasting—defined as prediction of the present—as well as forecasting, typically of the very near future, taking advantage of mixed-frequency data structures. For multiple-horizon forecasting, the topic of MIDAS regressions also relates to research on direct versus iterated forecasting.
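To make the data-driven approach concrete, the sketch below fits a toy MIDAS-style regression in Python: a quarterly variable is regressed on a weighted sum of monthly-frequency predictor lags, with the weights restricted by a two-parameter exponential Almon polynomial. All names, the simulated data, and the specific weighting scheme are illustrative assumptions, not details taken from the article.

```python
# Toy MIDAS-style regression: quarterly y on a parsimoniously weighted sum of
# monthly lags of x. Illustrative sketch only; data and parameters are simulated.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
T, m = 80, 3                                     # 80 quarters, 3 monthly lags per quarter
x_monthly = rng.standard_normal((T, m))          # columns = monthly lags within each quarter
true_w = np.array([0.6, 0.3, 0.1])               # true (unknown) lag weights
y = 0.5 + 0.8 * x_monthly @ true_w + 0.3 * rng.standard_normal(T)

def exp_almon(theta, m):
    """Exponential Almon lag weights, normalized to sum to one."""
    j = np.arange(1, m + 1)
    w = np.exp(theta[0] * j + theta[1] * j**2)
    return w / w.sum()

def ssr(params):
    """Sum of squared residuals of the restricted (MIDAS) regression."""
    a, b, t1, t2 = params
    w = exp_almon([t1, t2], m)
    resid = y - (a + b * (x_monthly @ w))
    return resid @ resid

res = minimize(ssr, x0=[0.0, 1.0, 0.0, 0.0], method="Nelder-Mead")
a_hat, b_hat, t1_hat, t2_hat = res.x
print("slope:", round(b_hat, 2), "weights:", exp_almon([t1_hat, t2_hat], m).round(2))
```

The restriction is the point of the exercise: instead of estimating one coefficient per high-frequency lag, the lag pattern is governed by two hyperparameters, which keeps the regression parsimonious even when many high-frequency lags are involved.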

Article

Pieter van Baal and Hendriek Boshuizen

In most countries, non-communicable diseases have taken over from infectious diseases as the most important causes of death. Many non-communicable diseases that were previously lethal have become chronic, and this has changed the healthcare landscape in terms of treatment and prevention options. Currently, a large part of healthcare spending is targeted at curing and caring for the elderly, who have multiple chronic diseases. In this context prevention plays an important role, as there are many risk factors amenable to prevention policies that are related to multiple chronic diseases. This article discusses the use of simulation modeling to better understand the relations between chronic diseases and their risk factors, with the aim of informing health policy. Simulation modeling sheds light on important policy questions related to population aging and priority setting. The focus is on the modeling of multiple chronic diseases in the general population and on how to consistently model the relations between chronic diseases and their risk factors by combining various data sources. Methodological issues in chronic disease modeling and how these relate to the availability of data are discussed. Here, a distinction is made between (a) issues related to the construction of the epidemiological simulation model and (b) issues related to linking outcomes of the epidemiological simulation model to economically relevant outcomes such as quality of life, healthcare spending, and labor market participation. Based on this distinction, several simulation models that link risk factors to multiple chronic diseases are discussed in order to explore how these issues are handled in practice. Recommendations for future research are provided.
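As a schematic illustration of what such an epidemiological simulation model does, the Python sketch below propagates a cohort through a minimal set of health states, with disease incidence scaled by a risk-factor relative risk. The states, transition probabilities, and relative-risk values are hypothetical choices for illustration, not taken from any model discussed in the article.

```python
# Minimal cohort-level chronic-disease simulation: healthy -> chronic disease -> dead,
# with incidence depending on a risk-factor relative risk. All numbers are hypothetical.
import numpy as np

def transition_matrix(relative_risk):
    """Annual transition probabilities; disease incidence scales with the relative risk."""
    incidence = min(0.02 * relative_risk, 0.99)
    return np.array([
        [1.0 - incidence - 0.01, incidence, 0.01],   # from healthy
        [0.0,                    0.95,      0.05],   # from chronic disease
        [0.0,                    0.0,       1.00],   # dead is absorbing
    ])

def simulate(start, relative_risk, years=30):
    """Propagate the cohort distribution over states year by year."""
    P = transition_matrix(relative_risk)
    history = [start]
    for _ in range(years):
        history.append(history[-1] @ P)
    return np.array(history)

start = np.array([1.0, 0.0, 0.0])                    # everyone starts healthy
baseline = simulate(start, relative_risk=1.0)
exposed = simulate(start, relative_risk=2.0)
print("disease prevalence after 30 years:",
      round(baseline[-1][1], 3), "(baseline) vs", round(exposed[-1][1], 3), "(exposed)")
```

Linking such epidemiological output to economic outcomes then amounts to attaching, for example, quality-of-life weights or annual spending to each state and summing over the simulated state occupancy.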

Article

Audrey Laporte and Brian S. Ferguson

One of the implications of the human capital literature of the 1960s was that a great many decisions individuals make that have consequences not just for the point in time when the decision is being made but also for the future can be thought of as involving investments in certain types of capital. In health economics, this led Michael Grossman to propose the concept of health capital, which refers not just to the individual’s illness status at any point in time, but to the more fundamental factors that affect the likelihood that she will be ill at any point in her life and also affect her life expectancy at each age. In Grossman’s model, an individual purchased health-related commodities that acted through a health production function to improve her health. These commodities could be medical care, which could be seen as repair expenditures, or factors such as diet and exercise, which could be seen as ongoing additions to her health—the counterparts of adding savings to her financial capital on a regular basis. The individual was assumed to make decisions about her level of consumption of these commodities as part of an intertemporal utility-maximizing process that incorporated, through a budget constraint, the need to make tradeoffs between health-related goods and goods that had no health consequences. Pauline Ippolito showed that the same analytical techniques could be used to consider goods that were bad for health in the long run—bad diet and smoking, for example—still within the context of lifetime utility maximization. This raised the possibility that an individual might rationally take actions that were bad for her health in the long run. The logical extension of considering smoking as bad was recognizing that smoking and other bad health habits were addictive. The notion of addictive commodities was already present in the literature on consumer behavior, but the consensus in that literature was that it was extremely difficult, if not impossible, to distinguish between a rational addict and a completely myopic consumer of addictive goods. Gary Becker and Kevin Murphy proposed an alternative approach to modeling a forward-looking, utility-maximizing consumer’s consumption of addictive commodities, based on the argument that an individual’s degree of addiction could be modeled as addiction capital, an approach that could also be used to tackle the empirical problems that the consumer expenditure literature had experienced. That model has become the most widely used framework for empirical research by economists into the consumption of addictive goods, and, while the concept of rationality in addiction remains controversial, the Becker-Murphy framework also provides a basis for testing various alternative models of the consumption of addictive commodities, most notably those based on versions of time-inconsistent intertemporal decision making.
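For readers who want the mechanics behind this narrative, the block below gives a standard discrete-time textbook rendering of the Grossman setup; the notation is chosen here for illustration and is not taken from the article.

```latex
% A standard textbook rendering of the Grossman health-capital problem (illustrative notation).
\begin{align*}
\max_{\{M_t,\, Z_t\}} \;\; & \sum_{t=0}^{T} \beta^{t}\, U\!\bigl(\phi_t H_t,\; Z_t\bigr)
  && \text{utility from healthy time and other consumption} \\
\text{s.t.}\quad
  & H_{t+1} = (1-\delta_t)\, H_t + I_t(M_t, \mathit{TH}_t)
  && \text{health capital: depreciation plus gross investment} \\
  & \sum_{t=0}^{T} \frac{p_t M_t + q_t Z_t}{(1+r)^{t}} \;\le\; W_0 + \sum_{t=0}^{T} \frac{w_t\, \mathit{TW}_t}{(1+r)^{t}}
  && \text{lifetime budget constraint,}
\end{align*}
```

where $M_t$ stands for medical care and similar health inputs, $\mathit{TH}_t$ is time devoted to health, $\delta_t$ is age-dependent depreciation, and death occurs when the health stock falls below some minimum. The Becker-Murphy addiction model uses the same machinery, with a stock of addiction capital built up by past consumption of the addictive good in place of (or alongside) health capital.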

Article

Multi-criteria decision analysis (MCDA) is increasingly used to support healthcare decision-making. MCDA involves decision makers evaluating the alternatives under consideration based on the explicit weighting of criteria relevant to the overarching decision, in order to, depending on the application, rank (or prioritize) or choose between the alternatives. A prominent example of MCDA applied to healthcare decision-making, and the main subject of this article, is choosing which health “technologies” (i.e., drugs, devices, procedures, etc.) to fund—a process known as health technology assessment (HTA)—which has received a lot of attention in recent years. Other applications include prioritizing patients for surgery, prioritizing diseases for R&D, and decision-making about licensing treatments. Most applications are based on weighted-sum models. Such models involve explicitly weighting the criteria and rating the alternatives on the criteria, with each alternative’s “performance” on the criteria aggregated using a linear (i.e., additive) equation to produce the alternative’s “total score,” by which the alternatives are ranked. The steps involved in an MCDA process are explained, including an overview of methods for scoring alternatives on the criteria and weighting the criteria. The steps are: structuring the decision problem being addressed, specifying criteria, measuring alternatives’ performance, scoring alternatives on the criteria and weighting the criteria, applying the scores and weights to rank the alternatives, and presenting the MCDA results, including sensitivity analysis, to decision makers to support their decision-making. Arguments recently advanced against using MCDA for HTA, and counterarguments, are also considered. Finally, five questions associated with how MCDA for HTA is operationalized are discussed: Whose preferences are relevant for MCDA? Should criteria and weights be decision-specific or identical for repeated applications? How should cost or cost-effectiveness be included in MCDA? How can the opportunity cost of decisions be captured in MCDA? How can uncertainty be incorporated into MCDA?
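A minimal numerical sketch of the weighted-sum model described above, in Python; the criteria, weights, and performance scores are entirely hypothetical and only illustrate the additive aggregation step.

```python
# Weighted-sum (additive) MCDA aggregation: total score = sum of weight * criterion score.
# Criteria, weights, and scores below are hypothetical illustrations.
import numpy as np

criteria = ["health gain", "severity of disease", "cost (rescaled so higher = better)"]
weights = np.array([0.5, 0.3, 0.2])          # explicit criterion weights, summing to one

scores = np.array([                          # rows = technologies, columns = criteria (0-100)
    [80, 60, 40],                            # technology A
    [55, 90, 70],                            # technology B
    [65, 50, 90],                            # technology C
])

total_scores = scores @ weights              # linear (additive) aggregation
for rank, idx in enumerate(np.argsort(-total_scores), start=1):
    print(f"{rank}. Technology {'ABC'[idx]}: total score {total_scores[idx]:.1f}")
```

Sensitivity analysis in this setting typically amounts to re-running the aggregation with perturbed weights or scores and checking whether the ranking of alternatives changes.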

Article

Ching-mu Chen and Shin-Kun Peng

For research attempting to investigate why economic activities are distributed unevenly across geographic space, new economic geography (NEG) provides a general equilibrium-based and microfounded approach to modeling a spatial economy characterized by a large variety of economic agglomerations. NEG emphasizes how agglomeration (centripetal) and dispersion (centrifugal) forces interact to generate observed spatial configurations and uneven distributions of economic activity. However, numerous economic geographers prefer to use the term new economic geographies for the vigorous and diversified academic output inspired by the institutional-cultural turn of economic geography. Accordingly, the term geographical economics has been suggested as an alternative to NEG. Approaches for modeling a spatial economy through a general equilibrium framework have not only rendered existing concepts amenable to empirical scrutiny and policy analysis but also drawn economic geography and location theories from the periphery to the center of mainstream economic theory. Reduced-form empirical studies have attempted to test certain implications of NEG. However, because of NEG’s simplified geographic settings, the developed NEG models cannot be easily applied to observed data. The recent development of quantitative spatial models, based on the mechanisms formalized by earlier NEG theories, has been a breakthrough in building an empirically relevant framework for implementing counterfactual policy exercises. If quantitative spatial models can connect with observed data in an empirically meaningful manner, they can enable the decomposition of key theoretical mechanisms and afford specificity in evaluating the general equilibrium effects of policy interventions in particular settings. In the several decades since its proposal, NEG has been criticized for its parsimonious assumptions about the economy across space and time. The remaining challenges therefore call for theoretical and quantitative models built on new microfoundations for the interactions between economic agents across geographical space and the relationship between geography and economic development.

Article

Chao Gu, Han Han, and Randall Wright

This article provides an introduction to New Monetarist Economics. This branch of macro and monetary theory emphasizes frictions such as imperfect commitment, information problems, and (sometimes endogenous) spatial separation, and uses them to derive institutions like monetary exchange and financial intermediation endogenously. We present three generations of models in the development of New Monetarism. The first generation studies an environment in which agents meet bilaterally and lack commitment, which allows money to be valued endogenously as a means of payment. In this setup both goods and money are indivisible to keep things tractable. Second-generation models relax the assumption of indivisible goods and use bargaining theory (or related mechanisms) to endogenize prices. Variations of these models are applied to financial asset markets and intermediation. Assets and goods are both divisible in third-generation models, which makes them better suited to policy analysis and empirical work. This framework can also be used to help understand financial markets and liquidity.

Article

Vincenzo Atella and Joanna Kopinska

New sanitation and health technology applied to treatments, procedures, and devices is constantly reshaping epidemiological patterns. Since the early 1900s it has been responsible for significant improvements in population health by turning once-deadly diseases into curable or preventable conditions, by extending existing cures to more patients and diseases, and by simplifying procedures for both medical and organizational practices. Notwithstanding the benefits of technological progress for population health, the innovation process is also an important driver of health expenditure growth across all countries. Technological progress generates an additional financial burden and expands the volume of services provided, which is a concern from an economic point of view. Moreover, the evolution of technology costs and their impact on healthcare spending is difficult to predict, owing to the revolutionary nature of many innovations and their adoption. In this respect, the challenge for policymakers is to discourage overadoption of ineffective, unnecessary, and inappropriate technologies. This task has long been carried out through regulation, which according to standard economic theory is the only response to market failures and socially undesirable outcomes of healthcare markets left on their own. The potential welfare loss of a market failure must be weighed against the costs of regulatory activities. While health technology evolution delivers important value for patients and societies, it will continue to pose important challenges for already overextended public finances.

Article

Karla DiazOrdaz and Richard Grieve

Health economic evaluations face the issues of noncompliance and missing data. Here, noncompliance is defined as non-adherence to a specific treatment; it occurs within randomized controlled trials (RCTs) when participants depart from their random assignment. Missing data arise if, for example, there is loss to follow-up, survey non-response, or the information available from routine data sources is incomplete. Appropriate statistical methods for handling noncompliance and missing data have been developed, but they have rarely been applied in health economics studies. Here, we illustrate the issues and outline some of the appropriate methods for handling them, with an application to a health economic evaluation that uses data from an RCT. In an RCT the random assignment can be used as an instrument for treatment receipt, to obtain consistent estimates of the complier average causal effect, provided the underlying assumptions are met. Instrumental variable methods can accommodate essential features of the health economic context, such as the correlation between individuals’ costs and outcomes in cost-effectiveness studies. Methodological guidance for handling missing data encourages approaches such as multiple imputation or inverse probability weighting, which assume the data are Missing At Random, but also sensitivity analyses that recognize the data may be missing according to the true, unobserved values, that is, Missing Not At Random. Future studies should subject the assumptions behind methods for handling noncompliance and missing data to thorough sensitivity analyses. Modern machine-learning methods can help reduce reliance on correct model specification. Further research is required to develop flexible methods for handling more complex forms of noncompliance and missing data.
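To make the instrumental-variable idea concrete, the sketch below computes the complier average causal effect (CACE) in a simulated trial with one-sided noncompliance, using the simple Wald ratio of intention-to-treat effects. The simulated data and parameter values are assumptions for illustration, not taken from the article.

```python
# CACE via the Wald/IV estimator: random assignment instruments for treatment receipt.
# Simulated trial with one-sided noncompliance; all parameter values are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n = 5000

z = rng.integers(0, 2, n)                      # random assignment (the instrument)
complier = rng.random(n) < 0.7                 # 70% of participants would comply
d = np.where(complier, z, 0)                   # treatment actually received
cost = 100 + 50 * d + rng.normal(0, 20, n)     # outcome: true treatment effect on cost = 50

itt_on_cost = cost[z == 1].mean() - cost[z == 0].mean()     # intention-to-treat effect
itt_on_receipt = d[z == 1].mean() - d[z == 0].mean()        # "first stage": effect on receipt

cace = itt_on_cost / itt_on_receipt            # Wald ratio = IV estimate of the complier effect
print(f"ITT effect: {itt_on_cost:.1f}, CACE: {cace:.1f}")
```

In a full cost-effectiveness analysis the same logic is typically applied jointly to costs and health outcomes, so that the correlation between the two is carried through to the estimated incremental cost-effectiveness ratio.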

Article

Many nonlinear time series models have been around for a long time and originated outside of time series econometrics. Popular stochastic models (univariate, dynamic single-equation, and vector autoregressive) are presented and their properties considered; deterministic nonlinear models are not reviewed. The use of nonlinear vector autoregressive models in macroeconometrics seems to be increasing, and because this may be viewed as a rather recent development, they receive somewhat more attention than their univariate counterparts. Vector threshold autoregressive, smooth transition autoregressive, Markov-switching, and random coefficient autoregressive models are covered, along with nonlinear generalizations of vector autoregressive models with cointegrated variables. Two nonlinear panel models, although not typically macroeconometric models, have also been frequently applied to macroeconomic data. The use of all these models in macroeconomics is highlighted with applications in which model selection, an often difficult issue in nonlinear models, has received due attention. Given the large number of nonlinear time series models, no single best method of choosing between them seems to be available.
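As one concrete example of the model class discussed above, a two-regime logistic smooth transition autoregression (LSTAR) can be written in standard textbook notation (the notation is illustrative, not taken from the article):

```latex
\begin{align*}
y_t &= \phi_1' \mathbf{x}_t \,\bigl(1 - G(s_t;\gamma,c)\bigr) + \phi_2' \mathbf{x}_t \, G(s_t;\gamma,c) + \varepsilon_t,
\qquad \mathbf{x}_t = (1, y_{t-1}, \dots, y_{t-p})', \\
G(s_t;\gamma,c) &= \bigl(1 + \exp\{-\gamma (s_t - c)\}\bigr)^{-1}, \qquad \gamma > 0,
\end{align*}
```

where $s_t$ is an observable transition variable, often a lag of $y_t$. As $\gamma \to \infty$ the transition becomes abrupt and the model approaches a threshold autoregression, while Markov-switching models instead replace $G(\cdot)$ with an unobserved regime indicator; the vector versions covered in the article apply the same idea equation by equation to a vector autoregression.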

Article

The literature on optimum currency areas differs from that on other topics in economic theory in a number of notable respects. Most obviously, the theory is framed in verbal rather than mathematical terms. Mundell’s seminal article coining the term and setting out the theory’s basic propositions relied entirely on words rather than equations. The same was true of subsequent contributions focusing on the sectoral composition of activity and the role of fiscal flows. A handful of more recent articles specified and analyzed formal mathematical models of optimum currency areas. But it is safe to say that none of these has “taken off” in the sense of becoming the workhorse framework on which subsequent scholarship builds. The theoretical literature remains heavily qualitative and narrative compared to other areas of economic theory. While Mundell, McKinnon, Kenen, and the other founding fathers of optimum-currency-area theory provided powerful intuition, attempts to further formalize that intuition evidently contributed less to advances in economic understanding than has been the case for other theoretical literatures. Second, recent contributions to the literature on optimum currency areas are motivated to an unusual extent by a particular case, namely Europe’s monetary union. This was true already in the 1990s, when the EU’s unprecedented decision to proceed with the creation of the euro highlighted the question of whether Europe was an optimum currency area and, if not, how it might become one. That tendency was reinforced when Europe then descended into crisis starting in 2009. With only slight exaggeration it can be said that the literature on optimum currency areas became almost entirely a literature on Europe and on that continent’s failure to satisfy the relevant criteria. Third, the literature on optimum currency areas remains a product of its age. When the founders wrote, in the 1960s, banks were more strictly regulated and financial markets were less internationalized than subsequently. Consequently, the connections between monetary integration and financial integration—whether monetary union requires banking union, as the point is now put—were neglected in the earlier literature. The role of cross-border financial flows as a destabilizing mechanism within a currency area did not receive the attention it deserved. Because much of that earlier literature was framed in a North American context—the question was whether the United States or Canada was an optimum currency area—and because the question was posed by a trio of scholars, two of whom hailed from Canada and one of whom hailed from the United States, the challenges of reconciling monetary integration with political nationalism and the question of whether monetary union requires political union were similarly underplayed. Given the euro area’s descent into crisis, a number of analysts have asked why economists didn’t sound louder warnings in advance. The answer is that their outlooks were shaped by a literature that developed in an earlier era when the risks and context were different.

Article

Bent Nielsen

This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Economics and Finance. Detection of outliers is an important explorative step in empirical analysis. Once detected, the investigator will have to decide how to model the outliers depending on the context. Indeed, the outliers may represent noisy observations that are best left out of the analysis, or they may be very informative observations that play a particularly important role in the analysis. For regression analysis in time series, a number of outlier algorithms are available, including impulse indicator saturation and methods from robust statistics. The algorithms are complex and their statistical properties are not fully understood. Extensive simulation studies have been conducted, but the formal theory is lacking. Some progress has been made toward an asymptotic theory of the algorithms, and a number of asymptotic results building on empirical process theory are already available.

Article

Jun Li and Edward C. Norton

Pay-for-performance programs have become a prominent supply-side intervention to improve quality and decrease spending in health care, touching upon long-term care, acute care, and outpatient care. Pay-for-performance directly targets long-term care, with programs in nursing homes and home health. Indirectly, pay-for-performance programs targeting acute care settings affect clinical practice for long-term care providers through incentives for collaboration across settings. As a whole, a pay-for-performance program entails the identification of the problems it seeks to solve, measurement of the dimensions it seeks to incentivize, methods to combine and translate performance into incentives, and application of the incentives to reward performance. For the long-term care population, pay-for-performance programs must also heed the unique challenges of the sector, such as patients with complex health needs and distinct health trajectories, and must be structured to recognize the difficulty of incentivizing performance improvement when multiple providers and payers are involved in care delivery. Although empirical results indicate modest effectiveness of pay-for-performance in long-term care in improving targeted measures, some research has provided more clarity on the role of pay-for-performance design in shaping program output, highlighting room for future research. Further, because health care is interconnected, the indirect effects of pay-for-performance programs on long-term care are an underexplored topic. As the scope of pay-for-performance in long-term care expands, both within the United States and internationally, pay-for-performance offers ample opportunities for future research.

Article

Murray Z. Frank, Vidhan Goyal, and Tao Shen

The pecking order theory of corporate capital structure developed by states that issuing securities is subject to an adverse selection problem. Managers endowed with private information have incentives to issue overpriced risky securities. But they also understand that issuing such securities will result in a negative price reaction because rational investors, who are at an information disadvantage, will discount the prices of any risky securities the firm issues. Consequently, firms follow a pecking order: use internal resources when possible; if internal funds are inadequate, obtain external debt; external equity is the last resort. Large firms rely significantly on internal finance to meet their needs. External net debt issues finance the minor deficits that remain. Equity is not a significant source of financing for large firms. By contrast, small firms lack sufficient internal resources and obtain external finance. Although much of it is equity, there are substantial issues of debt by small firms. Firms are sorted into three portfolios based on whether they have a surplus or a deficit. About 15% of firm-year observations are in the surplus group. Firms primarily use surpluses to pay down debt. About 56% of firm-year observations are in the balance group. These firms generate internal cash flows that are just about enough to meet their investment and dividend needs. They issue debt, which is just enough to meet their debt repayments. They are relatively inactive in equity markets. About 29% of firm-year observations are in the deficit group. Deficits arise because of a combination of negative profitability and significant investments in both real and financial assets. Some financing patterns in the data are consistent with a pecking order: firms with moderate deficits favor debt issues; firms with very high deficits rely much more on equity than debt. Others are not: many equity-issuing firms do not seem to have entirely used up the debt capacity; some with a surplus issue equity. The theory suggests a sharp discontinuity in financing methods between surplus firms and deficit firms, and another at debt capacity. The literature provides little support for the predicted threshold effects. The theoretical work has shown that adverse selection does not necessarily lead to pecking order behavior. The pecking order is obtained only under special conditions. With both risky debt and equity being issued, there is often scope for many equilibria, and there is no clear basis for selecting among them. A pecking order may or may not emerge from the theory. Several articles show that the adverse selection problem can be solved by certain financing strategies or properly designed managerial contracts and can even disappear in dynamic models. Although adverse selection can generate a pecking order, it can also be caused by agency considerations, transaction costs, tax consideration, or behavioral decision-making considerations. Under standard tests in the literature, these alternative underlying motivations are commonly observationally equivalent.
Article

Stuti Khemani

“Reform” in the economics literature refers to changes in government policies or institutional rules because status-quo policies and institutions are not working well to achieve the goals of economic wellbeing and development. Further, reform refers to alternative policies and institutions that are available which would most likely perform better than the status quo. The main question examined in the “political economy of reform” literature has been why reforms are not undertaken when they are needed for the good of society. The succinct answer from the first generation of research is that conflict of interest between organized socio-political groups is responsible for some groups being able to stall reforms to extract greater private rents from status-quo policies. The next generation of research is tackling more fundamental and enduring questions: Why does conflict of interest persist? How are some interest groups able to exert influence against reforms if there are indeed large gains to be had for society? What institutions are needed to overcome the problem of credible commitment so that interest groups can be compensated or persuaded to support reforms? Game theory—or the analysis of strategic interactions among individuals and groups—is being used more extensively, going beyond the first generation of research which focused on the interaction between “winners” and “losers” from reforms. Widespread expectations, or norms, in society at large, not just within organized interest groups, about how others are behaving in the political sphere of making demands upon government; and, beliefs about the role of public policies, or preferences for public goods, shape these strategic interactions and hence reform outcomes. Examining where these norms and preferences for public goods come from, and how they evolve, are key to understanding why conflict of interest persists and how reformers can commit to finding common ground for socially beneficial reforms. Political markets and institutions, through which the leaders who wield power over public policy are selected and sanctioned, shape norms and preferences for public goods. Leaders who want to pursue reforms need to use the evidence in favor of reforms to build broad-based support in political markets. Contrary to the first generation view of reforms by stealth, the next generation of research suggests that public communication in political markets is needed to develop a shared understanding of policies for the public good. Concomitantly, the areas of reform have circled from market liberalization, which dominated the 20th century, back to strengthening governments to address problems of market failure and public goods in the 21st century. Reforms involve anti-corruption and public sector management in developing countries; improving health, education, and social protection to address persistent inequality in developed countries; and regulation to preserve competition and to price externalities (such as pollution and environmental depletion) in markets around the world. Understanding the functioning of politics is more important than ever before in determining whether governments are able to pursue reforms for public goods or fall prey to corruption and populism.
Article

Tort law is part of the common law that originated in England after the Norman Conquest and spread throughout the world, including to the United States. It is judge-made law that allows people who have been injured by others to sue those who harmed them and collect damages in proper cases. Since its early origins, tort law has evolved considerably and has become a full-fledged “grown order,” like the economy, and, also like the economy, can best be understood by positive theory. Economic theories of tort have developed since the early 1970s, and they too have evolved over time. Their objective is to generate fresh insight about the purposes and the workings of the tort system. The basic thesis of the economic theory is that tort law creates incentives for people to minimize social cost, which comprises the harm produced by torts and the cost of the precautions necessary to prevent torts. This thesis, intentionally simple, generates many fresh insights about the workings and effects of the tort system and even about the actual legal rules that judges have developed. In an evolved grown order, legal rules are far less concrete than most people would expect, though often very clear in application. Beginning also in the 1970s, legal philosophers have objected to the economic theory of tort and have devised competing philosophical theories. The competition, moreover, has been productive because it has spurred both sides to revise and improve their theories and to seek to understand the law better. Tort law is diverse, applicable to many different activities and situations, so developing a positive theory about it is both challenging and rewarding.
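The social-cost thesis admits a compact textbook formalization (the notation here is illustrative, not drawn from the article): with $x$ the level of precaution, $c$ its unit cost, $p(x)$ the probability of an accident, and $H$ the harm if one occurs,

```latex
\[
\min_{x \ge 0} \; c\,x + p(x)\,H, \qquad p'(x) < 0, \quad p''(x) > 0,
\]
```

so the efficient level of care $x^*$ satisfies $c = -p'(x^*)\,H$: precaution is worth taking up to the point where its marginal cost equals the marginal reduction in expected harm. A negligence standard set at $x^*$ then gives potential injurers an incentive to take exactly that level of care.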

Article

The interaction between poverty and social policy is an issue of longstanding interest in academic and policy circles. There are active debates on how to measure poverty, including where to draw the threshold determining whether a family is deemed to be living in poverty and how to measure the resources available to it. These decisions have profound impacts on our understanding of the anti-poverty effectiveness of social welfare programs. In the context of the United States, focusing solely on cash income transfers shows little progress against poverty over the past 50 years, but substantial gains emerge if the resource concept is expanded to include in-kind transfers and refundable tax credits. Beyond poverty, the research literature has examined the effects of social welfare policy on a host of outcomes such as labor supply, consumption, health, wealth, fertility, and marriage. Most of this work finds the disincentive effects of welfare programs on work, saving, and family structure to be small, but the income- and consumption-smoothing benefits to be sizable, and some recent work has found positive long-term effects of transfer programs on the health and education of children. More research is needed, however, on how to measure poverty, especially in the face of the deteriorating quality of household surveys, on the long-term consequences of transfer programs, and on alternative designs of the welfare state.

Article

Jesús Gonzalo and Jean-Yves Pitarakis

Predictive regressions are a widely used econometric framework for assessing the predictability of economic and financial variables using past values of one or more predictors. The applications considered by practitioners often involve predictors with highly persistent, smoothly varying dynamics, in contrast to the much noisier variable being predicted. This imbalance tends to affect the accuracy of the estimates of the model parameters and the validity of inferences about them when one uses standard methods that do not explicitly recognize this and related complications. A growing literature has therefore emerged, introducing novel techniques specifically designed to produce accurate inferences in such environments. The frequent use of predictive regressions in applied work has also led practitioners to question the validity of viewing predictability within a linear setting that ignores the possibility that predictability may occasionally be switched off. This in turn has generated a new stream of research aimed at introducing regime-specific behavior within predictive regressions in order to explicitly capture phenomena such as episodic predictability.
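The canonical environment described above can be written, in standard notation not taken verbatim from the article, as

```latex
\begin{align*}
y_{t+1} &= \alpha + \beta\, x_t + u_{t+1},
  && \text{noisy variable being predicted (e.g., returns),} \\
x_{t+1} &= \mu + \rho\, x_t + v_{t+1}, \qquad \rho \approx 1,
  && \text{highly persistent, smoothly varying predictor,}
\end{align*}
```

where the combination of near-unit-root persistence in $x_t$ and correlation between $u_{t+1}$ and $v_{t+1}$ biases the least-squares estimate of $\beta$ and distorts standard $t$-tests, which is the imbalance the newer inference techniques are designed to handle. The regime-switching extensions mentioned above let $\beta$ differ across regimes (for example, via a threshold or Markov-switching mechanism), so that predictability can be present in some episodes and absent in others.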