Jesús Gonzalo and Jean-Yves Pitarakis
Predictive regressions are a widely used econometric framework for assessing the predictability of economic and financial variables using past values of one or more predictors. The applications considered by practitioners often involve predictors with highly persistent, smoothly varying dynamics, in contrast to the much noisier nature of the variable being predicted. This imbalance tends to affect the accuracy of the estimates of the model parameters and the validity of inferences about them when one uses standard methods that do not explicitly recognize this and related complications. A growing literature has ensued, aimed at introducing novel techniques specifically designed to produce accurate inferences in such environments. The frequent use of predictive regressions in applied work has also led practitioners to question the validity of viewing predictability within a linear setting that ignores the possibility that predictability may occasionally be switched off. This in turn has generated a new stream of research aiming to introduce regime-specific behavior within predictive regressions in order to explicitly capture phenomena such as episodic predictability.
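As an illustration, a canonical predictive regression pairs a noisy dependent variable with a persistent predictor (the notation here is generic, not taken from the article):

```latex
% Canonical predictive regression with a persistent predictor
y_{t+1} = \alpha + \beta x_t + u_{t+1}, \qquad
x_t = \mu + \rho x_{t-1} + v_t, \qquad \rho \approx 1 .
```

When $\rho$ is close to one and the innovations $u$ and $v$ are correlated, the usual t-test on $\beta$ is size-distorted, which is the inference problem the literature described above addresses.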
James Lake and Pravin Krishna
In recent decades, there has been a dramatic proliferation of preferential trade agreements (PTAs) between countries that, while legal, contradict the non-discrimination principle of the world trade system. This raises various issues, both theoretical and empirical, regarding the evolution of trade policy within the world trade system and the welfare implications for PTA members and non-members. The survey starts with the Kemp-Wan-Ohyama and Panagariya-Krishna analyses in the literature that theoretically show PTAs can always be constructed so that they (weakly) increase the welfare of members and non-members. Considerable attention is then devoted to recent developments on the interaction between PTAs and multilateral trade liberalization, focusing on two key incentives: an “exclusion incentive” of PTA members and a “free riding incentive” of PTA non-members. While the baseline presumption one should have in mind is that these incentives lead PTAs to inhibit the ultimate degree of global trade liberalization, this presumption can be overturned when dynamic considerations are taken into account or when countries can negotiate the degree of multilateral liberalization rather than facing a binary choice over global free trade. Promising areas for pushing this theoretical literature forward include the growing use of quantitative trade models, incorporating rules of origin and global value chains, modeling the issues surrounding “mega-regional” agreements, and modeling the possibility of exit from PTAs. Empirical evidence in the literature is mixed regarding whether PTAs lead to trade diversion or trade creation, whether PTAs have significant adverse effects on non-member terms-of-trade, whether PTAs lead members to lower external tariffs on non-members, and the role of PTAs in facilitating deep integration among members.
Payment systems based on fixed prices have become the dominant model to finance hospitals across OECD countries. In the early 1980s, Medicare in the United States introduced the Diagnosis Related Groups (DRG) system. The idea was that hospitals should be paid a fixed price for treating a patient within a given diagnosis or treatment. The system then spread to European countries (e.g., France, Germany, Italy, Norway, Spain, the United Kingdom) and other high-income countries (e.g., Canada, Australia). The change in payment system was motivated by concerns over rapid health expenditure growth, and replaced financing arrangements based on reimbursing costs (e.g., in the United States) or fixed annual budgets (e.g., in the United Kingdom).
A more recent policy development is the introduction of pay-for-performance (P4P) schemes, which, in most cases, pay directly for higher quality. This is also a form of regulated price payment, but the unit of payment is a (process or outcome) measure of quality, as opposed to activity (that is, admitting a patient with a given diagnosis or treatment).
Fixed price payment systems, either of the DRG type or the P4P type, affect hospital incentives to provide quality, contain costs, and treat the right patients (allocative efficiency). Quality and efficiency are ubiquitous policy goals across a range of countries.
Fixed price regulation induces providers to contain costs and, under certain conditions (e.g., excess demand), offers some incentive to sustain quality. But payment systems in the health sector are complex. Since their inception, DRG systems have been continuously refined. From the initial (around) 500 tariffs, many DRG codes have been split into two or more finer ones to reflect heterogeneity in costs within each subgroup. In turn, this may give incentives to provide excessively intensive treatment or to code patients under more remunerative tariffs, a practice known as upcoding. Fixed prices also make it financially unprofitable to treat high-cost patients. This is particularly problematic when the patients with the highest costs also have the largest benefits from treatment. Hospitals also differ systematically in costs and other dimensions, and some of these differences are external and beyond their control (e.g., higher costs of living, land, or capital). Price regulation can be put in place to address such differences.
The development of information technology has allowed the construction of a plethora of quality indicators, mostly process measures of quality and in some cases health outcomes. These have been used both for public reporting, to help patients choose providers, and for incentive schemes that directly pay for quality. P4P schemes are attractive but raise new issues: for example, they might divert provider attention toward the rewarded measures, so that unincentivized dimensions of quality suffer as a result.
Pharmaceutical expenditure accounts for approximately 20% of healthcare expenditure across the Organisation for Economic Co-operation and Development (OECD) countries. Pharmaceutical products are regulated in all major global markets, primarily to ensure product quality but also to regulate the prices reimbursed by insurance companies and the central purchasing authorities that dominate this sector. Price regulation is justified because patent protection, which acts as an incentive to invest in research and development (R&D) given the difficulties of appropriating the returns to such activity, creates monopoly rights for suppliers. Price regulation itself, however, reduces the ability of producers to recoup the substantial R&D investment costs incurred. Traditional price regulation through Ramsey pricing and yardstick competition is not efficient given the distortionary impact of insurance holdings, which are extensive in this sector, and the inherent uncertainties that characterize R&D activity. A range of other pricing regulations aimed at establishing pharmaceutical reimbursement that covers both dynamic efficiency (tied to R&D incentives) and static efficiency (tied to reducing monopoly rents) has been suggested. These range from cost-plus pricing, to internal and external reference pricing, rate-of-return pricing and, most recently, value-based (essential health benefit maximization) pricing. Reimbursed prices reflecting value-based pricing are, in some countries, associated with clinical treatment guidelines and cost-effectiveness analysis. Some countries are also requiring or allowing post-launch price regulation through a range of patient access agreements based on predefined population health targets and/or financial incentives.
There is no simple, single solution to the determination of dynamic and static efficiency in this sector given the uncertainty associated with innovation, the large monopoly interests in the area, the distortionary impact of health insurance and the informational asymmetries that exist across providers and purchasers.
Kamal Saggi and Olena Ivus
Longstanding international frictions over uneven levels of protection granted to intellectual property rights (IPR) in different parts of the world culminated in 1995 in the form of the Agreement on Trade Related Aspects of Intellectual Property Rights (TRIPS)—a multilateral trade agreement that all member countries of the World Trade Organization (WTO) are obligated to follow. This landmark agreement was controversial from the start since it required countries with dramatically different economic and technological capabilities to abide by essentially the same rules and regulations with respect to IPRs, with some temporary leeway granted to developing and least developed countries. As one might expect, developing countries objected to the agreement on philosophical and practical grounds while developed countries, especially the United States, championed it strongly.
Over the years, a vast and rich economics literature has emerged that helps understand this international divide. More specifically, several fundamental issues related to the protection of IPRs in the global economy have been addressed: are IPRs trade-related? Do the incentives for patent protection of an open economy differ from those of a closed one and, if so, why? What is the rationale for international coordination over national patent policies? Why do developed and developing countries have such radically different views regarding the protection of IPRs? What is the level of empirical support underlying the major arguments for and against the TRIPS-mandated strengthening of IPRs in the world economy? Can the core obligations of the TRIPS Agreement as well as the flexibilities it contains be justified on the basis of economic logic? We discuss the key conclusions that can be drawn from decades of rigorous theoretical and empirical research and also offer some suggestions for future work.
The concept of a soft budget constraint describes a situation in which a decision-maker finds it impossible to hold an agent to a fixed budget. In healthcare, it may refer to a (nonprofit) hospital that overspends, or to a lower level of government that does not balance its accounts. The existence of a soft budget constraint may represent an optimal policy from the regulator's point of view only in specific settings. In general, its presence may allow for strategic behavior that changes considerably its nature and its desirability. In this article, the soft budget constraint is analyzed along two lines: from a market perspective and from a fiscal federalism perspective.
The creation of internal markets for healthcare has led hospitals with different objectives and constraints to compete with one another. The literature does not agree on the effects of competition on healthcare or on which types of organization should compete. Public hospitals are often seen as less efficient providers, but they are also intrinsically motivated and/or altruistic. Competition on quality in a market where costs are sunk and competitors have asymmetric objectives may produce regulatory failures; for this reason, it might be optimal to apply soft budget constraint rules to public hospitals even at the risk of perverse effects. Several authors have attempted to estimate the presence of soft budget constraints, showing that they derive from different strategic behaviors and lead to quite different outcomes.
The reforms that have reshaped public healthcare systems across Europe have often been accompanied by a process of devolution, and in some countries by widespread soft budget constraint policies. Medicaid expenditure in the United States is becoming a serious concern for the federal government, and the evidence from the states is not reassuring. Several explanations have been proposed: (a) local governments may use spillovers to induce neighbors to pay for their local public goods; (b) size matters: if the local authority is sufficiently big, the center will bail it out; and (c) equalization grants and fiscal competition may be responsible for the rise of soft budget constraint policies. Soft budget policies may also derive from strategic agreements among lower tiers, or arise as a consequence of fiscal imbalances. In this context, the use of the soft budget constraint as a policy instrument may not be desirable.
Menzie D. Chinn
The idea that prices and exchange rates adjust so as to equalize the common-currency price of identical bundles of goods—purchasing power parity (PPP)—is a topic of central importance in international finance. If PPP holds continuously, then nominal exchange rate changes do not influence trade flows. If PPP does not hold in the short run, but does in the long run, then monetary factors can affect the real exchange rate only temporarily. Substantial evidence has accumulated—with the advent of new statistical tests, alternative data sets, and longer spans of data—that purchasing power parity does not typically hold in the short run. One reason PPP fails in the short run may be sticky prices, in combination with other factors such as trade barriers. The evidence is mixed for the longer run. Variations in the real exchange rate over the longer run can also be driven by shocks to demand arising from changes in government spending, the terms of trade, and wealth and debt stocks. At time horizons of decades, trend movements in the real exchange rate—that is, systematically trending deviations from PPP—could be due to the presence of nontraded goods, combined with real factors such as differentials in productivity growth. The well-known positive association between price levels and income levels—also known as the “Penn Effect”—is consistent with this channel. Whether PPP holds then depends on the time period, the time horizon, and the currencies examined.
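In standard notation (generic, not specific to this article), with $S$ the nominal exchange rate and $P$, $P^{\ast}$ the home and foreign price levels, absolute PPP and the log real exchange rate can be written as:

```latex
% Absolute PPP and the (log) real exchange rate
S_t = \frac{P_t}{P_t^{\ast}}, \qquad
q_t \equiv s_t + p_t^{\ast} - p_t ,
```

where lowercase letters denote logarithms; long-run PPP amounts to $q_t$ being stationary, while short-run deviations show up as persistent movements in $q_t$.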
The Hou–Xue–Zhang q-factor model says that the expected return of an asset in excess of the risk-free rate is described by its sensitivities to the market factor, a size factor, an investment factor, and a return on equity (ROE) factor. Empirically, the q-factor model shows strong explanatory power and largely summarizes the cross-section of average stock returns. Most important, it fully subsumes the Fama–French 6-factor model in head-to-head spanning tests.
The q-factor model is an empirical implementation of the investment-based capital asset pricing model (the Investment CAPM). The basic philosophy is to price risky assets from the perspective of their suppliers (firms), as opposed to their buyers (investors). Mathematically, the investment CAPM is a restatement of the net present value (NPV) rule in corporate finance. Intuitively, high investment relative to low expected profitability must imply low costs of capital, and low investment relative to high expected profitability must imply high costs of capital. In a multiperiod framework, if investment is high next period, the present value of cash flows from next period onward must be high. Consisting mostly of this next period present value, the benefits to investment this period must also be high. As such, high investment next period relative to current investment (high expected investment growth) must imply high costs of capital (to keep current investment low).
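The intuition above can be sketched in a stylized one-period version of the investment CAPM with quadratic adjustment costs (the notation is illustrative, not drawn from the article):

```latex
% One-period investment CAPM: the NPV rule restated
E_t\!\left[r_{t+1}\right] \;=\;
\frac{E_t\!\left[\Pi_{t+1}\right]}{1 + a\,(I_t/A_t)} ,
```

where $\Pi$ is profitability, $I_t/A_t$ the investment-to-assets ratio, and $a$ an adjustment cost parameter: high investment relative to expected profitability implies a low cost of capital, and low investment relative to high expected profitability implies a high cost of capital.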
As a disruptive innovation, the investment CAPM has broad-ranging implications for academic finance and asset management practice. First, the consumption CAPM, of which the classic Sharpe–Lintner CAPM is a special case, is conceptually incomplete. The crux is that it blindly focuses on the demand of risky assets, while abstracting from the supply altogether. Alas, anomalies are primarily relations between firm characteristics and expected returns. By focusing on the supply, the investment CAPM is the missing piece of equilibrium asset pricing. Second, the investment CAPM retains efficient markets, with cross-sectionally varying expected returns, depending on firms’ investment, profitability, and expected growth. As such, capital markets follow standard economic principles, in sharp contrast to the teachings of behavioral finance. Finally, the investment CAPM validates Graham and Dodd’s security analysis on equilibrium grounds, within efficient markets.
Joanna Coast and Manuela De Allegri
Qualitative methods are being used increasingly by health economists, but most health economists are not trained in these methods and may need to develop expertise in this area. This article discusses important issues of ontology, epistemology, and research design, before addressing the key issues of sampling, data collection, and data analysis in qualitative research. Understanding differences in the purpose of sampling between qualitative and quantitative methods is important for health economists, and the key notion of purposeful sampling is described. The section on data collection covers in-depth and semistructured interviews, focus-group discussions, and observation. Methods for data analysis are then discussed, with a particular focus on the use of inductive methods that are appropriate for economic purposes. Presentation and publication are briefly considered, before three areas that have seen substantial use of qualitative methods are explored: attribute development for discrete choice experiments, priority-setting research, and health financing initiatives.
Matteo Lippi Bruni, Irene Mammi, and Rossella Verzulli
In developed countries, the role of public authorities as financing bodies and regulators of the long-term care sector is pervasive and calls for well-planned and informed policy actions. Poor quality in nursing homes has been a recurrent concern at least since the 1980s and has triggered a heated policy and scholarly debate. The economic literature on nursing home quality has thoroughly investigated the impact of regulatory interventions and of market characteristics on an array of input-, process-, and outcome-based quality measures. Most existing studies refer to the U.S. context, even though important insights can be drawn also from the smaller set of works that covers European countries.
The major contribution of health economics to the empirical analysis of the nursing home industry has been the introduction of important methodological advances that apply rigorous policy evaluation techniques in order to properly identify the causal effects of interest. In addition, the increased availability of rich datasets covering either process or outcome measures has made it possible to investigate changes in nursing home quality while properly accounting for its multidimensional features.
The use of up-to-date econometric methods that, in most cases, exploit policy shocks and longitudinal data has enabled researchers to achieve causal identification and an accurate quantification of the impact of a wide range of policy initiatives, including the introduction of nurse staffing thresholds, price regulation, and public reporting of quality indicators. This has helped to counteract part of the contradictory evidence highlighted by the strand of work based on more descriptive evidence. Possible lines for future research include further exploration of the consequences of policy interventions in terms of equity and access to nursing home care.
Samuel Berlinski and Marcos Vera-Hernández
Socioeconomic gradients in health, cognitive, and socioemotional skills start at a very early age. Well-designed policy interventions in the early years can have a great impact in closing these gaps. Advancing this line of research requires a thorough understanding of how households make human capital investment decisions on behalf of their children, what their information set is, and how the market, the environment, and government policies affect them. A framework for this research should describe how children’s skills evolve and how parents make choices about the inputs that shape child development, as well as the rationale for government interventions, including both efficiency and equity considerations.
Ijeoma Peace Edoka
Low- and middle-income countries (LMICs) bear a disproportionately high burden of diseases in comparison to high-income countries, partly due to inequalities in the distribution of resources for health. Recent increases in health spending in these countries demonstrate a commitment to tackling the high burden of disease. However, evidence on the extent to which increased spending on health translates to better population health outcomes has been inconclusive. Some studies have reported improvements in population health with an increase in health spending whereas others have either found no effect or very limited effect to justify increased financial allocations to health. Differences across studies may be explained by differences in approaches adopted in estimating returns to health spending in LMICs.
Iñigo Hernandez-Arenaz and Nagore Iriberri
Gender differences, both in entering negotiations and when negotiating, have been shown to exist: men are usually more likely than women to enter into negotiation, and when negotiating they obtain better deals than women do. These gender differences help to explain the gender gap in wages, as starting salaries and wage increases or promotions throughout an individual’s career are often the result of bilateral negotiations.
This article presents an overview of the literature on gender differences in negotiation. The article is organized in five main parts. The first section reviews the findings with respect to gender differences in the likelihood of engaging in a negotiation, that is, in deciding to start a negotiation. The second section discusses research on gender differences during negotiations, that is, while bargaining. The third section looks at the relevant psychological literature and discusses meta-analyses, looking for factors that trigger or moderate gender differences in negotiation, such as structural ambiguity and cultural traits. The fourth section presents a brief overview of research on gender differences in non-cognitive traits, such as risk and social preferences, confidence, and taste for competition, and their role in explaining gender differences in bargaining. Finally, the fifth section discusses some policy implications.
An understanding of when gender differences are likely to arise on entering into negotiations and when negotiating will enable policies to be created that can mitigate current gender differences in negotiations. This is an active, promising research line.
Francisco H. G. Ferreira, Emanuela Galasso, and Mario Negre
“Shared prosperity” is a common phrase in current development policy discourse. Its most widely used operational definition—the growth rate in the average income of the poorest 40% of a country’s population—is a truncated measure of change in social welfare. A related concept, the shared prosperity premium—the difference between the growth rate of the mean for the bottom 40% and the growth rate in the overall mean—is similarly analogous to a measure of change in inequality. This article reviews the relationship between these concepts and the more established ideas of social welfare, poverty, inequality, and mobility.
Household survey data can be used to shed light on recent progress in terms of this indicator globally. During 2008–2013, mean incomes for the poorest 40% rose in 60 of the 83 countries for which we have data. In 49 of them, accounting for 65% of the sampled population, it rose faster than overall average incomes, thus narrowing the income gap.
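The bottom-40% growth rate and the shared prosperity premium described above can be computed directly from household income data. The sketch below uses hypothetical incomes for illustration; real applications use weighted survey data and comparable welfare aggregates.

```python
def bottom40_mean(incomes):
    """Mean income of the poorest 40% of households (unweighted)."""
    ranked = sorted(incomes)
    k = max(1, int(0.4 * len(ranked)))
    return sum(ranked[:k]) / k

def shared_prosperity(base, final):
    """Growth of the bottom-40% mean, overall mean growth, and the
    shared prosperity premium (their difference)."""
    g_b40 = bottom40_mean(final) / bottom40_mean(base) - 1
    g_all = (sum(final) / len(final)) / (sum(base) / len(base)) - 1
    return g_b40, g_all, g_b40 - g_all

# Hypothetical two-period incomes for ten households
base = [10, 12, 15, 20, 25, 30, 40, 55, 80, 120]
final = [12, 14, 17, 22, 27, 31, 41, 56, 82, 121]
g_b40, g_all, premium = shared_prosperity(base, final)
```

Here the bottom-40% mean grows faster than the overall mean, so the premium is positive and the income gap narrows, mirroring the pattern the article reports for 49 of the 83 countries.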
In the policy space, there are examples both of “pre-distribution” policies (which promote human capital investment among the poor) and “re-distribution” policies (such as targeted safety nets), which when well-designed have a sound empirical track record of both raising productivity and improving well-being among the poor.
Ana Balsa and Carlos Díaz
Health behaviors are a major source of morbidity and mortality in the developed and much of the developing world. The social nature of many of these behaviors, such as eating or using alcohol, and the normative connotations that accompany others (e.g., sexual behavior, illegal drug use) make them quite susceptible to peer influence. This article assesses the role of social interactions in the determination of health behaviors. It highlights the methodological progress of the past two decades in addressing the multiple challenges inherent in the estimation of peer effects, and notes methodological issues that still need to be confronted. A comprehensive review of the economics empirical literature—mostly for developed countries—shows strong and robust peer effects across a wide set of health behaviors, including alcohol use, body weight, food intake, body fitness, teen pregnancy, and sexual behaviors. The evidence is mixed when assessing tobacco use, illicit drug use, and mental health. The article also explores the as yet incipient literature on the mechanisms behind peer influence and on new developments in the study of social networks that are shedding light on the dynamics of social influence. There is suggestive evidence that social norms and social conformism lie behind peer effects in substance use, obesity, and teen pregnancy, while social learning has been pointed out as a channel behind fertility decisions, mental health utilization, and uptake of medication. Future research needs to deepen the understanding of the mechanisms behind peer influence in health behaviors in order to design more targeted welfare-enhancing policies.
Vivian Zhanwei Yue and Bin Wei
This article reviews the literature on sovereign debt, that is, debt issued by a national government. The defining characteristic of sovereign debt is the limited set of mechanisms for enforcement: because a sovereign government does not face legal consequences of default, it repays in order to avoid default penalties such as reputation loss or economic cost. Theoretical and quantitative studies on sovereign debt have investigated the causes and impact of sovereign default and produced analysis of policy relevance. This article reviews the theories that quantitatively account for key empirical facts about sovereign debt. These studies enable researchers and policy makers to better understand sovereign debt crises.
Many large cities are found at locations with certain geographic and historical advantages, so-called first nature advantages. Yet those exogenous locational features may not be the most potent forces governing the spatial pattern and the size variation of cities. In particular, the population size, spacing, and industrial composition of cities exhibit simple, persistent, and monotonic relationships that are often approximated by power laws. The extant theories of economic agglomeration explain some aspects of this regularity as a consequence of interactions between endogenous agglomeration and dispersion forces, the so-called second nature advantages.
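The power-law regularity in city sizes is commonly summarized by the rank-size (Zipf) rule: the slope of log size on log rank is close to -1. A minimal sketch, using constructed rather than real city data:

```python
import math

def zipf_slope(populations):
    """OLS slope of log(size) on log(rank); a slope near -1 indicates
    the classic rank-size (Zipf) rule for city sizes."""
    sizes = sorted(populations, reverse=True)
    xs = [math.log(r) for r in range(1, len(sizes) + 1)]
    ys = [math.log(s) for s in sizes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# Constructed sizes following size(rank) = 10000 / rank exactly
exact_zipf = [10000 / r for r in range(1, 11)]
slope = zipf_slope(exact_zipf)  # -1.0 by construction
```

Estimates of this slope for actual national city-size distributions are the empirical counterpart of the power laws the theories discussed here seek to explain.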
To obtain results about explicit spatial patterns of cities, a model needs to depart from the most popular two-region and systems-of-cities frameworks in urban and regional economics, in which variation in interregional distance is assumed away in order to secure analytical tractability. This is one of the major reasons that only a few formal models have been proposed in this literature. To draw implications about the spatial patterns and sizes of cities from the extant theories, the behavior of many-region extensions of the existing two-region models is discussed in depth.
The mechanisms that link the spatial pattern of cities to the diversity in size, as well as the diversity in industrial composition among cities, are also discussed in detail, though the relevant theories are much less developed. For each aspect of the interdependence among the spatial patterns, size distribution, and industrial composition of cities, concrete facts are drawn from Japanese data to guide the discussion.
Dynamic stochastic general equilibrium (DSGE) modeling can be structured around six key criticisms leveled at the approach. The first is fundamental and common to macroeconomics and microeconomics alike—namely, problems with rationality and expected utility maximization (EUM). The second is that DSGE models examine fluctuations about an exogenous balanced growth path, leaving no role for endogenous growth. The third consists of a number of concerns associated with estimation. The fourth is another fundamental problem with any micro-founded macro-model—that of heterogeneity and aggregation. The fifth and sixth criticisms concern the rudimentary nature of earlier models, which lacked unemployment and a banking sector.
A widely used and referenced example of DSGE modeling is the Smets-Wouters (SW) medium-sized New Keynesian (NK) model. The model features rational expectations and, in an environment of uncertainty, EUM by households and firms. Preferences are consistent with a nonstochastic exogenous balanced growth path about which the model is solved. The model can be estimated by a Bayesian systems estimation method. It involves four types of representative agents (households, final goods producers, trade unions, and intermediate goods producers). The latter two produce differentiated labor and goods, respectively, and, in each period of time, consist of a proportion locked into existing contracts and the rest that can reoptimize. There is underemployment but no unemployment. Finally, an arbitrage condition imposed on the returns on capital and bonds rules out financial frictions. Thus the model, which has become the gold standard for DSGE macro-modeling, features all six areas of concern. The model can be used as a platform to examine how the current generation of DSGE models has developed in these six dimensions. This modeling framework has also been used for macroeconomic policy design.
Anna Dreber and Magnus Johannesson
The recent “replication crisis” in the social sciences has led to increased attention to what statistically significant results entail. There are many reasons why false positive results may be published in the scientific literature, such as low statistical power and “researcher degrees of freedom” in the analysis (where researchers, when testing a hypothesis, more or less actively seek to obtain results with p < .05). The results from three large replication projects in psychology, experimental economics, and the social sciences are discussed, with most of the focus on the last project, where the statistical power in the replications was substantially higher than in the other projects. The results suggest that a substantial share of published results in top journals do not replicate. While several replication indicators have been proposed, the main indicator of whether a result replicates is whether the replication study, using the same statistical test, finds a statistically significant effect (p < .05 in a two-sided test). For the project with very high statistical power, the various replication indicators agree to a larger extent than for the other replication projects, most likely due to the higher statistical power. While the replications discussed are mainly experiments, there is no reason to believe that replicability would be higher in other parts of economics and finance—if anything the opposite, due to more researcher degrees of freedom. There is also a discussion of solutions to the often-observed low replicability, including lowering the p-value threshold for statistical significance to .005 and increasing the use of preanalysis plans and registered reports for new studies as well as replications, followed by a discussion of measures of peer beliefs.
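The link between statistical power and the share of false positives among significant findings follows from a standard textbook identity. The sketch below uses illustrative numbers (not figures from the replication projects) to show how low power, and a looser significance threshold, inflate that share:

```python
def false_positive_share(prior_true, power, alpha=0.05):
    """Expected share of statistically significant findings that are
    false positives, given the prior probability that a tested
    hypothesis is true, the power of the test, and the threshold."""
    true_pos = prior_true * power          # true effects detected
    false_pos = (1 - prior_true) * alpha   # nulls crossing the threshold
    return false_pos / (true_pos + false_pos)

# Illustration with half of tested hypotheses true:
low_power = false_positive_share(prior_true=0.5, power=0.2)    # 0.20
high_power = false_positive_share(prior_true=0.5, power=0.8)   # ~0.059
stricter = false_positive_share(prior_true=0.5, power=0.8, alpha=0.005)
```

Under these assumed numbers, raising power from 0.2 to 0.8 cuts the false positive share from 20% to about 6%, and lowering the threshold to .005 on top of that cuts it below 1%, which is the logic behind the proposals mentioned above.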
Recent attempts to understand to what extent the academic community is aware of the limited reproducibility and can predict replication outcomes using prediction markets and surveys suggest that peer beliefs may be viewed as an additional reproducibility indicator.
Richard C. van Kleef, Thomas G. McGuire, Frederik T. Schut, and Wynand P. M. M. van de Ven
Many countries rely on social health insurance supplied by competing insurers to enhance fairness and efficiency in healthcare financing. Premiums in these settings are typically community rated per health plan. Though community rating can help achieve fairness objectives, it also leads to a variety of problems due to risk selection, that is, actions by consumers and insurers to exploit “unpriced risk” heterogeneity. From the viewpoint of a consumer, unpriced risk refers to the gap between her expected spending under a health plan and the net premium for that plan. Heterogeneity in unpriced risk can lead to selection by consumers in and out of insurance and between high- and low-value plans. These forms of risk selection can result in upward premium spirals, inefficient take-up of basic coverage, and inefficient sorting of consumers between high- and low-value plans.
From the viewpoint of an insurer, unpriced risk refers to the gap between his expected costs under a certain contract and the revenues he receives for that contract. Heterogeneity in unpriced risk incentivizes insurers to alter their plan offerings in order to attract profitable people, resulting in inefficient plan design and possibly in the unavailability of high-quality care. Moreover, insurers have incentives to target profitable people via marketing tools and customer service, which—from a societal perspective—can be considered a waste of resources.
Common tools to counteract selection problems are risk equalization, risk sharing, and risk rating of premiums. All three strategies reduce unpriced risk heterogeneity faced by insurers and thus diminish selection actions by insurers such as the altering of plan offerings. Risk rating of premiums also reduces unpriced risk heterogeneity faced by consumers and thus mitigates selection in and out of insurance and between high- and low-value plans. All three strategies, however, come with trade-offs. A smart blend takes advantage of the strengths, while reducing the weaknesses of each strategy. The optimal payment system configuration will depend on how a regulator weighs fairness and efficiency and on how the healthcare system is organized.