
Article

Longstanding international frictions over uneven levels of protection granted to intellectual property rights (IPR) in different parts of the world culminated in 1995 in the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS)—a multilateral trade agreement that all member countries of the World Trade Organization (WTO) are obligated to follow. This landmark agreement was controversial from the start, since it required countries with dramatically different economic and technological capabilities to abide by essentially the same rules and regulations with respect to IPRs, with some temporary leeway granted to developing and least developed countries. As one might expect, developing countries objected to the agreement on philosophical and practical grounds, while developed countries, especially the United States, championed it strongly. Over the years, a vast and rich economics literature has emerged that helps explain this international divide. More specifically, several fundamental issues related to the protection of IPRs in the global economy have been addressed: Are IPRs trade-related? Do the incentives for patent protection of an open economy differ from those of a closed one and, if so, why? What is the rationale for international coordination over national patent policies? Why do developed and developing countries have such radically different views regarding the protection of IPRs? What is the level of empirical support underlying the major arguments for and against the TRIPS-mandated strengthening of IPRs in the world economy? Can the core obligations of the TRIPS Agreement, as well as the flexibilities it contains, be justified on the basis of economic logic? We discuss the key conclusions that can be drawn from decades of rigorous theoretical and empirical research and also offer some suggestions for future work.

Article

Rosella Levaggi

The concept of a soft budget constraint describes a situation where a decision-maker finds it impossible to hold an agent to a fixed budget. In healthcare it may refer to a (nonprofit) hospital that overspends, or to a lower level of government that does not balance its accounts. The existence of a soft budget constraint may represent an optimal policy from the regulator's point of view only in specific settings. In general, its presence may allow for strategic behavior that considerably changes its nature and its desirability. In this article, the soft budget constraint is analyzed along two lines: from a market perspective and from a fiscal federalism perspective. The creation of an internal market for healthcare has made hospitals with different objectives and constraints compete with one another. The literature does not agree on the effects of competition on healthcare or on which types of organizations should compete. Public hospitals are often seen as less efficient providers, but they are also intrinsically motivated and/or altruistic. Competition on quality in a market where costs are sunk and competitors have asymmetric objectives may produce regulatory failures; for this reason, it might be optimal to apply soft budget constraint rules to public hospitals even at the risk of perverse effects. Several authors have attempted to estimate the presence of soft budget constraints, showing that they derive from different strategic behaviors and lead to quite different outcomes. The reforms that have reshaped public healthcare systems across Europe have often been accompanied by a process of devolution, which in some countries has come with widespread soft budget constraint policies. Medicaid expenditure in the United States is becoming a serious concern for the Federal Government, and the evidence from individual states is not reassuring.
Several explanations have been proposed: (a) local governments may use spillovers to induce neighbors to pay for their local public goods; (b) size matters: if the local authority is sufficiently big, the center will bail it out; (c) equalization grants and fiscal competition may be responsible for the rise of soft budget constraint policies. Soft budget policies may also derive from strategic agreements among lower tiers, or arise as a consequence of fiscal imbalances. In this context, using the soft budget constraint as a policy instrument may not be desirable.

Article

The idea that prices and exchange rates adjust so as to equalize the common-currency price of identical bundles of goods—purchasing power parity (PPP)—is a topic of central importance in international finance. If PPP holds continuously, then nominal exchange rate changes do not influence trade flows. If PPP does not hold in the short run, but does in the long run, then monetary factors can affect the real exchange rate only temporarily. Substantial evidence has accumulated—with the advent of new statistical tests, alternative data sets, and longer spans of data—that purchasing power parity does not typically hold in the short run. One reason PPP fails to hold in the short run may be sticky prices, in combination with other factors such as trade barriers. The evidence is mixed for the longer run. Variations in the real exchange rate in the longer run can also be driven by shocks to demand arising from changes in government spending, the terms of trade, and wealth and debt stocks. At time horizons of decades, trend movements in the real exchange rate—that is, systematically trending deviations from PPP—could be due to the presence of nontraded goods, combined with real factors such as differentials in productivity growth. The well-known positive association between the price level and income levels—also known as the “Penn Effect”—is consistent with this channel. Whether PPP holds thus depends on the time period, the time horizon, and the currencies examined.
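The definitions underlying this abstract can be stated compactly. A minimal sketch in standard textbook notation (lowercase letters denote logarithms; the symbols are conventional, not taken from the article itself):

```latex
% Absolute PPP: the common-currency price of the bundle is equalized.
%   s_t   : log nominal exchange rate (home-currency price of foreign currency)
%   p_t   : log home price level;   p_t^* : log foreign price level
s_t = p_t - p_t^*
% Real exchange rate = deviation from PPP:
q_t \equiv s_t - p_t + p_t^*
% PPP holds continuously iff q_t = 0 for all t.
% Long-run PPP holds iff q_t is stationary (mean-reverting), so that
% monetary shocks move q_t only temporarily.
```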

Article

The Hou–Xue–Zhang q-factor model says that the expected return of an asset in excess of the risk-free rate is described by its sensitivities to the market factor, a size factor, an investment factor, and a return on equity (ROE) factor. Empirically, the q-factor model shows strong explanatory power and largely summarizes the cross-section of average stock returns. Most important, it fully subsumes the Fama–French 6-factor model in head-to-head spanning tests. The q-factor model is an empirical implementation of the investment-based capital asset pricing model (the Investment CAPM). The basic philosophy is to price risky assets from the perspective of their suppliers (firms), as opposed to their buyers (investors). Mathematically, the investment CAPM is a restatement of the net present value (NPV) rule in corporate finance. Intuitively, high investment relative to low expected profitability must imply low costs of capital, and low investment relative to high expected profitability must imply high costs of capital. In a multiperiod framework, if investment is high next period, the present value of cash flows from next period onward must be high. Consisting mostly of this next period present value, the benefits to investment this period must also be high. As such, high investment next period relative to current investment (high expected investment growth) must imply high costs of capital (to keep current investment low). As a disruptive innovation, the investment CAPM has broad-ranging implications for academic finance and asset management practice. First, the consumption CAPM, of which the classic Sharpe–Lintner CAPM is a special case, is conceptually incomplete. The crux is that it blindly focuses on the demand of risky assets, while abstracting from the supply altogether. Alas, anomalies are primarily relations between firm characteristics and expected returns. By focusing on the supply, the investment CAPM is the missing piece of equilibrium asset pricing. 
Second, the investment CAPM retains efficient markets, with cross-sectionally varying expected returns, depending on firms’ investment, profitability, and expected growth. As such, capital markets follow standard economic principles, in sharp contrast to the teachings of behavioral finance. Finally, the investment CAPM validates Graham and Dodd’s security analysis on equilibrium grounds, within efficient markets.
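The pricing relation described above can be written out explicitly. A sketch in generic notation (the symbols and the quadratic-adjustment-cost form are standard conventions assumed here for illustration, not quoted from the article):

```latex
% q-factor model: expected return in excess of the risk-free rate
E[r^i] - r^f
  = \beta^i_{MKT}\,E[f_{MKT}]   % market factor
  + \beta^i_{ME}\,E[f_{ME}]     % size factor
  + \beta^i_{I/A}\,E[f_{I/A}]   % investment factor
  + \beta^i_{ROE}\,E[f_{ROE}]   % return-on-equity factor
% Investment CAPM intuition (one-period sketch, quadratic adjustment costs):
E[r^i] = \frac{E[\Pi^i]}{1 + a\,(I^i/K^i)}
% High investment I/K relative to expected profitability \Pi implies a low
% cost of capital, and low investment relative to high profitability a high one.
```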

Article

Joanna Coast and Manuela De Allegri

Qualitative methods are being used increasingly by health economists, but most health economists are not trained in these methods and may need to develop expertise in this area. This article discusses important issues of ontology, epistemology, and research design before addressing the key issues of sampling, data collection, and data analysis in qualitative research. Understanding differences in the purpose of sampling between qualitative and quantitative methods is important for health economists, and the key notion of purposeful sampling is described. The section on data collection covers in-depth and semistructured interviews, focus-group discussions, and observation. Methods for data analysis are then discussed, with a particular focus on the use of inductive methods that are appropriate for economic purposes. Presentation and publication are briefly considered, before three areas that have seen substantial use of qualitative methods are explored: attribute development for discrete choice experiments, priority-setting research, and health financing initiatives.

Article

Matteo Lippi Bruni, Irene Mammi, and Rossella Verzulli

In developed countries, the role of public authorities as financing bodies and regulators of the long-term care sector is pervasive and calls for well-planned and informed policy actions. Poor quality in nursing homes has been a recurrent concern at least since the 1980s and has triggered a heated policy and scholarly debate. The economic literature on nursing home quality has thoroughly investigated the impact of regulatory interventions and of market characteristics on an array of input-, process-, and outcome-based quality measures. Most existing studies refer to the U.S. context, even though important insights can also be drawn from the smaller set of works that covers European countries. The major contribution of health economics to the empirical analysis of the nursing home industry is the introduction of important methodological advances applying rigorous policy evaluation techniques with the purpose of properly identifying the causal effects of interest. In addition, the increased availability of rich datasets covering either process or outcome measures has made it possible to investigate changes in nursing home quality while properly accounting for its multidimensional features. The use of up-to-date econometric methods that, in most cases, exploit policy shocks and longitudinal data has enabled researchers to achieve causal identification and an accurate quantification of the impact of a wide range of policy initiatives, including the introduction of nurse staffing thresholds, price regulation, and public reporting of quality indicators. This has helped to counteract part of the contradictory evidence highlighted by the strand of works based on more descriptive evidence. Possible lines for future research can be identified in further exploration of the consequences of policy interventions in terms of equity and accessibility to nursing home care.

Article

Samuel Berlinski and Marcos Vera-Hernández

Socioeconomic gradients in health, cognitive, and socioemotional skills start at a very early age. Well-designed policy interventions in the early years can have a great impact in closing these gaps. Advancing this line of research requires a thorough understanding of how households make human capital investment decisions on behalf of their children, what their information set is, and how the market, the environment, and government policies affect them. A framework for this research should describe how children’s skills evolve and how parents make choices about the inputs that mold child development, as well as the rationale for government interventions, including both efficiency and equity considerations.

Article

Low- and middle-income countries (LMICs) bear a disproportionately high burden of disease in comparison to high-income countries, partly due to inequalities in the distribution of resources for health. Recent increases in health spending in these countries demonstrate a commitment to tackling the high burden of disease. However, evidence on the extent to which increased spending on health translates into better population health outcomes has been inconclusive. Some studies have reported improvements in population health with an increase in health spending, whereas others have found either no effect or an effect too limited to justify increased financial allocations to health. Differences across studies may be explained by differences in the approaches adopted in estimating returns to health spending in LMICs.

Article

Iñigo Hernandez-Arenaz and Nagore Iriberri

Gender differences, both in entering negotiations and when negotiating, have been shown to exist: Men are usually more likely than women to enter into negotiation, and when negotiating they obtain better deals. These gender differences help to explain the gender gap in wages, as starting salaries and wage increases or promotions throughout an individual’s career are often the result of bilateral negotiations. This article presents an overview of the literature on gender differences in negotiation. The article is organized in five main parts. The first section reviews the findings with respect to gender differences in the likelihood of engaging in a negotiation, that is, in deciding to start a negotiation. The second section discusses research on gender differences during negotiations, that is, while bargaining. The third section looks at the relevant psychological literature and discusses meta-analyses, looking for factors that trigger or moderate gender differences in negotiation, such as structural ambiguity and cultural traits. The fourth section presents a brief overview of research on gender differences in noncognitive traits, such as risk and social preferences, confidence, and taste for competition, and their role in explaining gender differences in bargaining. Finally, the fifth section discusses some policy implications. An understanding of when gender differences are likely to arise on entering into negotiations and when negotiating will enable policies to be created that can mitigate current gender differences in negotiations. This is an active, promising research line.

Article

Francisco H. G. Ferreira, Emanuela Galasso, and Mario Negre

“Shared prosperity” is a common phrase in current development policy discourse. Its most widely used operational definition—the growth rate in the average income of the poorest 40% of a country’s population—is a truncated measure of change in social welfare. A related concept, the shared prosperity premium—the difference between the growth rate of the mean for the bottom 40% and the growth rate in the overall mean—is analogous to a measure of change in inequality. This article reviews the relationship between these concepts and the more established ideas of social welfare, poverty, inequality, and mobility. Household survey data can be used to shed light on recent progress in terms of this indicator globally. During 2008–2013, mean incomes for the poorest 40% rose in 60 of the 83 countries for which we have data. In 49 of them, accounting for 65% of the sampled population, they rose faster than overall average incomes, thus narrowing the income gap. In the policy space, there are examples both of “pre-distribution” policies (which promote human capital investment among the poor) and “re-distribution” policies (such as targeted safety nets), which, when well designed, have a sound empirical track record of both raising productivity and improving well-being among the poor.
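The two operational definitions above (growth of the bottom-40% mean, and its premium over overall mean growth) are simple enough to compute directly. A minimal sketch on made-up income data; the function names and numbers are illustrative, not from the article:

```python
def bottom_40_mean(incomes):
    """Mean income of the poorest 40% of the population."""
    ranked = sorted(incomes)
    cutoff = max(1, int(len(ranked) * 0.4))  # poorest 40%, at least one person
    return sum(ranked[:cutoff]) / cutoff

def shared_prosperity(incomes_t0, incomes_t1):
    """Growth of the bottom-40% mean, and its premium over overall mean growth."""
    b40_growth = bottom_40_mean(incomes_t1) / bottom_40_mean(incomes_t0) - 1
    mean_growth = (sum(incomes_t1) / len(incomes_t1)) / (sum(incomes_t0) / len(incomes_t0)) - 1
    return b40_growth, b40_growth - mean_growth  # (shared prosperity, premium)

# Synthetic example: incomes of 10 people in two periods,
# with gains tilted toward the bottom of the distribution
t0 = [10, 12, 15, 20, 25, 30, 40, 50, 80, 100]
t1 = [12, 14, 17, 22, 27, 31, 41, 51, 82, 103]
growth, premium = shared_prosperity(t0, t1)
# premium > 0: the bottom 40% grew faster than the mean, narrowing the gap
```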

Article

Health behaviors are a major source of morbidity and mortality in the developed and much of the developing world. The social nature of many of these behaviors, such as eating or using alcohol, and the normative connotations that accompany others (e.g., sexual behavior, illegal drug use) make them quite susceptible to peer influence. This article assesses the role of social interactions in the determination of health behaviors. It highlights the methodological progress of the past two decades in addressing the multiple challenges inherent in the estimation of peer effects, and notes methodological issues that still need to be confronted. A comprehensive review of the empirical economics literature—mostly for developed countries—shows strong and robust peer effects across a wide set of health behaviors, including alcohol use, body weight, food intake, body fitness, teen pregnancy, and sexual behaviors. The evidence is mixed when assessing tobacco use, illicit drug use, and mental health. The article also explores the as yet incipient literature on the mechanisms behind peer influence and on new developments in the study of social networks that are shedding light on the dynamics of social influence. There is suggestive evidence that social norms and social conformism lie behind peer effects in substance use, obesity, and teen pregnancy, while social learning has been pointed to as a channel behind fertility decisions, mental health utilization, and uptake of medication. Future research needs to deepen the understanding of the mechanisms behind peer influence in health behaviors in order to design more targeted welfare-enhancing policies.

Article

Vivian Zhanwei Yue and Bin Wei

This article reviews the literature on sovereign debt, that is, debt issued by a national government. The defining characteristic of sovereign debt is the limited mechanisms for enforcement. Because a sovereign government does not face legal consequences of default, it repays in order to avoid default penalties such as reputation loss or economic cost. Theoretical and quantitative studies on sovereign debt have investigated the cause and impact of sovereign default and produced analysis of policy relevance. This article reviews the theories that quantitatively account for key empirical facts about sovereign debt. These studies enable researchers and policy makers to better understand sovereign debt crises.

Article

Elisa Tosetti, Rita Santos, Francesco Moscone, and Giuseppe Arbia

The spatial dimension of supply and demand factors is a very important feature of healthcare systems. Differences in health and behavior across individuals are due not only to personal characteristics but also to external forces, such as contextual factors, social interaction processes, and global health shocks. These factors are responsible for various forms of spatial patterns and correlation often observed in the data, which it is desirable to include in health econometric models. This article describes a set of exploratory techniques and econometric methods to visualize, summarize, test, and model spatial patterns of health economics phenomena, showing their scientific and policy value when addressing health economics issues characterized by a strong spatial dimension. Exploring and modeling the spatial dimension of both sides of healthcare provision may help reduce inequalities in access to healthcare services and support policymakers in the design of financially sustainable healthcare systems.
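As a concrete instance of the exploratory techniques mentioned, here is a minimal pure-Python sketch of Moran's I, a standard statistic for testing spatial correlation. The weights matrix and data are made up for illustration; real applications use contiguity or distance-based weights over actual geographic units:

```python
def morans_i(values, weights):
    """Moran's I spatial autocorrelation statistic.

    values  : list of observations, one per spatial unit
    weights : weights[i][j] = spatial weight between units i and j (0 = not neighbors)
    """
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    s0 = sum(sum(row) for row in weights)  # sum of all weights
    num = sum(weights[i][j] * dev[i] * dev[j] for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    return (n / s0) * (num / den)

# Four regions on a line; each region is a neighbor of the adjacent ones
W = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
clustered = [10, 10, 2, 2]    # similar values are neighbors -> positive I
alternating = [10, 2, 10, 2]  # dissimilar values are neighbors -> negative I
print(morans_i(clustered, W))
print(morans_i(alternating, W))
```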

Article

Many large cities are found at locations with certain geographic and historical advantages, or first nature advantages. Yet those exogenous locational features may not be the most potent forces governing the spatial pattern and the size variation of cities. In particular, the population size, spacing, and industrial composition of cities exhibit simple, persistent, and monotonic relationships that are often approximated by power laws. The extant theories of economic agglomeration explain some aspects of this regularity as a consequence of interactions between endogenous agglomeration and dispersion forces, or second nature advantages. To obtain results about explicit spatial patterns of cities, a model needs to depart from the most popular two-region and systems-of-cities frameworks in urban and regional economics, in which the variation in interregional distance is assumed away in order to secure analytical tractability. This is one of the major reasons that only a few formal models have been proposed in this literature. To draw implications about the spatial patterns and sizes of cities from the extant theories, the behavior of many-region extensions of the existing two-region models is discussed in depth. The mechanisms that link the spatial pattern of cities to the diversity in size, as well as the diversity in industrial composition among cities, are also discussed in detail, though the relevant theories are much less developed. For each aspect of the interdependence among the spatial patterns, size distribution, and industrial composition of cities, concrete facts are drawn from Japanese data to guide the discussion.
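The power-law regularity in city sizes is commonly checked with a rank-size (Zipf) regression, log(rank) = a − b·log(size); a slope b near 1 is the classic Zipf's law. A minimal sketch on synthetic city sizes (the data are generated to follow Zipf's law exactly, purely for illustration):

```python
import math

def zipf_slope(sizes):
    """OLS slope of log(rank) on log(size), reported as a positive Zipf coefficient."""
    ranked = sorted(sizes, reverse=True)            # rank 1 = largest city
    xs = [math.log(s) for s in ranked]
    ys = [math.log(r) for r in range(1, len(ranked) + 1)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return -b  # slope is negative; flip sign to report the coefficient

# Synthetic sizes with size proportional to 1/rank (exact Zipf's law)
cities = [1000 / r for r in range(1, 21)]
print(zipf_slope(cities))  # close to 1.0
```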

Article

Dynamic stochastic general equilibrium (DSGE) modeling can be structured around six key criticisms leveled at the approach. The first is fundamental and common to macroeconomics and microeconomics alike—namely, problems with rationality and expected utility maximization (EUM). The second is that DSGE models examine fluctuations about an exogenous balanced growth path, leaving no role for endogenous growth. The third consists of a number of concerns associated with estimation. The fourth is another fundamental problem with any micro-founded macro-model—that of heterogeneity and aggregation. The fifth and sixth concern the rudimentary nature of earlier models, which lacked unemployment and a banking sector. A widely used and referenced example of DSGE modeling is the Smets–Wouters (SW) medium-sized NK model. The model features rational expectations and, in an environment of uncertainty, EUM by households and firms. Preferences are consistent with a nonstochastic exogenous balanced growth path about which the model is solved. The model can be estimated by a Bayesian systems estimation method and involves four types of representative agents (households, final goods producers, trade unions, and intermediate goods producers). The latter two produce differentiated labor and goods, respectively, and, in each period of time, consist of a proportion locked into existing contracts and the rest that can reoptimize. There is underemployment but no unemployment. Finally, an arbitrage condition imposed on the return on capital and bonds rules out financial frictions. Thus the model, which has become the gold standard for DSGE macro-modeling, features all six areas of concern. The model can be used as a platform to examine how the current generation of DSGE models has developed in these six dimensions. This modeling framework has also been used for macroeconomic policy design.

Article

The recent “replication crisis” in the social sciences has led to increased attention to what statistically significant results entail. There are many reasons why false positive results may be published in the scientific literature, such as low statistical power and “researcher degrees of freedom” in the analysis (where researchers, when testing a hypothesis, more or less actively seek to obtain results with p < .05). The results from three large replication projects in psychology, experimental economics, and the social sciences are discussed, with most of the focus on the last project, where the statistical power in the replications was substantially higher than in the other projects. The results suggest that a substantial share of published results in top journals do not replicate. While several replication indicators have been proposed, the main indicator of whether a result replicates is whether the replication study, using the same statistical test, finds a statistically significant effect (p < .05 in a two-sided test). For the project with very high statistical power, the various replication indicators agree to a larger extent than for the other replication projects, most likely because of the higher statistical power. While the replications discussed are mainly experiments, there is no reason to believe that replicability would be higher in other parts of economics and finance; if anything the opposite, owing to more researcher degrees of freedom. There is also a discussion of solutions to the often-observed low replicability, including lowering the p-value threshold for statistical significance to .005 and increasing the use of preanalysis plans and registered reports for new studies as well as replications, followed by a discussion of measures of peer beliefs.
Recent attempts to gauge, using prediction markets and surveys, the extent to which the academic community is aware of the limited reproducibility and can predict replication outcomes suggest that peer beliefs may be viewed as an additional reproducibility indicator.
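The main replication indicator described above (a statistically significant effect, p < .05 in a two-sided test, in the same direction as the original) is easy to state as code. A minimal sketch using a normal approximation; the function names and numbers are illustrative, not drawn from the replication projects discussed:

```python
import math

def two_sided_p(estimate, se):
    """Two-sided p-value for H0: effect = 0, under a normal approximation."""
    z = abs(estimate / se)
    return math.erfc(z / math.sqrt(2))  # P(|Z| > z) for standard normal Z

def replicates(orig_estimate, rep_estimate, rep_se, alpha=0.05):
    """Main indicator: significant in the replication, with the original sign."""
    same_sign = (orig_estimate > 0) == (rep_estimate > 0)
    return same_sign and two_sided_p(rep_estimate, rep_se) < alpha

# Illustrative numbers: original effect 0.5; two hypothetical replications
print(replicates(0.5, 0.30, 0.10))  # True: z = 3.0, p well below .05
print(replicates(0.5, 0.12, 0.10))  # False: z = 1.2, not significant
```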

Article

Stock-flow matching is a simple and elegant framework of dynamic trade in differentiated goods. Flows of entering traders match and exchange with the stocks of previously unsuccessful traders on the other side of the market. A buyer or seller who enters a market for a single, indivisible good such as a job or a home does not experience impediments to trade. All traders are fully informed about the available trading options; however, each of the available options in the stock on the other side of the market may or may not be suitable. If fortunate, this entering trader immediately finds a viable option in the stock of available opportunities and trade occurs straightaway. If unfortunate, none of the available opportunities suit the entrant. This buyer or seller now joins the stocks of unfulfilled traders who must wait for a new, suitable partner to enter. Three striking empirical regularities emerge from this microstructure. First, as the stock of buyers does not match with the stock of sellers, but with the flow of new sellers, the flow of new entrants becomes an important explanatory variable for aggregate trading rates. Second, the traders’ exit rates from the market are initially high, but if they fail to match quickly the exit rates become substantially slower. Third, these exit rates depend on different variables at different phases of an agent’s stay in the market. The probability that a new buyer will trade successfully depends only on the stock of sellers in the market. In contrast, the exit rate of an old buyer depends positively on the flow of new sellers, negatively on the stock of old buyers, and is independent of the stock of sellers. These three empirical relationships not only differ from those found in the familiar search literature but also conform to empirical evidence observed from unemployment outflows. Moreover, adopting the stock-flow approach enriches our understanding of output dynamics, employment flows, and aggregate economic performance. 
These trading mechanics generate endogenous price dispersion and price dynamics—prices depend on whether the buyer or the seller is the recent entrant, and on how many viable traders were waiting for the entrant, which varies over time. The stock-flow structure has provided insights about housing, temporary employment, and taxicab markets.

Article

Richard C. van Kleef, Thomas G. McGuire, Frederik T. Schut, and Wynand P. M. M. van de Ven

Many countries rely on social health insurance supplied by competing insurers to enhance fairness and efficiency in healthcare financing. Premiums in these settings are typically community rated per health plan. Though community rating can help achieve fairness objectives, it also leads to a variety of problems due to risk selection, that is, actions by consumers and insurers to exploit “unpriced risk” heterogeneity. From the viewpoint of a consumer, unpriced risk refers to the gap between her expected spending under a health plan and the net premium for that plan. Heterogeneity in unpriced risk can lead to selection by consumers in and out of insurance and between high- and low-value plans. These forms of risk selection can result in upward premium spirals, inefficient take-up of basic coverage, and inefficient sorting of consumers between high- and low-value plans. From the viewpoint of an insurer, unpriced risk refers to the gap between his expected costs under a certain contract and the revenues he receives for that contract. Heterogeneity in unpriced risk incentivizes insurers to alter their plan offerings in order to attract profitable people, resulting in inefficient plan design and possibly in the unavailability of high-quality care. Moreover, insurers have incentives to target profitable people via marketing tools and customer service, which—from a societal perspective—can be considered a waste of resources. Common tools to counteract selection problems are risk equalization, risk sharing, and risk rating of premiums. All three strategies reduce unpriced risk heterogeneity faced by insurers and thus diminish selection actions by insurers such as the altering of plan offerings. Risk rating of premiums also reduces unpriced risk heterogeneity faced by consumers and thus mitigates selection in and out of insurance and between high- and low-value plans. All three strategies, however, come with trade-offs. 
A smart blend takes advantage of the strengths, while reducing the weaknesses of each strategy. The optimal payment system configuration will depend on how a regulator weighs fairness and efficiency and on how the healthcare system is organized.

Article

Alessandro Casini and Pierre Perron

This article covers methodological issues related to estimation, testing, and computation for models involving structural changes. Our aim is to review developments as they relate to econometric applications based on linear models. Substantial advances have been made to cover models at a level of generality that allows a host of interesting practical applications. These include models with general stationary regressors and errors that can exhibit temporal dependence and heteroskedasticity, models with trending variables and possible unit roots, and cointegrated models, among others. Advances have been made pertaining to computational aspects of constructing estimates, their limit distributions, tests for structural changes, and methods to determine the number of changes present. A variety of topics are covered, including recent developments: testing for common breaks, models with endogenous regressors (emphasizing that simply using least squares is preferable to instrumental variables methods), quantile regressions, methods based on the Lasso, panel data models, testing for changes in forecast accuracy, factor models, and methods of inference based on a continuous-record asymptotic framework. Our focus is on so-called off-line methods, whereby one wants to retrospectively test for breaks in a given sample of data and form confidence intervals about the break dates. The aim is to provide readers with an overview of methods that are of direct use in practice, as opposed to issues mostly of theoretical interest.
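The simplest member of this family is the least-squares estimator of a single break date in the mean of a series: fit a separate mean before and after each candidate date and pick the date minimizing the total sum of squared residuals. A minimal sketch under that simple setting (the article covers far more general linear models; the data here are made up):

```python
def estimate_break(y, trim=2):
    """Least-squares estimate of a single break in the mean of y.

    Returns the index k such that y[:k] and y[k:] have separate means,
    chosen to minimize the total sum of squared residuals (SSR).
    trim keeps at least `trim` observations in each regime.
    """
    def ssr(seg):
        m = sum(seg) / len(seg)
        return sum((v - m) ** 2 for v in seg)

    best_date, best_ssr = None, float("inf")
    for k in range(trim, len(y) - trim):  # candidate break after index k-1
        total = ssr(y[:k]) + ssr(y[k:])
        if total < best_ssr:
            best_date, best_ssr = k, total
    return best_date

# Series with a mean shift from about 0 to about 5 at index 6
y = [0.1, -0.2, 0.0, 0.3, -0.1, 0.2, 5.1, 4.9, 5.2, 4.8, 5.0, 5.1]
print(estimate_break(y))  # 6
```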

Article

Structural vector autoregressions (SVARs) represent a prominent class of time series models used for macroeconomic analysis. The model consists of a set of multivariate linear autoregressive equations characterizing the joint dynamics of economic variables. The residuals of these equations are combinations of the underlying structural economic shocks, assumed to be orthogonal to each other. Using a minimal set of restrictions, these relations can be estimated—the so-called shock identification—and the variables can be expressed as linear functions of current and past structural shocks. The coefficients of these equations, called impulse response functions, represent the dynamic response of model variables to shocks. Several ways of identifying structural shocks have been proposed in the literature: short-run restrictions, long-run restrictions, and sign restrictions, to mention a few. SVAR models have been extensively employed to study the transmission mechanisms of macroeconomic shocks and to test economic theories. Special attention has been paid to monetary and fiscal policy shocks as well as other nonpolicy shocks like technology and financial shocks. In recent years, many advances have been made in terms of both theory and empirical strategies. Several works have contributed to extending the standard model to incorporate new features like large information sets, nonlinearities, and time-varying coefficients. New strategies to identify structural shocks have been designed, and new methods of inference have been introduced.
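As a concrete illustration of short-run (recursive) identification, here is a minimal pure-Python sketch for a bivariate VAR(1): the impulse response at horizon h is A^h·B, where A is the autoregressive matrix and B is the lower-triangular Cholesky factor of the residual covariance (so the second structural shock has no impact effect on the first variable). All numbers are made up for illustration:

```python
def matmul(X, Y):
    """Product of two 2x2 matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def cholesky2(S):
    """Lower-triangular Cholesky factor of a 2x2 covariance matrix."""
    l11 = S[0][0] ** 0.5
    l21 = S[1][0] / l11
    l22 = (S[1][1] - l21 ** 2) ** 0.5
    return [[l11, 0.0], [l21, l22]]

def impulse_responses(A, Sigma, horizons):
    """Structural IRFs for a bivariate VAR(1): y_t = A y_{t-1} + u_t, u_t = B e_t.

    Returns a list of 2x2 matrices; element [i][j] at horizon h is the response
    of variable i to a one-standard-deviation structural shock j.
    """
    B = cholesky2(Sigma)  # short-run identification via Cholesky ordering
    irf, Ah = [], [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(horizons + 1):
        irf.append(matmul(Ah, B))  # IRF at horizon h equals A^h B
        Ah = matmul(A, Ah)
    return irf

A = [[0.5, 0.1], [0.2, 0.4]]      # autoregressive coefficients (illustrative)
Sigma = [[1.0, 0.3], [0.3, 0.5]]  # residual covariance (illustrative)
irfs = impulse_responses(A, Sigma, horizons=4)
# On impact, variable 1 does not respond to shock 2: the identifying restriction
print(irfs[0][0][1])  # 0.0
```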