The pecking order theory of corporate capital structure, developed by Myers and Majluf (1984), states that issuing securities is subject to an adverse selection problem. Managers endowed with private information have incentives to issue overpriced risky securities. But they also understand that issuing such securities will trigger a negative price reaction, because rational investors, who are at an information disadvantage, discount the prices of any risky securities the firm issues. Consequently, firms follow a pecking order: use internal resources when possible; if internal funds are inadequate, obtain external debt; external equity is the last resort. Large firms rely heavily on internal finance to meet their needs, external net debt issues finance the minor deficits that remain, and equity is not a significant source of financing for large firms. By contrast, small firms lack sufficient internal resources and obtain external finance; although much of it is equity, small firms also issue substantial amounts of debt. Firm-year observations can be sorted into three groups based on whether the firm runs a financing surplus, a rough balance, or a deficit. About 15% of firm-year observations are in the surplus group; these firms primarily use surpluses to pay down debt. About 56% of firm-year observations are in the balance group; these firms generate internal cash flows that are just about enough to meet their investment and dividend needs, issue debt that is just enough to cover their debt repayments, and are relatively inactive in equity markets. About 29% of firm-year observations are in the deficit group; deficits arise from a combination of negative profitability and significant investments in both real and financial assets. Some financing patterns in the data are consistent with a pecking order: firms with moderate deficits favor debt issues, while firms with very high deficits rely much more on equity than debt.
Others are not: many equity-issuing firms do not seem to have entirely used up their debt capacity, and some firms with a surplus issue equity. The theory predicts a sharp discontinuity in financing methods between surplus firms and deficit firms, and another at debt capacity, but the literature provides little support for these threshold effects. Theoretical work has shown that adverse selection does not necessarily lead to pecking order behavior; the pecking order is obtained only under special conditions. When both risky debt and equity are issued, there is often scope for many equilibria, and there is no clear basis for selecting among them, so a pecking order may or may not emerge from the theory. Several articles show that the adverse selection problem can be solved by certain financing strategies or properly designed managerial contracts, and can even disappear in dynamic models. Although adverse selection can generate a pecking order, pecking order behavior can also arise from agency considerations, transaction costs, tax considerations, or behavioral decision-making. Under standard tests in the literature, these alternative underlying motivations are commonly observationally equivalent.
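The surplus/balance/deficit sorting can be sketched as a simple classification rule. Below is a minimal illustration, assuming a hypothetical definition of the financing deficit (investment plus dividends minus internal cash flow, scaled by assets) and an invented tolerance band; the article's exact variable definitions and cutoffs are not reproduced here.

```python
def financing_deficit(investment, dividends, cash_flow, assets):
    """Financing deficit scaled by total assets (illustrative definition)."""
    return (investment + dividends - cash_flow) / assets


def classify_firm_year(deficit, band=0.01):
    """Sort a firm-year into the surplus / balance / deficit group.

    `band` is a hypothetical tolerance around zero; the underlying
    article's exact cutoffs are not specified here.
    """
    if deficit < -band:
        return "surplus"  # internal funds exceed needs; typically used to retire debt
    if deficit > band:
        return "deficit"  # must be covered by external debt or equity
    return "balance"      # internal cash flow roughly covers investment and dividends


# Example firm-years: (investment, dividends, internal cash flow, total assets)
firms = [(50, 10, 80, 1000), (50, 10, 59, 1000), (120, 10, 40, 1000)]
labels = [classify_firm_year(financing_deficit(*f)) for f in firms]
print(labels)  # ['surplus', 'balance', 'deficit']
```

Under the pecking order, each label maps to a predicted financing response: surplus firms retire debt, balance firms roll over debt, and deficit firms issue debt before equity.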
Murray Z. Frank, Vidhan Goyal, and Tao Shen
Jordan Everson and Melinda Beeuwkes Buntin
The potential for health information technology (HIT) to reshape the information-intensive healthcare industry has been recognized for decades. Nevertheless, the adoption and use of IT in healthcare have lagged behind other industries, motivating governments to take a role in supporting its use to achieve envisioned benefits. This dynamic has led to three major strands of research. Firstly, the relatively slow and uneven adoption of HIT, coupled with government programs intended to speed adoption, has raised the issue of who is adopting HIT and what impact public programs have on rates of adoption and diffusion. Secondly, the realization of benefits from HIT appears to be occurring more slowly than its proponents had hoped, leading to an ongoing need to empirically measure the effect of its use on the quality and efficiency of healthcare, as well as the contexts under which benefits are best realized. Thirdly, increases in the adoption and use of HIT have created the potential for interoperable exchange of patient information and the dynamic use of that information to drive improvements in the healthcare delivery system; however, these applications require developing new approaches to overcoming barriers to collaboration between healthcare organizations and the HIT industry itself. Running through each of these issues is the interaction between HIT as a tool for standardization and systemic change in the practice of healthcare, and healthcare professionals’ desire to preserve autonomy within the increasingly structured healthcare delivery system. Innovative approaches to improving the interactions between professionals, technology, and market forces are therefore necessary to capitalize on the promise of HIT and develop a continually learning health system.
Hendrik Schmitz and Svenja Winkler
Information and risk aversion play central roles in healthcare economics. While risk aversion is among the main reasons for the existence of health insurance, information asymmetries between the insured individual and the insurance company potentially lead to moral hazard or adverse selection. This has implications for the optimal design of health insurance contracts, but whether there is indeed moral hazard or adverse selection is ultimately an empirical question. Recently, there has even been a debate over whether the opposite of adverse selection—advantageous selection—prevails: private information on risk aversion might outweigh information asymmetries regarding risk type and lead to healthy individuals holding more insurance coverage (instead of less, as under adverse selection). Information and risk preferences are important not only in health insurance but in health economics more generally. For instance, they affect health behavior and, consequently, health outcomes. The degree of risk aversion, the ability to perceive risks, and the availability of information about risks partly explain why some individuals engage in unhealthy behavior while others refrain from smoking, drinking, or the like. Information has several dimensions. Apart from information on one’s personal health status, risk preferences, or health risks, consumer information on provider quality or health insurance supply is central in the economics of healthcare. Even though healthcare systems are necessarily highly regulated throughout the world, all systems allow for at least some market elements. These typically include the possibility of consumer choice, for instance regarding health insurance coverage or choice of medical provider. An important question is whether consumer choice elements work in the healthcare sector—that is, whether consumers actually make rational or optimal decisions—and whether more information can improve decision quality.
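The advantageous-selection logic can be made concrete with a small simulation. This is a sketch under stark, hypothetical assumptions (risk aversion drives both insurance demand and healthy behavior; every parameter value is invented for illustration), not a model from the literature.

```python
import random

random.seed(1)

# Hypothetical population: risk aversion drives BOTH insurance demand and
# healthy behavior, so private information on risk aversion can offset
# private information on risk type.
people = [random.random() for _ in range(10_000)]   # risk aversion in [0, 1]
insured = [ra > 0.5 for ra in people]               # more risk-averse buy coverage
sick = [random.random() < 0.3 - 0.2 * ra for ra in people]  # risk-averse are healthier


def share_sick(flags):
    """Fraction of a subgroup that falls ill."""
    return sum(flags) / len(flags)


risk_insured = share_sick([s for s, i in zip(sick, insured) if i])
risk_uninsured = share_sick([s for s, i in zip(sick, insured) if not i])

# Advantageous selection: the insured are, on average, the healthier group.
print(risk_insured < risk_uninsured)  # True
```

Reversing the sign of the behavioral channel (risk-averse individuals being sicker, or insurance demand driven purely by risk type) would restore the classic adverse-selection pattern, which is why the direction of selection is an empirical question.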
Ching-to Albert Ma and Henry Y. Mak
Health services providers receive payments mostly from private or public insurers rather than patients. Provider incentive problems arise because an insurer lacks information about the provider and patients, and has imperfect control over the provider’s treatment, quality, and cost decisions. Different provider payment systems, such as prospective payment, capitation, cost reimbursement, fee-for-service, and value-based payment, generate different treatment quality and cost incentives. The key point is that a payment system implements an efficient quality-cost outcome if and only if it makes the provider internalize the social benefits and costs of services. This internalization principle can therefore be used to evaluate payment systems across different settings. The most common payment systems are prospective payment, which pays a fixed price for the service rendered, and cost reimbursement, which pays according to the cost of the service rendered. In a setting where the provider chooses health service quality and cost-reduction effort, prospective payment satisfies the internalization principle but cost reimbursement does not: prospective payment makes the provider responsible for cost, whereas cost reimbursement relieves the provider of that responsibility. Beyond this simple setting, the provider may select patients based on patients’ cost heterogeneity. Then neither prospective payment nor cost reimbursement achieves efficient quality and cost incentives, but a mixed system that combines the two performs better than either component alone. In general, the provider’s preferences and available strategies determine whether a payment system can achieve internalization. If the provider is altruistic toward patients, prospective payment can be adjusted to accommodate altruism when the provider’s degree of altruism is known to the insurer.
However, when the degree of altruism is unknown, even a mixed system may fail the internalization principle. The principle also fails under prospective payment when the provider can upcode patient diagnoses for more favorable prices; cost reimbursement attenuates the upcoding incentive. Finally, when the provider can choose many qualities, either prospective payment or cost reimbursement should be combined with insurer disclosure of quality and cost information to satisfy the internalization principle. When good healthcare quality is interpreted as a good match between patients and treatments, the goal of payment design is to promote good matches. The internalization principle then requires the provider to bear the benefits and costs of diagnosis effort and treatment choice, and a mixed system may deliver efficient matching incentives. Payment systems necessarily interact with other incentive mechanisms, such as patients’ responses to the provider’s quality choices and other providers’ competitive strategies; payment systems then become part of organizational incentives.
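The contrast between prospective payment, cost reimbursement, and a mixed system can be illustrated with a toy model of cost-reduction effort. This is a minimal sketch assuming hypothetical functional forms (linear cost reduction, quadratic effort disutility) that are not taken from the article; the only point it demonstrates is that the provider's effort tracks the share of cost it bears.

```python
def provider_payoff(effort, cost_share, price=12.0, c0=10.0):
    """Provider payoff when the payment rule leaves the provider bearing
    a fraction `cost_share` of realized cost (hypothetical functional forms).

    cost_share = 1.0 mimics prospective payment (fixed price, provider pays cost);
    cost_share = 0.0 mimics cost reimbursement (insurer covers all cost).
    Realized cost is c0 - effort, and effort carries disutility effort**2 / 2.
    """
    cost = c0 - effort
    return price - cost_share * cost - effort ** 2 / 2


def optimal_effort(cost_share, grid_size=1001):
    """Grid-search the cost-reduction effort in [0, 2] maximizing the payoff."""
    grid = [2 * i / (grid_size - 1) for i in range(grid_size)]
    return max(grid, key=lambda e: provider_payoff(e, cost_share))


print(optimal_effort(1.0))  # prospective payment: full cost internalization
print(optimal_effort(0.0))  # cost reimbursement: no cost-reduction incentive
print(optimal_effort(0.5))  # mixed system: intermediate effort
```

In this toy model the first-order condition gives optimal effort equal to the cost share, so prospective payment (share 1) induces the efficient effort while cost reimbursement (share 0) induces none; a mixed system trades off this effort incentive against the patient-selection and upcoding problems described above.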
Stephen F. Diamond
Insider trading is not widely understood. Insiders of corporations can, in fact, buy and sell shares of those corporations. But, over time, Congress, the courts, and the Securities and Exchange Commission (SEC) have imposed significant limits on such trading. The limits are not always clearly marked, and the principles underlying them are not always consistent. The core principle is that it is illegal to trade while in possession of material, nonpublic information. But the rationality of this principle has been challenged by successive generations of law and economics scholars, most notably Manne, Easterbrook, Epstein, and Bainbridge. Their “economic” analysis of this contested area of the law arguably provides at least a more consistent basis upon which to decide when trades by insiders should, in fact, be disallowed. A return to genuine “first principles” generated by the nature of capitalism, however, allows for more powerful insights into the phenomenon and could lead to more effective regulation.
Matteo Lippi Bruni, Irene Mammi, and Rossella Verzulli
In developed countries, the role of public authorities as financing bodies and regulators of the long-term care sector is pervasive and calls for well-planned and well-informed policy actions. Poor quality in nursing homes has been a recurrent concern at least since the 1980s and has triggered a heated policy and scholarly debate. The economic literature on nursing home quality has thoroughly investigated the impact of regulatory interventions and of market characteristics on an array of input-, process-, and outcome-based quality measures. Most existing studies refer to the U.S. context, even though important insights can also be drawn from the smaller set of works covering European countries. The major contribution of health economics to the empirical analysis of the nursing home industry is the introduction of important methodological advances, applying rigorous policy evaluation techniques with the purpose of properly identifying the causal effects of interest. In addition, the increased availability of rich datasets covering either process or outcome measures has made it possible to investigate changes in nursing home quality while properly accounting for its multidimensional features. The use of up-to-date econometric methods that, in most cases, exploit policy shocks and longitudinal data has allowed researchers to achieve causal identification and an accurate quantification of the impact of a wide range of policy initiatives, including the introduction of nurse staffing thresholds, price regulation, and public reporting of quality indicators. This has helped to counteract part of the contradictory evidence highlighted by the strand of work based on more descriptive approaches. Possible lines for future research include further exploration of the consequences of policy interventions for equity and access to nursing home care.
The current discontent with the dominant macroeconomic theory paradigm, known as Dynamic Stochastic General Equilibrium (DSGE) models, calls for an appraisal of the methods and strategies employed in studying and modeling macroeconomic phenomena using aggregate time series data. The appraisal pertains to the effectiveness of these methods and strategies in accomplishing the primary objective of empirical modeling: to learn from data about phenomena of interest. The co-occurring developments in macroeconomics and econometrics since the 1930s provide the backdrop for the appraisal, with the Keynes vs. Tinbergen controversy at center stage. The overall appraisal is that the DSGE paradigm gives rise to estimated structural models that are both statistically and substantively misspecified, yielding untrustworthy evidence that contributes very little, if anything, to real learning from data about macroeconomic phenomena. A primary contributor to the untrustworthiness of evidence is the traditional econometric perspective of viewing empirical modeling as curve-fitting (structural models), guided by impromptu error-term assumptions and evaluated on goodness-of-fit grounds. Regrettably, excellent fit is neither necessary nor sufficient for the reliability of inference and the trustworthiness of the ensuing evidence. Recommendations on how to improve the trustworthiness of empirical evidence revolve around a broader model-based (non-curve-fitting) modeling framework that attributes cardinal roles to both theory and data without undermining the credibility of either source of information. Two crucial distinctions hold the key to securing the trustworthiness of evidence. The first distinguishes between modeling (specification, misspecification testing, and respecification) and inference, and the second between a substantive (structural) model and a statistical model (the probabilistic assumptions imposed on the particular data).
This enables one to establish statistical adequacy (the validity of the probabilistic assumptions) before relating the statistical model to the structural model and posing questions of interest to the data. The greatest enemy of learning from data about macroeconomic phenomena is not the absence of an alternative and more coherent empirical modeling framework, but the illusion that foisting highly formal structural models on the data can give rise to such learning just because their construction and curve-fitting rely on seemingly sophisticated tools. Regrettably, applying sophisticated tools to a statistically and substantively misspecified DSGE model does nothing to restore the trustworthiness of the evidence stemming from it.
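The idea of checking a statistical model's probabilistic assumptions before drawing inferences can be illustrated with one simple misspecification test. Below is a pure-Python sketch on simulated data: the Durbin-Watson statistic probes the no-autocorrelation assumption on regression residuals. All data-generating choices here are hypothetical, and this single test is only one ingredient of a full misspecification-testing battery.

```python
import random
import statistics


def ols_residuals(x, y):
    """Residuals from a simple OLS fit y = a + b*x (pure Python)."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum(
        (xi - mx) ** 2 for xi in x
    )
    a = my - b * mx
    return [yi - (a + b * xi) for xi, yi in zip(x, y)]


def durbin_watson(resid):
    """Durbin-Watson statistic: near 2 when residuals show no first-order
    autocorrelation, well below 2 under positive autocorrelation."""
    num = sum((resid[t] - resid[t - 1]) ** 2 for t in range(1, len(resid)))
    return num / sum(e ** 2 for e in resid)


random.seed(0)
n = 500
x = list(range(n))

# Statistically adequate case: independent errors, so DW should be near 2.
y_ok = [1.0 + 0.5 * xi + random.gauss(0, 1) for xi in x]

# Misspecified case: strongly autocorrelated errors drag DW far below 2,
# flagging that the imposed probabilistic assumptions are invalid.
u, y_bad = 0.0, []
for xi in x:
    u = 0.9 * u + random.gauss(0, 1)
    y_bad.append(1.0 + 0.5 * xi + u)

print(round(durbin_watson(ols_residuals(x, y_ok)), 2))   # close to 2
print(round(durbin_watson(ols_residuals(x, y_bad)), 2))  # far below 2
```

Both fitted models can achieve excellent in-sample fit, which is precisely the point of the abstract: goodness of fit says nothing about whether the residuals validate the assumptions on which any subsequent inference rests.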