
Article

The recent “replication crisis” in the social sciences has led to increased attention on what statistically significant results entail. There are many reasons why false positive results may be published in the scientific literature, such as low statistical power and “researcher degrees of freedom” in the analysis (where researchers, when testing a hypothesis, more or less actively seek out results with p < .05). The results from three large replication projects in psychology, experimental economics, and the social sciences are discussed, with most of the focus on the last project, where the statistical power in the replications was substantially higher than in the other projects. The results suggest that a substantial share of published results in top journals do not replicate. While several replication indicators have been proposed, the main indicator for whether a result replicates is whether the replication study, using the same statistical test, finds a statistically significant effect (p < .05 in a two-sided test). For the project with very high statistical power, the various replication indicators agree to a larger extent than for the other replication projects, most likely because of the higher statistical power. While the replications discussed are mainly experiments, there is no reason to believe that replicability would be higher in other parts of economics and finance; if anything, the opposite is likely, owing to more researcher degrees of freedom. There is also a discussion of solutions to the often-observed low replicability, including lowering the p-value threshold for statistical significance to .005 and increasing the use of preanalysis plans and registered reports for new studies as well as replications, followed by a discussion of measures of peer beliefs.
Recent attempts to understand to what extent the academic community is aware of the limited reproducibility and can predict replication outcomes using prediction markets and surveys suggest that peer beliefs may be viewed as an additional reproducibility indicator.
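The role of statistical power in these replication projects can be illustrated with a standard power calculation. The sketch below is illustrative (the effect size and sample size are made-up numbers, not drawn from the projects themselves): it approximates the power of a two-sided, two-sample z-test, and shows how lowering the significance threshold from .05 to .005 reduces power at a fixed sample size.

```python
from statistics import NormalDist

def replication_power(d: float, n_per_arm: int, alpha: float = 0.05) -> float:
    """Approximate power of a two-sided two-sample z-test to detect a
    standardized effect size d with n_per_arm observations per arm."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)       # two-sided critical value
    ncp = d * (n_per_arm / 2) ** 0.5        # mean of the test statistic under d
    # Probability the test statistic falls beyond either critical value
    return (1 - z.cdf(z_crit - ncp)) + z.cdf(-z_crit - ncp)

# With d = 0.5 and 64 subjects per arm, power is about .81 at p < .05,
# but only about .51 at the stricter p < .005 threshold.
power_05 = replication_power(0.5, 64, 0.05)
power_005 = replication_power(0.5, 64, 0.005)
```

The comparison makes concrete why proposals to lower the significance threshold are usually discussed alongside the need for larger samples.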

Article

Stock-flow matching is a simple and elegant framework of dynamic trade in differentiated goods. Flows of entering traders match and exchange with the stocks of previously unsuccessful traders on the other side of the market. A buyer or seller who enters a market for a single, indivisible good such as a job or a home does not experience impediments to trade. All traders are fully informed about the available trading options; however, each of the available options in the stock on the other side of the market may or may not be suitable. If fortunate, this entering trader immediately finds a viable option in the stock of available opportunities and trade occurs straightaway. If unfortunate, none of the available opportunities suit the entrant. This buyer or seller now joins the stocks of unfulfilled traders who must wait for a new, suitable partner to enter. Three striking empirical regularities emerge from this microstructure. First, as the stock of buyers does not match with the stock of sellers, but with the flow of new sellers, the flow of new entrants becomes an important explanatory variable for aggregate trading rates. Second, the traders’ exit rates from the market are initially high, but if they fail to match quickly the exit rates become substantially slower. Third, these exit rates depend on different variables at different phases of an agent’s stay in the market. The probability that a new buyer will trade successfully depends only on the stock of sellers in the market. In contrast, the exit rate of an old buyer depends positively on the flow of new sellers, negatively on the stock of old buyers, and is independent of the stock of sellers. These three empirical relationships not only differ from those found in the familiar search literature but also conform to empirical evidence observed from unemployment outflows. Moreover, adopting the stock-flow approach enriches our understanding of output dynamics, employment flows, and aggregate economic performance. 
These trading mechanics generate endogenous price dispersion and price dynamics—prices depend on whether the buyer or the seller is the recent entrant, and on how many viable traders were waiting for the entrant, which varies over time. The stock-flow structure has provided insights about housing, temporary employment, and taxicab markets.
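The asymmetry between new and old traders can be made concrete with a back-of-the-envelope calculation (the numbers and the independence assumption below are hypothetical, not taken from the literature). If each available option is suitable with some independent probability, an entrant who samples the whole stock on the other side matches with high probability, while an unmatched incumbent, who can only sample the smaller flow of new entrants each period, faces a much lower per-period hazard.

```python
def match_probability(options: int, p_suit: float) -> float:
    """Probability of finding at least one suitable partner among
    `options` independent opportunities, each suitable with p_suit."""
    return 1.0 - (1.0 - p_suit) ** options

# A new buyer samples the whole STOCK of waiting sellers (say 20 of them):
new_buyer_hazard = match_probability(20, 0.1)   # ~0.88
# An old buyer samples only the FLOW of new sellers (say 3 per period):
old_buyer_hazard = match_probability(3, 0.1)    # ~0.27
```

This is one way to see why exit rates are initially high and then slow, and why they depend on the stock of sellers for new buyers but on the flow of new sellers for old buyers.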

Article

Richard C. van Kleef, Thomas G. McGuire, Frederik T. Schut, and Wynand P. M. M. van de Ven

Many countries rely on social health insurance supplied by competing insurers to enhance fairness and efficiency in healthcare financing. Premiums in these settings are typically community rated per health plan. Though community rating can help achieve fairness objectives, it also leads to a variety of problems due to risk selection, that is, actions by consumers and insurers to exploit “unpriced risk” heterogeneity. From the viewpoint of a consumer, unpriced risk refers to the gap between her expected spending under a health plan and the net premium for that plan. Heterogeneity in unpriced risk can lead to selection by consumers in and out of insurance and between high- and low-value plans. These forms of risk selection can result in upward premium spirals, inefficient take-up of basic coverage, and inefficient sorting of consumers between high- and low-value plans. From the viewpoint of an insurer, unpriced risk refers to the gap between his expected costs under a certain contract and the revenues he receives for that contract. Heterogeneity in unpriced risk incentivizes insurers to alter their plan offerings in order to attract profitable people, resulting in inefficient plan design and possibly in the unavailability of high-quality care. Moreover, insurers have incentives to target profitable people via marketing tools and customer service, which—from a societal perspective—can be considered a waste of resources. Common tools to counteract selection problems are risk equalization, risk sharing, and risk rating of premiums. All three strategies reduce unpriced risk heterogeneity faced by insurers and thus diminish selection actions by insurers such as the altering of plan offerings. Risk rating of premiums also reduces unpriced risk heterogeneity faced by consumers and thus mitigates selection in and out of insurance and between high- and low-value plans. All three strategies, however, come with trade-offs. 
A smart blend takes advantage of the strengths of each strategy while reducing its weaknesses. The optimal configuration of the payment system will depend on how a regulator weighs fairness and efficiency and on how the healthcare system is organized.

Article

Alessandro Casini and Pierre Perron

This article covers methodological issues related to estimation, testing, and computation for models involving structural changes. Our aim is to review developments as they relate to econometric applications based on linear models. Substantial advances have been made to cover models at a level of generality that allows a host of interesting practical applications. These include models with general stationary regressors and errors that can exhibit temporal dependence and heteroskedasticity, models with trending variables and possible unit roots, and cointegrated models, among others. Advances have been made pertaining to computational aspects of constructing estimates, their limit distributions, tests for structural changes, and methods to determine the number of changes present. A variety of topics are covered, including recent developments: testing for common breaks, models with endogenous regressors (emphasizing that simply using least squares is preferable to instrumental variables methods), quantile regressions, methods based on Lasso, panel data models, testing for changes in forecast accuracy, factor models, and methods of inference based on a continuous records asymptotic framework. Our focus is on so-called off-line methods, whereby one wants to retrospectively test for breaks in a given sample of data and form confidence intervals about the break dates. The aim is to provide readers with an overview of methods that are of direct use in practice, as opposed to issues mostly of theoretical interest.
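A minimal sketch of the off-line estimation problem in its simplest form (the data and trimming choice below are hypothetical, and this is only the single-break, mean-shift special case of the general methods the article reviews): the break date can be estimated by least squares, choosing the split that minimizes the total sum of squared residuals over the two subsamples.

```python
import numpy as np

def estimate_break_date(y: np.ndarray, trim: float = 0.15) -> int:
    """Least-squares estimate of a single break date in the mean:
    pick the split minimizing the total sum of squared residuals."""
    T = len(y)
    lo, hi = int(trim * T), int((1 - trim) * T)   # trim the sample ends
    ssr = [np.sum((y[:k] - y[:k].mean()) ** 2) +
           np.sum((y[k:] - y[k:].mean()) ** 2) for k in range(lo, hi)]
    return lo + int(np.argmin(ssr))

rng = np.random.default_rng(0)
# Simulated series whose mean shifts from 0 to 2 at observation 100
y = np.concatenate([rng.normal(0, 1, 100), rng.normal(2, 1, 100)])
k_hat = estimate_break_date(y)   # close to the true break date of 100
```

Tests for the presence of a break, multiple breaks, and confidence intervals for the break dates build on this same sum-of-squared-residuals logic.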

Article

Structural vector autoregressions (SVARs) represent a prominent class of time series models used for macroeconomic analysis. The model consists of a set of multivariate linear autoregressive equations characterizing the joint dynamics of economic variables. The residuals of these equations are combinations of the underlying structural economic shocks, assumed to be orthogonal to each other. Using a minimal set of restrictions, these relations can be estimated—the so-called shock identification—and the variables can be expressed as linear functions of current and past structural shocks. The coefficients of these equations, called impulse response functions, represent the dynamic response of model variables to shocks. Several ways of identifying structural shocks have been proposed in the literature: short-run restrictions, long-run restrictions, and sign restrictions, to mention a few. SVAR models have been extensively employed to study the transmission mechanisms of macroeconomic shocks and test economic theories. Special attention has been paid to monetary and fiscal policy shocks as well as other nonpolicy shocks like technology and financial shocks. In recent years, many advances have been made both in terms of theory and empirical strategies. Several works have contributed to extend the standard model in order to incorporate new features like large information sets, nonlinearities, and time-varying coefficients. New strategies to identify structural shocks have been designed, and new methods to do inference have been introduced.
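A minimal sketch of the short-run (recursive) identification scheme mentioned above, using made-up reduced-form parameters: the Cholesky factor of the residual covariance maps orthogonal structural shocks into reduced-form residuals, and impulse response functions follow by iterating the autoregressive dynamics.

```python
import numpy as np

# Hypothetical reduced-form VAR(1): y_t = A @ y_{t-1} + e_t, with E[e e'] = Sigma
A = np.array([[0.5, 0.1],
              [0.2, 0.6]])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 0.5]])

# Short-run restrictions: e_t = B @ u_t with B lower triangular, so B is the
# Cholesky factor of Sigma and the structural shocks u_t are orthogonal.
B = np.linalg.cholesky(Sigma)

def irf(h: int) -> np.ndarray:
    """Impulse response at horizon h: response of y_{t+h} to unit shocks u_t."""
    return np.linalg.matrix_power(A, h) @ B

# Under this recursive ordering, variable 1 does not respond to shock 2 on impact:
impact = irf(0)   # impact[0, 1] == 0.0
```

Long-run and sign restrictions replace the triangularity assumption with other constraints, but the mapping from residuals to orthogonal shocks plays the same role.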

Article

Occupations are a key characteristic for analyzing momentous changes in economy and society. Classical economists rooted their analyses in occupational divisions, emphasizing the division of work and its continuous evolution. Modern economists and economic historians also debate the wealth of nations by looking at global changes in the labor force, at changing labor force participation rates, at winners and losers in the class structure, and at variations in all of these across the globe—stressing the importance of human capital for work and of changes therein for economic growth. To study such momentous changes over past centuries, historical occupational data are needed, as well as measures and procedures to work with these data systematically and comparatively. The Historical International Standard Classification of Occupations (HISCO) maps occupational titles into a common coding scheme across the globe. HISCO-based measures of economic sector and economic specialization have been derived. To answer a number of interesting questions, the HISCO family has been extended to include HISCO-based measures of social status (HISCAM) and social classes (HISCLASS). Armed with this toolbox, scholars are able to study the development of the economy and society over past centuries.

Article

Most developed nations provide generous coverage of care services, using either a tax-financed healthcare system or social health insurance. Such systems pursue efficiency and equity in care provision. Efficiency means that expenditures are minimized for a given level of care services. Equity means that individuals with equal needs have equal access to the benefit package. In order to limit expenditures, social health insurance systems explicitly limit their benefit package. Moreover, most such systems have introduced cost sharing so that beneficiaries bear some cost when using care services. These limits on coverage create room for private insurance that complements or supplements social health insurance. Everywhere, social health insurance coexists with voluntarily purchased supplementary private insurance. While the latter generally covers a small portion of health expenditures, it can interfere with the functioning of social health insurance. Supplementary health insurance can be detrimental to efficiency through several mechanisms. It limits competition in managed competition settings. It favors excessive care consumption through coverage of cost sharing and of services that are complementary to those included in social insurance benefits. It can also hinder achievement of the equity goals inherent to social insurance. Supplementary insurance creates inequality in access to services included in the social benefits package. Individuals with high incomes are more likely to buy supplementary insurance, and the additional care consumption resulting from better coverage creates additional costs that are borne by social health insurance. In addition, there are other anti-redistributive mechanisms from high to low risks. Social health insurance should be designed, not as an isolated institution, but with an awareness of the existence—and the possible expansion—of supplementary health insurance.

Article

Albert A. Okunade and Ahmad Reshad Osmani

Healthcare cost encompasses expenditures on the totality of scarce resources (implicit and explicit) given up (or allocated) to produce healthcare goods (e.g., drugs and medical devices) and services (e.g., hospital care and physician office services are major components). Healthcare cost accounting components (sources and uses of funds) tend to differ but can be similar enough across most countries. The healthcare cost concept usually differs for consumers, politicians and health policy decision-makers, health insurers, employers, and the government. All else given, inefficient healthcare production implies higher economic cost and lower productivity of the resources deployed in the process. Healthcare productivity varies across countries’ health systems, the production technologies used, regulatory instruments, and institutional settings. Healthcare production often involves some specific (e.g., drugs and medical devices, information and communication technologies) or general technology for diagnosing, treating, or curing diseases in order to improve or restore human health. In the last half century, healthcare systems around the world have undergone fundamental transformations in structural design, institutional regulation, and socio-economic and demographic dimensions. Nations have allocated a rising share of total economic resources (i.e., gross domestic product, or GDP) to the healthcare sector and are consequently enjoying substantial increases in population health status and life expectancies. There are complex and interacting linkages among escalating healthcare costs, longer life expectancies, technological progress (or “the march of science”), and sectoral productivities in the health services sectors of the advanced economies.
Healthcare policy debates often concentrate on cost-containment strategies and the search for more efficient resource allocation and equitable distribution of the sector’s outputs. Consequently, this contribution is a broad review of the literature on technological progress, productivity, and cost: three important dimensions of evolving modern healthcare systems. It provides a logical integration of three strands of work linking healthcare cost to technology and research evidence on sectoral productivity measurement. Finally, some important limitations of the existing studies are noted to motivate new directions for future research in the growing health sectors of modern economies.

Article

Sheelah Connolly

In the coming years, it is predicted that there will be a significant increase in the number of people living with dementia and, consequently, in the demand for health and social care services. Given the budget constraints facing health systems, it is anticipated that economic analysis will play an increasingly important role in informing decisions regarding the provision of services for people with dementia. However, compared with other conditions and diseases, economic research on dementia has been relatively limited. While in the past this may have been related to an assumption that dementia was a natural part of aging, there are features of dementia that make applying research methods particularly challenging. A number of economic methods have been applied to dementia, including cost-of-illness analysis and economic evaluation; however, methodological issues in this area persist. These include reaching a consensus on how best to measure and value informal care, how to capture the many impacts and costs of the condition as the disease progresses, and how to measure health outcomes. Addressing these existing methodological issues will help realize the potential of economic analysis in answering difficult questions around care for people with dementia.

Article

A recent body of literature on quantitative general equilibrium models links the creation and diffusion of knowledge and technology to openness to international trade and to the activity of multinational firms. The unifying theme of this literature is methodological: productivities are Fréchet random variables and arise from Poisson innovation and diffusion processes for ideas. The main advantage of this modeling strategy is that it delivers closed-form solutions for key endogenous variables that have a direct counterpart in the data (e.g., prices, trade flows). This tractability makes the connection between theory and data transparent, helps clarify the determinants of the gains from openness, and facilitates the calculation of counterfactual equilibria.
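A sketch of the kind of closed form this tractability delivers, in the spirit of this literature (often associated with the Eaton–Kortum model; all parameter values below are made up for illustration): with Fréchet-distributed productivities with technology levels T_i and shape theta, the probability that country n sources from country i, which equals its expenditure share, is T_i (c_i d_ni)^(-theta) divided by the sum of the same term over all sources.

```python
import numpy as np

# Hypothetical parameters: technology levels T, input costs c,
# iceberg trade costs d[n, i] >= 1, and trade elasticity theta.
theta = 4.0
T = np.array([1.0, 1.5, 0.8])
c = np.array([1.0, 1.2, 0.9])
d = np.array([[1.0, 1.3, 1.4],
              [1.3, 1.0, 1.2],
              [1.4, 1.2, 1.0]])

# Closed-form trade shares:
#   pi[n, i] = T_i * (c_i * d_ni)**(-theta) / sum_k T_k * (c_k * d_nk)**(-theta)
phi = T * (c * d) ** (-theta)                 # competitiveness of source i in market n
pi = phi / phi.sum(axis=1, keepdims=True)     # expenditure shares by destination
```

Because trade shares have this closed form, counterfactuals (say, lowering a trade cost d[n, i]) amount to recomputing a few lines of algebra rather than solving the model numerically.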

Article

The key question for the economics of international migration is whether observed real wage differentials across countries for workers with identical intrinsic productivity represent an economic inefficiency sustained by legal barriers to labor mobility between geographies. A simple comparison of the real wages of workers with the same level of formal schooling or performing similar occupations across countries shows massive gaps between rich and poorer countries. These gaps persist after adjusting for observed and unobserved human capital characteristics, suggesting a “place premium”—or space-specific wage differentials that are not due to intrinsic worker productivity but rather to a misallocation of labor. If wage gaps are not due to intrinsic worker productivity, then the incentive for workers to move to richer countries is high. The idea of a place premium is corroborated by macroeconomic evidence. National accounts data show large cross-country differences in output per worker, driven by the divergence of total factor productivity. The lack of convergence in total factor productivity and the corresponding spatial productivity differentials create differences in the marginal product of factors, and hence persistent gaps in the wages of equally productive workers. These differentials can equalize through factor flows; however, their persistence and large magnitude in the case of labor suggest that legal barriers to migration restricting labor flows are in fact constraining significant returns on human capital and leaving billions in unrealized gains to the world’s workers and the global economy. A relaxation of these barriers would generate worker welfare gains that dwarf those of gold-standard poverty reduction programs.

Article

Used for hundreds of years and adapted to a variety of contexts, arbitration is a form of adjudicative dispute settlement where parties consent to selecting third-party neutrals that resolve a specific dispute by applying the applicable law to the facts. Part of arbitration’s success involves its flexibility in adapting procedures and selecting applicable law to meet parties’ unique needs, including having some control over the appointment of an arbitrator who may have unique substantive expertise. Parties may agree to arbitration hoping to avoid the time-consuming, expensive, and complex process of litigation by streamlining or tailoring dispute mechanics. Yet, it is not empirically verifiable that arbitration always saves time and costs, as assessing relative savings requires comparison to a national court and there are over 190 national judiciaries to which arbitration could be compared, as well as nonadjudicative forms of dispute resolution like direct negotiation and mediation. As parties inevitably negotiate in the “shadow of the law,” arbitration aids the assessment of conflict management options; and, particularly internationally, arbitration remains a powerful tool that incentivizes voluntary compliance with awards and streamlines enforcement. Despite the availability of many types of arbitration with different policy considerations, the parties’ consent to it and their agreement to arbitrate (including the applicable law) is the backbone of this form of dispute settlement. Arbitration agreements require parties to make core choices, such as deciding on the scope of agreements submitted to arbitration, the legal place of arbitration, and applicable rules. Such an agreement then provides the framework for fundamental elements of the proceedings, namely, the basis of the tribunal’s jurisdiction and power over the dispute, the standards for appointing arbitrators, the structure and rules of the proceedings, and the content and form of derivative awards. 
Having a valid arbitration agreement (and an arbitration proceeding conducted in accordance with those legal obligations) also influences whether courts at the place of arbitration will set the award aside and whether courts at a place of enforcement will recognize and enforce an arbitration award. In the modern era, arbitration will continue evolving to address concerns about local policy considerations (particularly in national arbitration), confidentiality and ethics, technology and cybersecurity, diversity and inclusion, and to ensure arbitration is an ongoing value proposition.

Article

Italy played a central role in the Euro-Mediterranean economy during Antiquity, the late Middle Ages, and the Renaissance. Until the end of the 16th century, the Italian economy was relatively advanced compared with those of the Western European and Mediterranean countries. From the 17th century until the end of the 19th, GDP rose as the population increased. Yet per capita income slowly diminished together with real wages, urbanization, and living standards. Italy lost its central position in the Euro-Mediterranean world and, until the end of the 19th century, was a relatively backward area on the periphery of the most dynamic countries in the north and center of Europe. The Italian premodern economy represents a classic example of extensive growth or GDP growth without improvement in per capita income and living standards.

Article

Law and economics has proved a particularly fruitful scholarly approach in the field of mergers and acquisitions. A huge law and economics literature has developed, providing critical insights into merger activity in general and the proper role of corporate and securities law in regulating this activity. Early economic research examined the motivations for merger activity and the antitrust implications of mergers. Later scholarship elucidated the important disciplining effects on management from merger activity and the market for corporate control. If management performs poorly, causing a firm to become undervalued relative to a well-managed firm, the firm becomes vulnerable to a takeover in which management will be replaced. This prospect provides a powerful incentive for management to perform well. More recent work has revealed the limitations of market discipline on management actions in the merger context, and the corresponding role of corporate law in protecting stockholders. Because a merger is generally the final interaction between management and the other stakeholders in a firm, the typical constraints and mechanisms of accountability that otherwise constrain managerial opportunism may be rendered ineffective. This work has played a central role in informing modern jurisprudence. It has shaped the application of enhanced judicial scrutiny of management actions in the merger context, as embodied in the landmark Delaware cases Unocal and Revlon. The law and economics literature has also made important contributions to more recent developments in stockholder appraisal, and it has provided a useful framework for evaluating the dynamics of merger litigation, including the extent to which such litigation can be made to serve a useful role in corporate governance.

Article

The Ottoman Empire stood at the crossroads of intercontinental trade for six centuries until World War I. For most of its existence, the economic institutions and policies of this agrarian empire were shaped according to the distribution of political power, cooperation, conflicts, and struggles between the state elites and the various other elites, including those in the provinces. The central bureaucracy managed to contain the many challenges it faced with its pragmatism and habit of negotiation to co-opt and incorporate into the state the social groups that rebelled against it. As long as the activities of the economic elites, landowners, merchants, the leading artisans, and the moneylenders contributed to the perpetuation of this social order, the state encouraged and supported them but did not welcome their rapid enrichment. The influence of these elites over economic matters, and more generally over the policies of the central government, remained limited. Cooperation and coordination among the provincial elites was also made more difficult by the fact that the empire covered a large geographical area, and the different ethnic groups and their elites did not always act together. Differences in government policies and the institutional environment between Western Europe and the Middle East remained limited until the early modern era. With the rise of the Atlantic trade, however, the merchants in northwestern European countries increased their economic and political power substantially. They were then able to induce their governments to defend and develop their commercial interests in the Middle East more forcefully. As they began to lag behind the European merchants even in their own region, it became even more difficult for the Ottoman merchants to provide input into their government’s trade policies or change the commercial or economic institutions in the direction they preferred. 
Key economic institutions of the traditional Ottoman order, such as state ownership of land, urban guilds, and selective interventionism, remained mostly intact until 1820. In the early part of the 19th century, the center, supported by the new technologies, embarked on an ambitious reform program and was able to reassert its power over the provinces. Centralization and reforms were accompanied by the opening of the economy to international trade and investment. Economic policies and institutional changes in the Ottoman Empire began to reflect the growing power of European states and companies during the 19th century.

Article

Thomas J. Kniesner and W. Kip Viscusi

The value of a statistical life (VSL) is the local tradeoff rate between fatality risk and money. When the tradeoff values are derived from choices in market contexts, the VSL serves as both a measure of the population’s willingness to pay for risk reduction and the marginal cost of enhancing safety. Given its fundamental economic role, policy analysts have adopted the VSL as the economically correct measure of the benefit individuals receive from enhancements to their health and safety. Estimates of the VSL for the United States are around $10 million ($2017), and estimates for other countries are generally lower given the positive income elasticity of the VSL. Because of the prominence of mortality risk reductions as the justification for government policies, the VSL is a crucial component of the benefit-cost analyses that are part of the regulatory process in the United States and other countries. The VSL is also foundationally related to the concepts of the value of a statistical life year (VSLY) and the value of a statistical injury (VSI), which also permeate the labor and health economics literatures. Thus, the same types of valuation approaches can be used to monetize nonfatal injuries and mortality risks that pose very small effects on life expectancy. In addition to formalizing the concept and measurement of the VSL and presenting representative estimates for the United States and other countries, our Encyclopedia selection addresses the most important questions concerning the nuances that are of interest to researchers and policymakers.
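The definition as a tradeoff rate can be illustrated with the standard textbook calculation (the wage and risk figures below are illustrative, not estimates):

```python
# Suppose each of 10,000 workers accepts an extra $1,000 per year in wages
# to bear an additional annual fatality risk of 1 in 10,000, i.e., one
# expected death per year in the group.
delta_wage = 1_000           # compensating wage differential per worker
delta_risk = 1 / 10_000      # additional annual fatality risk per worker

# The VSL is the rate at which money trades off against fatality risk:
vsl = delta_wage / delta_risk    # $10 million per statistical life
```

Collectively the group receives $10 million per expected fatality, which is why such market tradeoffs are read as the population's willingness to pay for risk reduction.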

Article

High-dimensional dynamic factor models have their origin in macroeconomics, more specifically in empirical research on business cycles. The central idea, going back to the work of Burns and Mitchell in the 1940s, is that the fluctuations of all the macro and sectoral variables in the economy are driven by a “reference cycle,” that is, a one-dimensional latent cause of variation. After a fairly long process of generalization and formalization, the literature settled at the beginning of the 2000s on a model in which (a) both $n$, the number of variables in the data set, and $T$, the number of observations for each variable, may be large; and (b) all the variables in the data set depend dynamically on a fixed number of common shocks, independent of $n$, plus variable-specific, usually called idiosyncratic, components. The structure of the model can be exemplified as follows:
$$x_{it} = \alpha_i u_t + \beta_i u_{t-1} + \xi_{it}, \qquad i = 1, \ldots, n, \quad t = 1, \ldots, T, \tag{$*$}$$
where the observable variables $x_{it}$ are driven by the white noise $u_t$, which is common to all the variables (the common shock), and by the idiosyncratic component $\xi_{it}$. The common shock $u_t$ is orthogonal to the idiosyncratic components $\xi_{it}$, and the idiosyncratic components are mutually orthogonal (or weakly correlated). Last, the variations of the common shock $u_t$ affect the variable $x_{it}$ dynamically, that is, through the lag polynomial $\alpha_i + \beta_i L$. Asymptotic results for high-dimensional factor models, consistency of estimators of the common shocks in particular, are obtained for both $n$ and $T$ tending to infinity. The time-domain approach to these factor models is based on the transformation of dynamic equations into static representations. For example, equation $(*)$ becomes $x_{it} = \alpha_i F_{1t} + \beta_i F_{2t} + \xi_{it}$, with $F_{1t} = u_t$ and $F_{2t} = u_{t-1}$.
Instead of the dynamic equation $(*)$ there is now a static equation, while instead of the white noise $u_t$ there are now two factors, also called static factors, which are dynamically linked: $F_{1t} = u_t$, $F_{2t} = F_{1,t-1}$. This transformation into a static representation, whose general form is $x_{it} = \lambda_{i1} F_{1t} + \cdots + \lambda_{ir} F_{rt} + \xi_{it}$, is extremely convenient for estimation and forecasting of high-dimensional dynamic factor models. In particular, the factors $F_{jt}$ and the loadings $\lambda_{ij}$ can be consistently estimated from the principal components of the observable variables $x_{it}$. Assumptions allowing consistent estimation of the factors and loadings are discussed in detail. Moreover, it is argued that in general the vector of the factors is singular; that is, it is driven by a number of shocks smaller than its dimension. This fact has very important consequences. In particular, singularity implies that the fundamentalness problem, which is hard to solve in structural vector autoregressive (VAR) analysis of macroeconomic aggregates, disappears when the latter are studied as part of a high-dimensional dynamic factor model.
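A minimal simulation sketch of principal-components estimation in the one-factor static case (the sample sizes and distributions below are assumptions chosen for illustration): the leading left singular vector of the data matrix recovers the common factor, up to sign and scale, as n and T grow.

```python
import numpy as np

rng = np.random.default_rng(42)
n, T = 100, 200

# Static one-factor representation: x_it = lambda_i * F_t + xi_it
F = rng.normal(size=T)                             # common factor
lam = rng.normal(size=n)                           # loadings
X = np.outer(F, lam) + rng.normal(size=(T, n))     # T x n panel of observables

# Principal components: the leading left singular vector estimates the factor
U, s, Vt = np.linalg.svd(X, full_matrices=False)
F_hat = U[:, 0] * s[0] / np.sqrt(n)                # rescaled leading component

corr = abs(np.corrcoef(F, F_hat)[0, 1])            # close to 1 for large n, T
```

The sign and scale indeterminacy is why the estimated factor is compared to the true one through its absolute correlation rather than pointwise.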

Article

Marjon van der Pol and Alastair Irvine

The interest in eliciting time preferences for health has increased rapidly since the early 1990s. It has two main sources: a concern over the appropriate methods for taking timing into account in economic evaluations, and a desire to obtain a better understanding of individual health and healthcare behaviors. The literature on empirical time preferences for health has developed innovative elicitation methods in response to specific challenges that are due to the special nature of health. The health domain has also shown a willingness to explore a wider range of underlying models compared to the monetary domain. Consideration of time preferences for health raises a number of questions. Are time preferences for health similar to those for money? What are the additional challenges when measuring time preferences for health? How do individuals in time preference for health experiments make decisions? Is it possible or necessary to incentivize time preference for health experiments?

Article

Mostafa Beshkar and Eric Bond

International trade agreements have played a significant role in the reduction of trade barriers that has taken place since the end of World War II. One objective of the theoretical literature on trade agreements is to address the question of why bilateral and multilateral trade agreements, rather than simple unilateral actions by individual countries, have been required to reduce trade barriers. The predominant explanation has been the terms of trade theory, which argues that unilateral tariff policies lead to a prisoner’s dilemma due to the negative effect of a country’s tariffs on its trading partners. Reciprocal tariff reductions through a trade agreement are required to obtain tariff reductions that improve on the noncooperative equilibrium. An alternative explanation, the commitment theory of trade agreements, focuses on the use of external enforcement under a trade agreement to discipline domestic politics. A second objective of the theoretical literature has been to understand the design of trade agreements. Insights from contract theory are used to study various flexibility mechanisms that are embodied in trade agreements. These mechanisms include contingent protection measures such as safeguards and antidumping, and unilateral flexibility through tariff overhang. The literature also addresses the enforcement of agreements in the absence of an external enforcement mechanism. Theories of the dispute settlement process of the WTO portray it as an institution with an informational role that facilitates coordination among parties with incomplete information about the states of the world and the nature of the actions taken by each signatory. Finally, the literature examines whether the ability to form preferential trade agreements serves as a stumbling block or a building block to multilateral liberalization.
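The terms-of-trade prisoner’s dilemma mentioned above can be made concrete with a tiny two-country tariff game. The payoff numbers below are invented purely for illustration: each country picks a Low or High tariff, a high tariff shifts the terms of trade in the chooser’s favor at the partner’s expense, and High ends up a dominant strategy even though (Low, Low) is better for both, which is exactly the gap a reciprocal agreement closes.

```python
# Illustrative payoff matrix for a two-country tariff game.
# Entries are (row-country payoff, column-country payoff);
# the numbers are invented for illustration only.
payoffs = {
    ("Low", "Low"):   (10, 10),   # cooperative, agreement-style outcome
    ("Low", "High"):  (2, 12),    # partner gains via terms of trade
    ("High", "Low"):  (12, 2),
    ("High", "High"): (5, 5),     # noncooperative Nash equilibrium
}

def best_reply(opponent):
    # Row country's best reply to a given opponent strategy.
    return max(["Low", "High"], key=lambda s: payoffs[(s, opponent)][0])

# High is the best reply to both Low and High: a dominant strategy,
# so (High, High) is the unique Nash equilibrium despite being
# Pareto-dominated by (Low, Low).
print(best_reply("Low"), best_reply("High"))  # High High
```

By symmetry the same reasoning applies to the column country, which is why unilateral policy gets stuck at (High, High) and reciprocal tariff reductions are needed to reach the cooperative outcome.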

Article

Hengjie Ai, Murray Z. Frank, and Ali Sanati

The trade-off theory of capital structure says that corporate leverage is determined by balancing the tax-saving benefits of debt against the deadweight costs of bankruptcy. The theory was developed in the early 1970s, and despite a number of important challenges it remains the dominant theory of corporate capital structure. The theory predicts that corporate debt increases in the risk-free interest rate and in the generosity of the tax code’s interest deductions, and decreases in the deadweight losses of bankruptcy. The equilibrium price of debt is decreasing in the tax benefits and increasing in the risk-free interest rate. Dynamic trade-off models can be broadly divided into two categories: models that build capital structure into a real options framework with exogenous investment, and models with endogenous investment. These models are relatively flexible and are generally able to match a range of firm decisions and features of the data, including the typical leverage ratios of real firms and related data moments. The literature has essentially resolved empirical challenges to the theory based on the low leverage puzzle, the profits-leverage puzzle, and the speed of target adjustment. As predicted, interest rates and market conditions matter for leverage. There is some evidence of the predicted tax rate and bankruptcy code effects, but it remains challenging to establish tight causal links. Overall, the theory provides a reasonable basis on which to build understanding of capital structure.
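The comparative statics described above (more generous tax shields raise debt, costlier bankruptcy lowers it) can be sketched in a one-period toy model. Everything here is an invented illustration, not a model from the literature: the tax shield is linear in debt, the default probability rises quadratically with leverage, and the firm picks the debt level maximizing the net benefit on a grid.

```python
# Toy one-period trade-off: choose debt D to maximize
#   tax shield  t * r * D   minus   expected bankruptcy cost  p(D) * C,
# with p(D) = (D / assets)^2. Functional forms and parameters are
# invented solely to illustrate the comparative statics.
def optimal_debt(tax_rate, bankruptcy_cost, r=0.05, assets=100.0, step=0.5):
    def net_benefit(D):
        tax_shield = tax_rate * r * D               # value of interest deduction
        p_default = (D / assets) ** 2               # default prob. rises with leverage
        return tax_shield - p_default * bankruptcy_cost
    grid = [i * step for i in range(int(assets / step) + 1)]
    return max(grid, key=net_benefit)

low_tax  = optimal_debt(tax_rate=0.20, bankruptcy_cost=30)
high_tax = optimal_debt(tax_rate=0.35, bankruptcy_cost=30)
costly   = optimal_debt(tax_rate=0.35, bankruptcy_cost=60)

# More generous tax shield -> more debt; costlier bankruptcy -> less debt.
print(high_tax > low_tax, costly < high_tax)  # True True
```

With these functional forms the interior optimum is $D^* = t\,r\,A^2 / (2C)$, so the sketch simply traces out the predicted signs: debt rising in the tax benefit and falling in the deadweight bankruptcy cost.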