
Article

Mostafa Beshkar and Eric Bond

International trade agreements have played a significant role in the reduction of trade barriers that has taken place since the end of World War II. One objective of the theoretical literature on trade agreements is to address the question of why bilateral and multilateral trade agreements, rather than simple unilateral actions by individual countries, have been required to reduce trade barriers. The predominant explanation has been the terms of trade theory, which argues that unilateral tariff policies lead to a prisoner’s dilemma due to the negative effect of a country’s tariffs on its trading partners. Reciprocal tariff reductions through a trade agreement are therefore required to improve on the noncooperative equilibrium. An alternative explanation, the commitment theory of trade agreements, focuses on the use of external enforcement under a trade agreement to discipline domestic politics. A second objective of the theoretical literature has been to understand the design of trade agreements. Insights from contract theory are used to study various flexibility mechanisms that are embodied in trade agreements. These mechanisms include contingent protection measures such as safeguards and antidumping, and unilateral flexibility through tariff overhang. The literature also addresses the enforcement of agreements in the absence of an external enforcement mechanism. Theories of the dispute settlement process of the WTO portray it as an institution with an informational role that facilitates coordination among parties with incomplete information about the states of the world and the nature of the actions taken by each signatory. Finally, the literature examines whether the ability to form preferential trade agreements serves as a stumbling block or a building block to multilateral liberalization.

Article

International transactions are riskier than domestic transactions for several reasons, including, but not limited to, geographical distance, longer shipping times, greater informational frictions, contract enforcement, and dispute resolution problems. Such risks stem, fundamentally, from a timing mismatch between payment and delivery in business transactions. Trade finance plays a critical role in bridging the gap, thereby overcoming greater risks inherent in international trade. It is thus even described as the lifeline of international trade, because more than 90% of international transactions involve some form of credit, insurance, or guarantee. Despite its importance in international trade, however, it was not until the great trade collapse in 2008–2009 that trade finance came to the attention of academic researchers. An emerging literature on trade finance has contributed to providing answers to questions such as: Who is responsible for financing transactions, and, hence, who would need liquidity support most to sustain international trade? This is particularly relevant in developing countries, where the lack of trade finance is often identified as the main hindrance to trade, and in times of financial crisis, when the overall drying up of trade finance could lead to a global collapse in trade.

Article

Hengjie Ai, Murray Z. Frank, and Ali Sanati

The trade-off theory of capital structure says that corporate leverage is determined by balancing the tax-saving benefits of debt against the deadweight costs of bankruptcy. The theory was developed in the early 1970s and, despite a number of important challenges, it remains the dominant theory of corporate capital structure. The theory predicts that corporate debt increases with the risk-free interest rate and with the generosity of the interest deductions allowed by the tax code, and decreases with the deadweight losses of bankruptcy. The equilibrium price of debt is decreasing in the tax benefits and increasing in the risk-free interest rate. Dynamic trade-off models can be broadly divided into two categories: models that build capital structure into a real options framework with exogenous investment, and models with endogenous investment. These models are relatively flexible and are generally able to match a range of firm decisions and features of the data, including the typical leverage ratios of real firms and related data moments. The literature has essentially resolved empirical challenges to the theory based on the low leverage puzzle, the profits-leverage puzzle, and the speed of target adjustment. As predicted, interest rates and market conditions matter for leverage. There is some evidence of the predicted tax rate and bankruptcy code effects, but it remains challenging to establish tight causal links. Overall, the theory provides a reasonable basis on which to build understanding of capital structure.
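The static version of this trade-off can be sketched as follows (the notation is illustrative, not drawn from any specific model in this literature): levered firm value adds the debt tax shield to unlevered value and subtracts expected bankruptcy costs, and the optimal debt level equates the marginal tax benefit with the marginal expected bankruptcy cost.

```latex
% Static trade-off sketch (illustrative notation)
% V_U: unlevered firm value, \tau: corporate tax rate, D: debt level
% p(D): bankruptcy probability, increasing in D; \phi: deadweight bankruptcy cost
V_L(D) = V_U + \tau D - p(D)\,\phi
% Interior optimum D^*: marginal tax benefit = marginal expected bankruptcy cost
\tau = p'(D^{*})\,\phi
```

The comparative statics in the abstract follow directly: a more generous tax treatment raises \(\tau\) and hence \(D^{*}\), while larger deadweight losses \(\phi\) lower it.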

Article

When international trade increases, whether because a country lowers its trade barriers, enters a trade agreement, or faces a productivity surge in a trading partner, the surge of imports can cause dislocation and lowered incomes for workers in the import-competing industry or the surrounding local economy. Trade economists long used static approaches to analyze these effects on workers, assuming either that workers can adjust instantly and costlessly, or (less often) that they cannot adjust at all. In practice, however, workers incur costs to adjust, and the adjustment takes time. An explosion of research, mostly since about 2008, has explored dynamic worker adjustment through change of industry, change of occupation, change of location, change of labor-force participation, adjustment to change in income, and change in marital status or family structure. Some of these studies estimate rich structural models of worker behavior, allowing for such factors as sector-specific or occupation-specific human capital to accrue over time, which can be imperfectly transferable across industries or occupations. Some allow for unobserved heterogeneity across workers, which creates substantial technical challenges. Some allow for life-cycle effects, where adjustment costs vary with age, and others allow adjustment costs to vary by gender. Others simplify the worker’s problem to embed it in a rich general equilibrium framework. Some key results include: (a) Switching either industry or occupation tends to be very costly, typically more than a year’s average wages. (b) Given that moving costs change over time and workers are able to time their moves, realized costs are much lower, but the result is gradual adjustment, with a move to a new steady state that typically takes several years. (c) Idiosyncratic shocks to moving costs are quantitatively important, so that otherwise-identical workers often are seen moving in opposite directions at the same time. These shocks create a large role for option value, so that even if real wages in an industry are permanently lowered by a trade shock, a worker initially in that industry can benefit. This softens or reverses estimates of worker losses from, for example, the China shock. (d) Switching costs vary greatly by occupation, and can be very different for blue-collar and white-collar workers, for young and old workers, and for men and women. (e) Simple theories suggest that a shock results in wage overshooting, where the gap in wages between highly affected industries and others opens up and then shrinks over time, but evidence from Brazil shows that at least in some cases the wage differentials widen over time. (f) Some workers adjust through family changes. Evidence from Denmark shows that some women workers hit by import shocks withdraw from the labor market at least temporarily to marry and have children, unlike men. Promising directions at the frontier include more work on longitudinal data; the role of capital adjustment; savings, risk aversion, and the adjustment of trade deficits; responses in educational attainment; and much more exploration of the effects on family.

Article

Gilles Duranton and Anthony J. Venables

Urbanization is a central challenge of our times. At its core, it is an urban development challenge that requires addressing transportation and housing in cities. Transport improvements can reduce travel times and improve the spatial reach of urban dwellers. But these improvements may be crowded out by latent demand for travel and may lead to worse congestion, pollution, and other negative externalities associated with urban traffic. To evaluate the effects of transport improvements, direct travel effects must be measured. In addition, an improvement in traffic conditions in one area may spill over to other areas. Firms and residents may also relocate, so economic growth close to a transport improvement may just result from a displacement of economic activity from other areas. Conversely, better accessibility is expected to foster agglomeration effects and increase productivity. Valuing these changes is difficult, as it requires being able to quantify many externalities such as congestion delays, scheduling gains, and greater job accessibility. Housing policies present different challenges. More fundamental policies seek to enable housing construction by offering more secure property rights, up-to-date land registries, and competent land-use planning—all complex endeavors and all necessary. Other housing policies rely on heavy government interventions to provide housing directly to large segments of the urban population. These policies often fail because governments do not link housing provision with job accessibility and appropriate land-use planning. Housing is also an expensive asset that requires significant initial funding, while credit constraints abound in the urbanizing world. Policymakers also need to choose between small improvements to extremely low-quality informal housing, retrofitting modern housing in already-built urban areas, or urban expansion. All these options involve sharp trade-offs, subtle induced effects, and complex interactions with transport. All these effects are difficult to measure and challenging to value.

Article

Urban sprawl in popular sources is vaguely defined and largely misunderstood, having acquired a pejorative meaning. Economists should ask whether particular patterns of urban land use are an outcome of an efficient allocation of resources. Theoretical economic modeling has been used to show that more, not less, sprawl often improves economic efficiency. More sprawl can cause a reduction in traffic congestion. Job suburbanization generally increases sprawl but improves economic efficiency. Limiting sprawl in some cities by direct control of land use can increase sprawl in other cities, and aggregate sprawl in all cities combined can increase. That urban population growth causes more urban sprawl is verified by empirically implemented general equilibrium models, but—contrary to common belief—the increase in travel times that accompanies such sprawl is very modest. Urban growth boundaries to limit urban sprawl cause large deadweight losses by raising land prices and should be seen as socially intolerable, but often are not. It is good policy to use corrective taxation for negative externalities such as traffic congestion and to implement property tax reforms to reduce or eliminate distortive taxation. Under various circumstances such fiscal measures improve welfare by increasing urban sprawl. The flight of the rich from American central cities, large-lot zoning in the suburbs, and the financing of schools by property tax revenues are seen as causes of sprawl. There is also evidence that more heterogeneity among consumers and more unequal income distributions cause more urban sprawl. The connections between agglomeration economies and urban sprawl are less clear. The emerging technology of autonomous vehicles can have major implications for the future of urban spatial structure and is likely to add to sprawl.

Article

Henrik Andersson, Arne Risa Hole, and Mikael Svensson

Many public policies and individual actions have consequences for population health. To understand whether a (costly) policy undertaken to improve population health is a wise use of resources, analysts can use economic evaluation methods to assess the costs and benefits. To do this, it is necessary to evaluate the costs and benefits using the same metric, and for convenience, a monetary measure is commonly used. It is well established that money measures of a reduction in health risks can be theoretically derived using the willingness-to-pay concept. However, because a market price for health risks is not available, analysts have to rely on analytical techniques to estimate the willingness to pay using revealed- or stated-preference methods. Revealed-preference methods infer willingness to pay based on individuals’ actual behavior in markets related to health risks, and they include such approaches as hedonic pricing techniques. Stated-preference methods use a hypothetical market scenario in which respondents make trade-offs between wealth and health risks. Using, for example, a random utility framework, it is possible to directly estimate individuals’ willingness to pay by analyzing the trade-offs they make in the hypothetical scenario. Stated-preference methods are commonly applied using contingent valuation or discrete choice experiment techniques. Despite criticism and the shortcomings of both the revealed- and stated-preference methods, substantial progress has been made since the 1990s in using both approaches to estimate the willingness to pay for health-risk reductions.
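In a simple linear random utility model (a sketch only; the exact specification varies across stated-preference studies), the willingness to pay for a risk reduction falls out as a ratio of estimated coefficients:

```latex
% Utility of respondent i from alternative j in a stated-preference experiment
% w_i: wealth, c_j: cost of alternative j, r_j: health risk under alternative j
U_{ij} = \beta_c (w_i - c_j) + \beta_r r_j + \varepsilon_{ij}
% Trading cost against risk along an indifference curve (dU = 0):
\left.\frac{dc}{dr}\right|_{dU=0} = \frac{\beta_r}{\beta_c},
\qquad
\mathrm{WTP} = -\frac{\beta_r}{\beta_c} > 0 \quad (\text{since } \beta_r < 0,\ \beta_c > 0)
```

With \(\beta_c\) and \(\beta_r\) estimated from respondents' choices (e.g., by conditional logit), the coefficient ratio gives the marginal willingness to pay for a risk reduction, which scales up to measures such as the value of a statistical life.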

Article

Since the turn of the 21st century, an abundant body of research has demonstrated that teachers meaningfully contribute to their students’ learning but that teachers vary widely in their effectiveness. Measures of teachers’ “value added” to student achievement have become common, and sometimes controversial, tools for researchers and policymakers hoping to identify and differentiate teachers’ individual contributions to student learning. Value-added measures aim to identify how much more a given teacher’s students learn than what would be expected based on how much other, similar students learn with other teachers. The question of how to measure value added without substantial measurement error and without incorrectly capturing other factors outside of teachers’ control is complex, and a fully satisfactory answer is sometimes elusive; the advantages and drawbacks of any particular method of estimating teachers’ value added depend on the specific context and purpose for its use. Traditionally, researchers have calculated value-added scores only for the subset of teachers with students in tested grades and subjects—a relatively small proportion of the teaching force, in a narrow set of the many domains on which teachers may influence their students. More recently, researchers have created value-added estimates for a range of other student outcomes, including measures of students’ engagement and social-emotional learning such as attendance and behavioral incidents, which may be available for more teachers. Overall, teacher value-added measures can be useful tools for understanding and improving teaching and learning, but they have substantial limitations for many uses and contexts.
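A common empirical sketch of such a measure (the notation is illustrative; actual specifications differ across studies) regresses achievement on prior achievement and controls, and treats the teacher effect as the systematic deviation of a teacher's students from their predicted scores:

```latex
% Achievement of student i taught by teacher j(i,t) in year t
% A_{i,t-1}: lagged achievement, X_{it}: student and classroom controls
A_{it} = \lambda A_{i,t-1} + X_{it}'\beta + \mu_{j(i,t)} + \varepsilon_{it}
% \mu_j is teacher j's value added: the average deviation of her students'
% achievement from what the lagged score and controls predict.
% Estimates of \mu_j are typically shrunken (empirical Bayes) toward zero
% to reduce the influence of measurement error in small classes.
```

The abstract's central measurement question is visible here: any factor correlated with teacher assignment but omitted from \(X_{it}\) loads into \(\mu_j\).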

Article

Marisa Miraldo, Katharina Hauck, Antoine Vernet, and Ana Wheelock

Major medical innovations have greatly increased the efficacy of treatments, improved patient outcomes, and often reduced the cost of medical care. However, innovations do not diffuse uniformly across and within health systems. Due to the high complexity of medical treatment decisions, variations in clinical practice are inherent to healthcare delivery, regardless of technological advances, new ways of working, funding, and burden of disease. In this article we conduct a narrative literature review to identify and discuss peer-reviewed articles presenting a theoretical framework or empirical evidence of the factors associated with the adoption of innovation and with variation in clinical practice. We find that variation in innovation adoption and medical practice is associated with multiple factors. First, patients’ characteristics, including medical needs and genetic factors, can crucially affect clinical outcomes and the efficacy of treatments. Moreover, differences in patients’ preferences can be an important source of variation. Medical treatments may need to take such patient characteristics into account if they are to deliver optimal outcomes, and consequently, resulting practice variations should be considered warranted and in the best interests of patients. However, socioeconomic or demographic characteristics, such as ethnicity, income, or gender, are often not considered legitimate grounds for differential treatment. Second, physician characteristics—such as socioeconomic profile, training, and work-related characteristics—are equally an influential component of practice variation. In particular, so-called “practice style” and physicians’ attitudes toward risk and innovation adoption are considered a major source of practice variation, but have proven difficult to investigate empirically. Lastly, features of healthcare systems—notably public coverage of healthcare expenditure, cost-based reimbursement of providers, and service-delivery organization—are generally associated with higher utilization rates and adoption of innovation. Research documents some successful strategies aimed at reducing variation in medical decision-making, such as the use of decision aids, data feedback, benchmarking, clinical practice guidelines, blinded report cards, and pay for performance. But despite these advances, there is uneven diffusion of new technologies and procedures, with potentially severe adverse efficiency and equity implications.

Article

David Wolf and H. Allen Klaiber

The value of a differentiated product is simply the sum of its parts. This concept is easily observed in housing markets, where the price of a home is determined by the underlying bundle of attributes that define it and by the price households are willing to pay for each attribute. These prices are referred to as implicit prices because their value is indirectly revealed through the price of another product (typically a home), and they are of interest because they reveal the value of goods, such as nearby public amenities, that would otherwise remain unknown. This concept was first formalized into a tractable theoretical framework by Rosen, and is known as the hedonic pricing method. The two-stage hedonic method requires the researcher to map housing attributes into housing price using an equilibrium price function. Information recovered from the first stage is then used to recover inverse demand functions for nonmarket goods in the second stage, which are required for nonmarginal welfare evaluation. Researchers have rarely implemented the second stage, however, due to limited data availability, specification concerns, and the inability to correct for simultaneity bias between price and quality. As policies increasingly seek to deliver large, nonmarginal changes in public goods, the need to estimate the hedonic second stage is becoming more pressing. Greater effort therefore needs to be made to establish a set of best practices for the second stage, many of which can be developed using methods established in the extensive first-stage literature.
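Rosen's two stages can be sketched as follows (illustrative notation, not a specific estimator from this literature): the first stage recovers implicit attribute prices from the equilibrium price function, and the second stage uses those prices to trace out inverse demand.

```latex
% Stage 1: estimate the equilibrium hedonic price function over attributes z
P = P(z_1, \ldots, z_K),
\qquad
\hat{p}_k(z) = \frac{\partial \hat{P}(z)}{\partial z_k}
% \hat{p}_k is the implicit (marginal) price of attribute k,
% e.g., proximity to a park or a unit of air quality
% Stage 2: relate implicit prices to quantities and demand shifters
% to recover the inverse demand for z_k (y: income, s: other shifters)
\hat{p}_k = f(z_k, y, s)
```

The simultaneity problem the abstract flags arises in stage 2: \(z_k\) and \(\hat{p}_k\) are jointly determined in equilibrium, so credible demand estimates require instruments or other identification strategies.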