Article
360 Thinking in Local Governance Advances Sustainability, Economic Prosperity, and Equity
Lori DiPrete Brown
Local governance is a key focal point for achieving the United Nations Sustainable Development Goals (SDGs). National and global initiatives encourage SDG governance by promoting the overall SDG framework, targets, and indicators and by providing data, rankings, and visualizations of the performance of nations, states, and selected cities. Soon after the SDGs were adopted in 2015, efforts turned toward localization—that is, a focus on local governance as the engine for progress and innovation, which engendered many efforts to develop indicators to measure sustainability. In addition to this emphasis on measurement strategies, the use of the SDGs as a holistic and integrated framework that is essential for improvement, implementation, and innovation began to emerge. Despite challenges to SDG-based local governance, promising strategies that exemplify “SDG 360 Thinking” have emerged. These approaches reflect practical insights related to political incentives, local relevance, and simplicity or feasibility. They address key aspects of the planning and implementation cycle and echo evidence-based approaches deriving from systems thinking and implementation science. SDG 360 Thinking uses a holistic, systematic approach to focus on identification of co-benefits; reduction of harm, waste, and error; and equity trade-offs. The clarity of purpose, systematic approach, and revelatory power of SDG 360 Thinking, combined with a practical, inclusive, and robust economics, offer the promise of enabling local governments to realize the potential of the SDGs.
Article
Time Consistent Policies and Quasi-Hyperbolic Discounting
Łukasz Balbus, Kevin Reffett, and Łukasz Woźny
In dynamic choice models, dynamic inconsistency of preferences is a situation in which a decision-maker’s preferences change over time. Optimal plans under such preferences are time inconsistent if a decision-maker has no incentive to follow the (previously chosen) optimal plan in the future. A typical example of dynamic inconsistency is the case of present-bias preferences, where there is a repeated preference for smaller present rewards over larger future rewards.
The study of dynamic choice by decision-makers who possess dynamically inconsistent preferences has long been a focal point of work in behavioral economics. The experimental and empirical literatures both point to the importance of various forms of present bias. The canonical model of dynamically inconsistent preferences exhibiting present bias is the quasi-hyperbolic discounting model: a dynamic choice model in which standard exponential discounting is modified by an impatience parameter that applies an extra discount to all future periods, beginning with the one immediately succeeding the present.
A central problem in the analytical study of decision-makers who possess dynamically inconsistent preferences is how to model their choices in sequential decision problems. One general answer is to characterize and compute (if they exist) constrained optimal plans that are optimal among the set of time consistent sequential plans, that is, the feasible plans that will actually be followed, and not reoptimized, by agents whose preferences change over time. Such plans are called time consistent plans or policies (TCPs).
Results on the existence, uniqueness, and characterization of stationary, or time-invariant, TCPs in a class of consumption-savings problems with quasi-hyperbolic discounting are presented, along with some discussion of how to compute TCPs in extensions of the model; the generalized Bellman equation operator approach is central throughout. This approach provides sufficient conditions for the existence of time consistent solutions and facilitates their computation.
Importantly, the generalized Bellman approach can also be related to a common first-order approach in the literature known as the generalized Euler equation approach. By constructing sufficient conditions on the primitives of the model for continuously differentiable TCPs, one obtains conditions under which a generalized Euler equation approach is valid.
There are other important facets of TCPs, including sufficient conditions for the existence of monotone comparative statics in interesting parameters of the decision environment, as well as generalizations of the generalized Bellman approach that allow for unbounded returns and general certainty equivalents. In addition, the case of a multidimensional state space and a general self-generation method for characterizing nonstationary TCPs are considered.
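To make the quasi-hyperbolic structure concrete, the following minimal sketch computes a stationary TCP in a simple consumption-savings problem. It is an illustrative discretized iteration, not the generalized Bellman operator of the article; the log utility, the wealth grid, and all parameter values are assumptions chosen for exposition. Today's self maximizes u(c) + βδV(w′) against the continuation value V generated by future selves, and V is then updated with the full discount factor δ.

```python
import numpy as np

# Illustrative stationary TCP computation under beta-delta discounting.
# All parameter values, the wealth grid, and log utility are assumptions.
beta, delta, R = 0.7, 0.95, 1.02    # present bias, long-run discount, gross return
w = np.linspace(0.1, 10.0, 400)     # wealth grid (also the grid for next-period wealth)

def u(c):
    # log utility; infeasible (nonpositive) consumption gets -inf
    return np.where(c > 0, np.log(np.maximum(c, 1e-12)), -np.inf)

c = w[:, None] - w[None, :] / R     # c[i, j]: consume out of w[i], leaving w[j] next period
V = u(w)                            # initial guess for the continuation value
for _ in range(2000):
    # Today's self: max over next-period wealth of u(c) + beta*delta*V(w'),
    # taking future selves' behavior (summarized by V) as given.
    j = np.argmax(u(c) + beta * delta * V[None, :], axis=1)
    # The continuation value uses the full delta: future selves reapply
    # the present-bias factor beta themselves when their turn comes.
    V_new = u(c[np.arange(len(w)), j]) + delta * V[j]
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

policy = c[np.arange(len(w)), j]    # stationary time-consistent consumption policy
```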
Article
Time-Domain Approach in High-Dimensional Dynamic Factor Models
Marco Lippi
High-dimensional dynamic factor models have their origin in macroeconomics, more specifically in empirical research on business cycles. The central idea, going back to the work of Burns and Mitchell in the 1940s, is that the fluctuations of all the macro and sectoral variables in the economy are driven by a “reference cycle,” that is, a one-dimensional latent cause of variation. After a fairly long process of generalization and formalization, the literature settled at the beginning of the 2000s on a model in which (a) both n, the number of variables in the data set, and T, the number of observations for each variable, may be large; and (b) all the variables in the data set depend dynamically on a fixed number of common shocks, independent of n, plus variable-specific components, usually called idiosyncratic. The structure of the model can be exemplified as follows:
(*)  $x_{it} = \alpha_i u_t + \beta_i u_{t-1} + \xi_{it}, \qquad i = 1, \ldots, n, \quad t = 1, \ldots, T,$

where the observable variables $x_{it}$ are driven by the white noise $u_t$, the common shock shared by all the variables, and by the idiosyncratic component $\xi_{it}$. The common shock $u_t$ is orthogonal to the idiosyncratic components $\xi_{it}$, and the idiosyncratic components are mutually orthogonal (or weakly correlated). Last, the variations of the common shock $u_t$ affect the variable $x_{it}$ dynamically, that is, through the lag polynomial $\alpha_i + \beta_i L$. Asymptotic results for high-dimensional factor models, consistency of estimators of the common shocks in particular, are obtained with both $n$ and $T$ tending to infinity.
The time-domain approach to these factor models is based on the transformation of dynamic equations into static representations. For example, equation (*) becomes

$x_{it} = \alpha_i F_{1t} + \beta_i F_{2t} + \xi_{it}, \qquad F_{1t} = u_t, \quad F_{2t} = u_{t-1}.$

Instead of the dynamic equation (*) there is now a static equation, while instead of the white noise $u_t$ there are now two factors, also called static factors, which are dynamically linked: $F_{1t} = u_t$, $F_{2t} = F_{1,t-1}$. This transformation into a static representation, whose general form is

$x_{it} = \lambda_{i1} F_{1t} + \cdots + \lambda_{ir} F_{rt} + \xi_{it},$

is extremely convenient for estimation and forecasting of high-dimensional dynamic factor models. In particular, the factors $F_{jt}$ and the loadings $\lambda_{ij}$ can be consistently estimated from the principal components of the observable variables $x_{it}$.
Assumptions allowing consistent estimation of the factors and loadings are discussed in detail. Moreover, it is argued that in general the vector of the factors is singular; that is, it is driven by a number of shocks smaller than its dimension. This fact has very important consequences. In particular, singularity implies that the fundamentalness problem, which is hard to solve in structural vector autoregressive (VAR) analysis of macroeconomic aggregates, disappears when the latter are studied as part of a high-dimensional dynamic factor model.
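As a concrete illustration of the static-representation approach, the following sketch simulates model (*) and recovers the two static factors by principal components. The sample sizes, loadings, and noise scale are illustrative assumptions, and the principal-components step is the standard one rather than any particular estimator from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 100, 500                                  # cross-section and time dimensions
u = rng.standard_normal(T + 1)                   # common white-noise shock u_t
alpha, beta = rng.standard_normal(n), rng.standard_normal(n)
xi = 0.5 * rng.standard_normal((T, n))           # idiosyncratic components

# x_{it} = alpha_i u_t + beta_i u_{t-1} + xi_{it}: the dynamic equation (*)
X = np.outer(u[1:], alpha) + np.outer(u[:-1], beta) + xi

# Static representation: F_{1t} = u_t, F_{2t} = u_{t-1}; estimate the two
# static factors by the first r = 2 principal components of X.
Xc = X - X.mean(axis=0)
eigval, eigvec = np.linalg.eigh(Xc.T @ Xc / T)   # spectral decomposition
loadings_hat = eigvec[:, -2:]                    # eigenvectors of the 2 largest eigenvalues
F_hat = Xc @ loadings_hat                        # estimated static factors (up to rotation)

# Sanity check: the span of F_hat should approximate span{u_t, u_{t-1}}.
coef, *_ = np.linalg.lstsq(F_hat, u[1:], rcond=None)
print(1 - np.var(u[1:] - F_hat @ coef) / np.var(u[1:]))   # R^2 close to 1
```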
Article
Time Preferences for Health
Marjon van der Pol and Alastair Irvine
The interest in eliciting time preferences for health has increased rapidly since the early 1990s. It has two main sources: a concern over the appropriate methods for taking timing into account in economic evaluations, and a desire to obtain a better understanding of individual health and healthcare behaviors. The literature on empirical time preferences for health has developed innovative elicitation methods in response to specific challenges that are due to the special nature of health. The health domain has also shown a willingness to explore a wider range of underlying models compared to the monetary domain. Consideration of time preferences for health raises a number of questions. Are time preferences for health similar to those for money? What are the additional challenges when measuring time preferences for health? How do individuals in time preference for health experiments make decisions? Is it possible or necessary to incentivize time preference for health experiments?
Article
Trade Agreements: Theoretical Foundations
Mostafa Beshkar and Eric Bond
International trade agreements have played a significant role in the reduction of trade barriers that has taken place since the end of World War II. One objective of the theoretical literature on trade agreements is to address the question of why bilateral and multilateral trade agreements, rather than simple unilateral actions by individual countries, have been required to reduce trade barriers. The predominant explanation has been the terms of trade theory, which argues that unilateral tariff policies lead to a prisoner’s dilemma due to the negative effect of a country’s tariffs on its trading partners. Reciprocal tariff reductions through a trade agreement are required to obtain tariff reductions that improve on the noncooperative equilibrium. An alternative explanation, the commitment theory of trade agreements, focuses on the use of external enforcement under a trade agreement to discipline domestic politics.
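A stylized two-country tariff game makes the prisoner's dilemma logic explicit. The payoff numbers below are purely illustrative assumptions, chosen only so that high tariffs are each country's dominant strategy even though mutual low tariffs are jointly better.

```python
# Stylized terms-of-trade tariff game. Payoffs (home welfare, foreign welfare)
# are illustrative assumptions, not calibrated values.
payoffs = {
    ("low", "low"):   (3, 3),   # cooperative outcome under a trade agreement
    ("low", "high"):  (1, 4),   # foreign exploits its terms-of-trade gain
    ("high", "low"):  (4, 1),
    ("high", "high"): (2, 2),   # noncooperative outcome
}
tariffs = ("low", "high")

def home_best(f):     # home's best response to foreign tariff f
    return max(tariffs, key=lambda h: payoffs[(h, f)][0])

def foreign_best(h):  # foreign's best response to home tariff h
    return max(tariffs, key=lambda f: payoffs[(h, f)][1])

# Nash equilibrium: each tariff is a best response to the other.
nash = [(h, f) for (h, f) in payoffs if home_best(f) == h and foreign_best(h) == f]
print(nash)   # [('high', 'high')]: both countries worse off than under ('low', 'low')
```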
A second objective of the theoretical literature has been to understand the design of trade agreements. Insights from contract theory are used to study various flexibility mechanisms that are embodied in trade agreements. These mechanisms include contingent protection measures such as safeguards and antidumping, and unilateral flexibility through tariff overhang. The literature also addresses the enforcement of agreements in the absence of an external enforcement mechanism. Theories of the dispute settlement process of the WTO portray it as an institution with an informational role that facilitates coordination among parties with incomplete information about the states of the world and the nature of the actions taken by each signatory. Finally, the literature examines whether the ability to form preferential trade agreements serves as a stumbling block or a building block to multilateral liberalization.
Article
Trade Finance and Payment Methods in International Trade
JaeBin Ahn
International transactions are riskier than domestic transactions for several reasons, including, but not limited to, geographical distance, longer shipping times, greater informational frictions, contract enforcement, and dispute resolution problems. Such risks stem, fundamentally, from a timing mismatch between payment and delivery in business transactions. Trade finance plays a critical role in bridging the gap, thereby overcoming greater risks inherent in international trade. It is thus even described as the lifeline of international trade, because more than 90% of international transactions involve some form of credit, insurance, or guarantee. Despite its importance in international trade, however, it was not until the great trade collapse in 2008–2009 that trade finance came to the attention of academic researchers.
An emerging literature on trade finance has contributed to providing answers to questions such as: Who is responsible for financing transactions, and, hence, who would need liquidity support most to sustain international trade? This is particularly relevant in developing countries, where the lack of trade finance is often identified as the main hindrance to trade, and in times of financial crisis, when the overall drying up of trade finance could lead to a global collapse in trade.
Article
Trade Liberalization and Informal Labor Markets
Lourenço S. Paz and Jennifer P. Poole
In recent decades, economic reforms and technological advances have profoundly altered the way employers do business—for instance, the nature of employment relationships, the skills firms demand, and the goods and services they produce and export. In many developing economies, these changes took place concurrently with a substantive rise in work outside of the formal economy. According to International Labour Organization estimates, informal employment can be as high as 88% of total employment in India, almost 50% in Brazil, and around 35% of employment in South Africa. Such informal employment is typically associated with lower wages, lower productivity, poorer working conditions, weaker employment protections, and fewer job benefits and amenities, and these informal workers are often poorer and more vulnerable than their counterparts in the formalized economy. In theoretical models, informal jobs arise as a consequence of labor market policies—like severance payments or social security contributions—that make a noncompliant informal job cheaper for the employer than a compliant formal job. Each model features a different benefit (or lack of punishment) for employing formal workers and a distinct mechanism through which international trade shocks alter the benefit-cost calculus of these types of jobs, which in turn changes the informality share. The empirical literature concerning international trade and formality largely points to an unambiguous increase in informal employment in the aftermath of increased import competition. Interestingly, increased access to foreign markets, via liberalization by major trading partners, offers strongly positive implications for formal employment opportunities, decreasing informality. Such effects are moderated by the de facto enforcement of labor regulations: expansions toward the formal economy and away from informal wage employment in the aftermath of increased access to foreign markets are smaller in areas of the country where labor regulations are strictly enforced.
Article
The Trade-Off Theory of Corporate Capital Structure
Hengjie Ai, Murray Z. Frank, and Ali Sanati
The trade-off theory of capital structure says that corporate leverage is determined by balancing the tax-saving benefits of debt against dead-weight costs of bankruptcy. The theory was developed in the early 1970s and despite a number of important challenges, it remains the dominant theory of corporate capital structure.
The theory predicts that corporate debt increases with the risk-free interest rate and with the generosity of the interest tax deductions allowed by the tax code. Debt is decreasing in the deadweight losses of bankruptcy. The equilibrium price of debt is decreasing in the tax benefits and increasing in the risk-free interest rate.
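A minimal static sketch conveys the trade-off and the comparative static on the tax rate. The functional forms for the tax shield and the default probability, and all numbers, are illustrative assumptions rather than a calibrated model.

```python
import numpy as np

# Static trade-off sketch: debt balances the interest tax shield against
# expected deadweight bankruptcy costs. Functional forms are assumptions.
C = 50.0                               # deadweight cost of bankruptcy
D = np.linspace(0.0, 100.0, 10_001)    # candidate debt levels

def net_benefit(D, tau):
    tax_shield = tau * D               # stylized PV of the interest tax shield
    p_default = (D / 100.0) ** 2       # default probability rising in leverage
    return tax_shield - p_default * C

for tau in (0.20, 0.25, 0.30, 0.35):
    D_star = D[np.argmax(net_benefit(D, tau))]
    print(f"tax rate {tau:.2f} -> optimal debt {D_star:.1f}")
# Optimal debt rises with the tax rate, the basic comparative static above.
```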
Dynamic trade-off models can be broadly divided into two categories: models that build capital structure into a real options framework with exogenous investment, and models with endogenous investment. These models are relatively flexible and are generally able to match a range of firm decisions and features of the data, including the typical leverage ratios of real firms and related data moments.
The literature has essentially resolved empirical challenges to the theory based on the low leverage puzzle, profits-leverage puzzle, and speed of target adjustment. As predicted, interest rates and market conditions matter for leverage. There is some evidence of the predicted tax rate and bankruptcy code effects, but it remains challenging to establish tight causal links.
Overall, the theory provides a reasonable basis on which to build understanding of capital structure.
Article
Trade Shocks and Labor-Market Adjustment
John McLaren
When international trade increases, whether because a country lowers its trade barriers, signs a trade agreement, or sees a productivity surge in a trade partner, the surge of imports can cause dislocation and lowered incomes for workers in the import-competing industry or the surrounding local economy. Trade economists long used static approaches to analyze these effects on workers, assuming either that workers can adjust instantly and costlessly, or (less often) that they cannot adjust at all. In practice, however, workers incur costs to adjust, and the adjustment takes time. An explosion of research, mostly since about 2008, has explored dynamic worker adjustment through change of industry, change of occupation, change of location, change of labor-force participation, adjustment to change in income, and change in marital status or family structure.
Some of these studies estimate rich structural models of worker behavior, allowing for such factors as sector-specific or occupation-specific human capital to accrue over time, which can be imperfectly transferable across industries or occupations. Some allow for unobserved heterogeneity across workers, which creates substantial technical challenges. Some allow for life-cycle effects, where adjustment costs vary with age, and others allow adjustment costs to vary by gender. Others simplify the worker’s problem to embed it in a rich general equilibrium framework.
Some key results include: (a) Switching either industry or occupation tends to be very costly, often exceeding a year’s average wages. (b) Given that moving costs change over time and workers are able to time their moves, realized costs are much lower, but the result is gradual adjustment, with a move to a new steady state that typically takes several years. (c) Idiosyncratic shocks to moving costs are quantitatively important, so that otherwise-identical workers often are seen moving in opposite directions at the same time. These shocks create a large role for option value, so that even if real wages in an industry are permanently lowered by a trade shock, a worker initially in that industry can benefit. This softens or reverses estimates of worker losses from, for example, the China shock. (d) Switching costs vary greatly by occupation, and can be very different for blue-collar and white-collar workers, for young and old workers, and for men and women. (e) Simple theories suggest that a shock results in wage overshooting, where the gap in wages between highly affected industries and others opens up and then shrinks over time, but evidence from Brazil shows that at least in some cases the wage differentials widen over time. (f) Some workers adjust through family changes. Evidence from Denmark shows that some women workers hit by import shocks withdraw from the labor market at least temporarily to marry and have children, unlike men.
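The gradual adjustment in (b) and the idiosyncratic moving-cost shocks in (c) can be illustrated with a deliberately simple simulation. It uses a myopic move rule that abstracts from the option value the literature emphasizes, and all wage and cost numbers are assumptions chosen for exposition.

```python
import numpy as np

# Two-sector sketch: after an import shock lowers sector 0's wage, workers
# move only when the wage gain exceeds an idiosyncratic moving-cost draw.
rng = np.random.default_rng(1)
n_workers, periods = 10_000, 15
wage = np.array([0.8, 1.0])              # sector 0 hit by the import shock
sector = np.zeros(n_workers, dtype=int)  # everyone starts in the shocked sector

shares = []
for t in range(periods):
    cost = rng.lognormal(mean=1.0, sigma=1.0, size=n_workers)  # idiosyncratic shocks
    gain = wage[1] - wage[sector]        # zero for workers who already moved
    sector = np.where(gain > cost, 1, sector)                  # myopic move rule
    shares.append(sector.mean())
print([round(s, 3) for s in shares])     # reallocation is gradual, not instantaneous
```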
Promising directions at the frontier include more work on longitudinal data; the role of capital adjustment; savings, risk aversion and the adjustment of trade deficits; responses in educational attainment; and much more exploration of the effects on family.
Article
Unintended Fertility: Trends, Causes, Consequences
Christine Piette Durrance and Melanie Guldi
Unintended fertility occurs when an individual becomes pregnant or gives birth without having intended to. Most measures of unintended fertility account for whether the pregnancy (birth) was wanted and whether it occurred at a desired time. Economic models of fertility provide a framework for understanding an individual’s desire to have children (or not), the number of children to have alongside the quality of each child, and the timing of childbirth. To study fertility intendedness, researchers often classify pregnancies or births as unintended using self-reported retrospective (or prospective) survey responses. However, since survey information on the intendedness of pregnancies and births is not always available, the research on unintended fertility using survey data is necessarily limited to the population surveyed. Consequently, to broaden the population studied, researchers also often rely on reported births, abortions, and miscarriages (fetal deaths) to estimate intendedness. However, other factors (such as laws restricting access or financial hurdles to overcome) may restrict access to the methods used to control reproduction, and these restrictions in turn may influence realized (observed) pregnancies, births, and abortions. Furthermore, abortion and miscarriages are not consistently reported and, when reported, they exhibit more measurement error than births. Despite these research challenges, the available data have allowed researchers to glean information on trends in unintendedness and to study the relationship between fertility-related policies and unintendedness. Over the last two decades, unintended fertility has declined in many countries and fewer births are happening “too soon.” There are multiple factors underlying these changes, but changes in access to and quality of reproductive technologies, changes in macroeconomic conditions, and socioeconomic characteristics of fertility-aged individuals appear to be crucial drivers of these changes.
Article
Unobserved Components Models
Joanne Ercolani
Unobserved components models (UCMs), sometimes referred to as structural time-series models, decompose a time series into its salient time-dependent features. These typically characterize the trending behavior, seasonal variation, and (nonseasonal) cyclical properties of the time series. The components are usually specified in a stochastic way so that they can evolve over time, for example, to capture changing seasonal patterns. Among many other features, the UCM framework can incorporate explanatory variables, allowing outliers and structural breaks to be captured, and can deal easily with daily or weekly effects and calendar issues like moving holidays.
UCMs are easily constructed in state space form. This enables the application of the Kalman filter algorithms, through which maximum likelihood estimates of the structural parameters are obtained, optimal predictions are made about the future state vector and the time series itself, and smoothed estimates of the unobserved components can be determined. The stylized facts of the series are then established and the components can be illustrated graphically, so that one can, for example, visualize the cyclical patterns in the time series or look at how the seasonal patterns change over time. If required, these characteristics can be removed, so that the data can be detrended, seasonally adjusted, or have business cycles extracted, without the need for ad hoc filtering techniques. Overall, UCMs have an intuitive interpretation and yield results that are simple to understand and communicate to others. Factoring in its competitive forecasting ability, the UCM framework is hugely appealing as a modeling tool.
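As a brief illustration, a local-linear-trend-plus-seasonal UCM can be cast in state space form and estimated via the Kalman filter using the statsmodels library; the simulated monthly series and the particular specification below are assumptions for exposition, not from the article.

```python
import numpy as np
import statsmodels.api as sm

# UCM sketch: local linear trend plus a stochastic monthly seasonal,
# estimated by maximum likelihood through the Kalman filter.
rng = np.random.default_rng(0)
t = np.arange(240)
y = 0.05 * t + 2.0 * np.sin(2 * np.pi * t / 12) + rng.standard_normal(240)

model = sm.tsa.UnobservedComponents(y, level="local linear trend", seasonal=12)
res = model.fit(disp=False)

trend = res.level.smoothed         # smoothed estimate of the trend component
seasonal = res.seasonal.smoothed   # smoothed seasonal (e.g., for seasonal adjustment)
forecast = res.forecast(steps=12)  # optimal predictions of the series itself
print(res.summary())
```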
Article
Urbanization and Emerging Cities: Infrastructure and Housing
Gilles Duranton and Anthony J. Venables
Urbanization is a central challenge of our times. At its core, it is an urban development challenge that requires addressing transportation and housing in cities. Transport improvements can reduce travel times and improve the spatial reach of urban dwellers. But these improvements may be crowded out by latent demand for travel and may lead to worse congestion, pollution, and other negative externalities associated with urban traffic. To evaluate the effects of transport improvements, direct travel effects must be measured. Then, an improvement in traffic conditions somewhere may spill over to other areas. Firms and residents may also relocate, so economic growth close to a transport improvement may just result from a displacement of economic activity from other areas. Conversely, better accessibility is expected to foster agglomeration effects and increase productivity. Valuing these changes is difficult, as it requires being able to quantify many externalities such as congestion delays, scheduling gains, and greater job accessibility. Housing policies present different challenges. More fundamental policies seek to enable housing construction by offering more secure property rights, up-to-date land registries, and competent land-use planning—all complex endeavors and all necessary. Other housing policies rely on heavy government interventions to provide housing directly to large segments of the urban population. These policies often fail because governments do not link housing provision with job accessibility and appropriate land-use planning. Housing is also an expensive asset that requires significant initial funding, while credit constraints abound in the urbanizing world. Policymakers also need to choose between small improvements to extremely low-quality informal housing, retrofitting modern housing in already-built urban areas, or urban expansion. All these options involve sharp trade-offs, subtle induced effects, and complex interactions with transport. All these effects are difficult to measure and challenging to value.
Article
Urban Sprawl and the Control of Land Use
Alex Anas
Urban sprawl in popular sources is vaguely defined and largely misunderstood, having acquired a pejorative meaning. Economists should ask whether particular patterns of urban land use are an outcome of an efficient allocation of resources. Theoretical economic modeling has been used to show that more, not less, sprawl often improves economic efficiency. More sprawl can cause a reduction in traffic congestion. Job suburbanization generally increases sprawl but improves economic efficiency. Limiting sprawl in some cities by direct control of land use can increase sprawl in other cities, and aggregate sprawl in all cities combined can increase. That urban population growth causes more urban sprawl is verified by empirically implemented general equilibrium models, but—contrary to common belief—the increase in travel times that accompanies such sprawl is very modest. Urban growth boundaries to limit urban sprawl cause large deadweight losses by raising land prices and should be seen as socially intolerable, but often they are not. It is good policy to use corrective taxation for negative externalities such as traffic congestion and to implement property tax reforms to reduce or eliminate distortive taxation. Under various circumstances such fiscal measures improve welfare by increasing urban sprawl. The flight of the rich from American central cities, large-lot zoning in the suburbs, and the financing of schools by property tax revenues are seen as causes of sprawl. There is also evidence that more heterogeneity among consumers and more unequal income distributions cause more urban sprawl. The connections between agglomeration economies and urban sprawl are less clear. The emerging technology of autonomous vehicles can have major implications for the future of urban spatial structure and is likely to add to sprawl.
Article
Valuation of Health Risks
Henrik Andersson, Arne Risa Hole, and Mikael Svensson
Many public policies and individual actions have consequences for population health. To understand whether a (costly) policy undertaken to improve population health is a wise use of resources, analysts can use economic evaluation methods to assess the costs and benefits. To do this, it is necessary to evaluate the costs and benefits using the same metric, and for convenience, a monetary measure is commonly used. It is well established that money measures of a reduction in health risks can be theoretically derived using the willingness-to-pay concept. However, because a market price for health risks is not available, analysts have to rely on analytical techniques to estimate the willingness to pay using revealed- or stated-preference methods. Revealed-preference methods infer willingness to pay based on individuals’ actual behavior in markets related to health risks, and they include such approaches as hedonic pricing techniques. Stated-preference methods use a hypothetical market scenario in which respondents make trade-offs between wealth and health risks. Using, for example, a random utility framework, it is possible to directly estimate individuals’ willingness to pay by analyzing the trade-offs they make in the hypothetical scenario. Stated-preference methods are commonly applied using contingent valuation or discrete choice experiment techniques. Despite criticism and the shortcomings of both the revealed- and stated-preference methods, substantial progress has been made since the 1990s in using both approaches to estimate the willingness to pay for health-risk reductions.
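To illustrate the random utility logic, willingness to pay can be recovered as a ratio of estimated utility coefficients. The sketch below simulates stated-preference choices; the scenario, the noise scale, and the "true" value of 5 are illustrative assumptions, not estimates from any study.

```python
import numpy as np
import statsmodels.api as sm

# Stated-preference sketch: respondents accept a program if its value to
# them exceeds its cost; WTP is the ratio of utility coefficients.
rng = np.random.default_rng(0)
n = 5_000
cost = rng.uniform(0, 100, n)             # program cost to the respondent
risk_red = rng.uniform(0, 10, n)          # mortality risk reduction offered
true_wtp = 5.0                            # assumed $ value per unit of risk reduction
utility = true_wtp * risk_red - cost + rng.logistic(scale=20, size=n)
accept = (utility > 0).astype(int)

X = sm.add_constant(np.column_stack([cost, risk_red]))
fit = sm.Logit(accept, X).fit(disp=False)
wtp_hat = -fit.params[2] / fit.params[1]  # -(beta_risk / beta_cost)
print(wtp_hat)                            # close to the assumed value of 5
```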
Article
Value-Added Estimates of Teacher Effectiveness: Measurement, Uses, and Limitations
Jessalynn James and Susanna Loeb
Since the turn of the 21st century, an abundant body of research has demonstrated that teachers meaningfully contribute to their students’ learning but that teachers vary widely in their effectiveness. Measures of teachers’ “value added” to student achievement have become common, and sometimes controversial, tools for researchers and policymakers hoping to identify and differentiate teachers’ individual contributions to student learning. Value-added measures aim to identify how much more a given teacher’s students learn than what would be expected based on how much other, similar students learn with other teachers. The question of how to measure value added without substantial measurement error and without incorrectly capturing other factors outside of teachers’ control is complex, and a fully satisfactory answer is sometimes illusory; the advantages and drawbacks of any particular method of estimating teachers’ value added depend on the specific context and purpose for its use. Traditionally, researchers have calculated value-added scores only for the subset of teachers with students in tested grades and subjects—a relatively small proportion of the teaching force, in a narrow set of the many domains on which teachers may influence their students. More recently, researchers have created value-added estimates for a range of other student outcomes, including measures of students’ engagement and social-emotional learning such as attendance and behavioral incidents, which may be available for more teachers. Overall, teacher value-added measures can be useful tools for understanding and improving teaching and learning, but they have substantial limitations for many uses and contexts.
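A deliberately simplified residual-based sketch conveys the basic value-added idea: predict scores from prior scores, then average the residuals by teacher. The data are simulated, all parameters are assumptions, and the shrinkage and additional controls used in practice are omitted.

```python
import numpy as np
import pandas as pd

# Value-added sketch: regress scores on prior scores, then average the
# residuals by teacher. Real systems add controls and empirical-Bayes shrinkage.
rng = np.random.default_rng(0)
n_students, n_teachers = 5_000, 100
teacher = rng.integers(0, n_teachers, n_students)
true_effect = rng.normal(0, 0.15, n_teachers)   # teacher effects, in score SD units
prior = rng.standard_normal(n_students)
score = 0.7 * prior + true_effect[teacher] + rng.normal(0, 0.5, n_students)

df = pd.DataFrame({"teacher": teacher, "prior": prior, "score": score})
slope, intercept = np.polyfit(df.prior, df.score, 1)
df["resid"] = df.score - (intercept + slope * df.prior)
value_added = df.groupby("teacher")["resid"].mean()

print(value_added.corr(pd.Series(true_effect)))  # high correlation with the truth
```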
Article
Variations in the Adoption of Healthcare Innovation: A Literature Review
Marisa Miraldo, Katharina Hauck, Antoine Vernet, and Ana Wheelock
Major medical innovations have greatly increased the efficacy of treatments, improved patient outcomes, and often reduced the cost of medical care. However, innovations do not diffuse uniformly across and within health systems. Due to the high complexity of medical treatment decisions, variations in clinical practice are inherent to healthcare delivery, regardless of technological advances, new ways of working, funding, and burden of disease. In this article we conduct a narrative literature review to identify and discuss peer-reviewed articles presenting a theoretical framework or empirical evidence of the factors associated with the adoption of innovation and clinical practice.
We find that variation in innovation adoption and medical practice is associated with multiple factors. First, patients’ characteristics, including medical needs and genetic factors, can crucially affect clinical outcomes and the efficacy of treatments. Moreover, differences in patients’ preferences can be an important source of variation. Medical treatments may need to take such patient characteristics into account if they are to deliver optimal outcomes, and consequently, the resulting practice variations should be considered warranted and in the best interests of patients. However, socioeconomic or demographic characteristics, such as ethnicity, income, or gender, are often not considered legitimate grounds for differential treatment. Second, physician characteristics—such as socioeconomic profile, training, and work-related characteristics—are an equally influential component of practice variation. In particular, so-called “practice style” and physicians’ attitudes toward risk and innovation adoption are considered a major source of practice variation, but have proven difficult to investigate empirically. Lastly, features of healthcare systems (notably, public coverage of healthcare expenditure, cost-based reimbursement of providers, and service-delivery organization) are generally associated with higher utilization rates and adoption of innovation.
Research shows some successful strategies aimed at reducing variation in medical decision-making, such as the use of decision aids, data feedback, benchmarking, clinical practice guidelines, blinded report cards, and pay for performance. But despite these advances, there is uneven diffusion of new technologies and procedures, with potentially severe adverse efficiency and equity implications.
Article
What Causes Residential Mortgage Defaults?
Walter Torous, William Torous, and Anne Thompson
The U.S. residential mortgage market is very large, totaling approximately $13 trillion in debt as of 2023, and defaults occur with regular frequency, increasing significantly during times of financial stress. For example, in the aftermath of the Great Financial Crisis, almost 1 in 20 U.S. residential mortgages in 2010 were 90-plus days past due and considered in default. Given the adverse implications of default for a household’s financial well-being and ability to access housing, economists and policymakers have expended much effort to understand why households default on their mortgages. An important issue to resolve in this research is the relative importance of negative equity in a home versus adverse life events in driving mortgage default decisions. Why a household defaults matters. Forgiveness of principal is costly and addresses only defaults related to negative equity. By contrast, to the extent that defaults follow negative life events, lowering monthly mortgage payments while the household resolves its difficulties would allow families to keep their homes while not requiring banks to foreclose on them. Answering these questions requires understanding what factors “cause” mortgage defaults. While many papers document an association between default and various explanatory variables, an association does not necessarily imply causation. For example, it has been empirically documented that in the aftermath of the Great Financial Crisis homeowners residing in warm climates in the United States were more prone to default than homeowners residing in cold climates. Did climate cause mortgage defaults? While there certainly was an association between climate and mortgage defaults, no one has claimed that climate caused homeowners to default. Likewise, previous studies have documented an association, sometimes very strong, between default and negative equity, but research based on a potential outcomes framework finds that very few defaults can be causally attributed to purely strategic motives in which negative life events play no role. The stark difference in these conclusions necessitates a closer examination of what factors cause mortgage defaults.
Article
What Drives HIV in Africa? Addressing Economic Gender Inequalities to Close the HIV Gender Gap
Aurélia Lépine, Henry Cust, and Carole Treibich
Ending HIV as a public health threat by 2030 presents challenges significantly different from those of the past 40 years. Initially perceived as a disease affecting gay men, today HIV disproportionately affects adolescents and young women in Africa. Current strategies to prevent HIV mostly rely on biomedical interventions that reduce the risk of infection during risky sex and address the fact that, biologically, women are more vulnerable to HIV infection than men. Ongoing policies and strategies to end the AIDS epidemic in Africa are likely to fail if implemented alone, given that they do not address why vulnerable young women engage in risky sexual behaviors. Evidence strongly suggests that economic vulnerability, rather than income level, is a primary driver of women’s decision to engage in commercial and transactional sex. By viewing HIV through the lens of structural gender inequality, poverty, and the use of risky sexual behaviors to cope with economic shocks, a new explanation for the HIV gender gap emerges. New and promising approaches that reduce HIV acquisition and transmission by protecting women from economic shocks and increasing their ability to participate in the economy have proven effective. Such interventions are vital to breaking the pattern of unequal HIV transmission that burdens women and to ending HIV.
Article
Willingness to Pay in Hedonic Pricing Models
David Wolf and H. Allen Klaiber
The value of a differentiated product is simply the sum of its parts. This concept is easily observed in housing markets where the price of a home is determined by the underlying bundle of attributes that define it and by the price households are willing to pay for each attribute. These prices are referred to as implicit prices because their value is indirectly revealed through the price of another product (typically a home) and are of interest as they reveal the value of goods, such as nearby public amenities, that would otherwise remain unknown.
This concept was first formalized into a tractable theoretical framework by Rosen, and is known as the hedonic pricing method. The two-stage hedonic method requires the researcher to map housing attributes into housing price using an equilibrium price function. Information recovered from the first stage is then used to recover inverse demand functions for nonmarket goods in the second stage, which are required for nonmarginal welfare evaluation. Researchers have rarely implemented the second stage, however, due to limited data availability, specification concerns, and the inability to correct for simultaneity bias between price and quality. As policies increasingly seek to deliver large, nonmarginal changes in public goods, the need to estimate the hedonic second stage is becoming more pressing. Greater effort therefore needs to be made to establish a set of best practices for the second stage, many of which can be developed using methods established in the extensive first-stage literature.
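A first-stage sketch shows how an implicit price is read off the hedonic price function. The data are simulated, and the attributes, coefficients, and choice of amenity (distance to a park) are illustrative assumptions rather than estimates from the literature.

```python
import numpy as np
import statsmodels.api as sm

# First-stage hedonic sketch: regress log price on attributes, then convert
# the coefficient on an amenity into a dollar-denominated implicit price.
rng = np.random.default_rng(0)
n = 2_000
sqft = rng.uniform(800, 3500, n)
park_dist = rng.uniform(0.1, 5.0, n)    # miles to the nearest park
log_price = 11 + 0.0004 * sqft - 0.04 * park_dist + rng.normal(0, 0.1, n)

X = sm.add_constant(np.column_stack([sqft, park_dist]))
fit = sm.OLS(log_price, X).fit()

mean_price = np.exp(log_price).mean()
implicit_price = -fit.params[2] * mean_price    # $ per mile closer to a park
print(f"implicit price: ${implicit_price:,.0f} per mile")
```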