Article
The Academic Effects of United States Child Food Assistance Programs—At Home, School, and In-Between
Michael D. Kurtz, Karen Smith Conway, and Robert D. Mohr
The primary goals of food assistance programs are to alleviate child hunger and reduce food insecurity; if successful, such programs may have the added benefit of improving child academic outcomes (e.g., test scores, attendance, behavioral outcomes). Some U.S. government programs serve children in the home, such as the Supplemental Nutrition Assistance Program (SNAP); others serve them at school, such as the National School Lunch Program (NSLP) and School Breakfast Program (SBP); and still others fall in between, such as the Summer Food Service Program (SFSP) and the Child and Adult Care Food Program (CACFP). Most empirical research seeking to identify the causal effect of such programs on child academic outcomes addresses the endogeneity of program participation with a reduced-form, intent-to-treat approach. Specifically, such studies estimate the effect of a program’s availability, timing, or other specific feature on the academic outcomes of all potentially affected children. While findings of individual studies and interventions are mixed, some general conclusions emerge. First, increasing the availability of these programs typically has beneficial effects on relatively contemporaneous academic and behavioral outcomes. The magnitudes are modest but still likely pass cost-benefit criteria, even ignoring the fact that the primary objective of such programs is alleviating hunger, not improving academic outcomes. Less is known about the dynamics of the effects, for example, whether such effects are temporary boosts that dissipate or instead accumulate and grow over time. Likewise, the effects of recent innovations to these programs, such as breakfast in the classroom or increases in SNAP benefits to compensate for reduced time in school during the pandemic, yield less clear conclusions (the former) or have not been studied (the latter). Finally, many smaller programs that likely target the neediest children remain under- or un-examined. Unstudied government-provided programs include SFSP and CACFP. There are also a growing number of understudied programs provided primarily by charitable organizations. Emerging evidence suggests that one such program, the Weekend Feeding or “Backpack” program, confers substantial benefits. There, too, more work needs to be done, both to confirm these early findings and to explore recent innovations such as providing food pantries or “Kids’ Cafés” on school grounds. Especially in light of the uncertain fate of many pandemic-related program expansions and innovations, current empirical evidence establishes that the additional, beneficial spillover effects to academic outcomes—beyond the primary objective of alleviating food insecurity—deserve to be considered as well.
Article
Adaptive Learning in Macroeconomics
George W. Evans and Bruce McGough
While rational expectations (RE) remains the benchmark paradigm in macro-economic modeling, bounded rationality, especially in the form of adaptive learning, has become a mainstream alternative. Under the adaptive learning (AL) approach, economic agents in dynamic, stochastic environments are modeled as adaptive learners forming expectations and making decisions based on forecasting rules that are updated in real time as new data become available. Their decisions are then coordinated each period via the economy’s markets and other relevant institutional architecture, resulting in a time-path of economic aggregates. In this way, the AL approach introduces additional dynamics into the model—dynamics that can be used to address myriad macroeconomic issues and concerns, including, for example, empirical fit and the plausibility of specific rational expectations equilibria.
AL can be implemented as reduced-form learning, that is, learning imposed at the aggregate level, or alternatively as agent-level learning, which includes pre-aggregation analysis of boundedly rational decision making and is discussed in a companion contribution to this encyclopedia by Evans and McGough.
Typically, learning agents are assumed to use estimated linear forecast models, and a central formulation of AL is least-squares learning, in which agents recursively update their estimated model as new data become available. Key questions include whether AL will converge over time to a specified RE equilibrium (REE), in which case we say the REE is stable under AL, and, when it is, what type of learning dynamics are observed en route. When multiple REE exist, stability under AL can act as a selection criterion, and global dynamics can involve switching between local basins of attraction. In models with indeterminacy, AL can be used to assess whether agents can learn to coordinate their expectations on sunspots.
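To make the recursive updating concrete, one standard recursive least-squares formulation (a sketch in the spirit of this literature; the notation is illustrative, with y_t the variable being forecast, z_{t-1} the regressors, φ_t the coefficient estimates, and R_t the estimated second-moment matrix) is:

```latex
\phi_t = \phi_{t-1} + t^{-1} R_t^{-1} z_{t-1}\left( y_t - \phi_{t-1}^{\top} z_{t-1} \right),
\qquad
R_t = R_{t-1} + t^{-1}\left( z_{t-1} z_{t-1}^{\top} - R_{t-1} \right).
```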
The key analytical concepts and tools are the E-stability principle together with the E-stability differential equations, and the theory of stochastic recursive algorithms (SRA). While, in general, analysis of SRAs is quite technical, application of the E-stability principle is often straightforward.
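For reference, the E-stability principle can be stated compactly (a standard formulation, with illustrative notation): if T maps the parameters φ of a perceived law of motion into those of the implied actual law of motion, an REE fixed point is E-stable, and hence a candidate limit of least-squares learning, when it is locally asymptotically stable under

```latex
\frac{d\phi}{d\tau} = T(\phi) - \phi ,
\qquad \text{with the REE satisfying } T(\phi^{*}) = \phi^{*}.
```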
In addition to equilibrium analysis in macroeconomic models, AL has many applications. In particular, AL has strong implications for the conduct of monetary and fiscal policy, has been used to explain asset price dynamics, has been shown to improve the fit of estimated dynamic stochastic general equilibrium (DSGE) models, and has proven useful in explaining experimental outcomes.
Article
Administrative Law: Governing Economic and Social Governance
Cary Coglianese
Administrative law refers to the body of legal doctrines, procedures, and practices that govern the operation of the myriad regulatory bodies and other administrative agencies that interact directly with individuals and businesses to shape economic and social outcomes. This law takes many forms in different legal systems around the world, but even different systems of administrative law share a focus on three major issues: the formal structures of administrative agencies; the procedures that these agencies must follow to make regulations, grant licenses, or pursue other actions; and the doctrines governing judicial review of administrative decisions. In addressing these issues, administrative law is intended to combat conditions of interest group capture and help ensure agencies make decisions that promote the public welfare by making government fair, accurate, and rational.
Article
Agent-Level Adaptive Learning
George W. Evans and Bruce McGough
Adaptive learning is a boundedly rational alternative to rational expectations that is increasingly used in macroeconomics, monetary economics, and financial economics. The agent-level approach can be used to provide microfoundations for adaptive learning in macroeconomics.
Two central issues of bounded rationality are addressed simultaneously at the agent level: fully rational expectations of key variables are replaced with econometric forecasts, and fully optimal decisions are replaced with boundedly optimal decision making based on those forecasts. The real business cycle (RBC) model provides a useful laboratory for exhibiting alternative implementations of the agent-level approach. Specific implementations include shadow-price learning (and its anticipated-utility counterpart, iterated shadow-price learning), Euler-equation learning, and long-horizon learning. For each implementation the path of the economy is obtained by aggregating the boundedly rational agent-level decisions.
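As one illustrative example of these implementations (not spelled out in the abstract), Euler-equation learning replaces the rational expectation in the household’s consumption Euler equation with a subjective forecast, written here for CRRA utility with risk aversion σ, discount factor β, and gross return R_{t+1}, where the hatted expectation denotes the boundedly rational forecast formed from an estimated rule that is updated each period:

```latex
c_t^{-\sigma} = \beta\, \hat{E}_t\!\left[ c_{t+1}^{-\sigma} R_{t+1} \right].
```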
A linearized RBC model can be used to illustrate the effects of fiscal policy. For example, simulations can be used to illustrate the impact of a permanent increase in government spending and to highlight the similarities and differences among the various implementations of agent-level learning. These results can also be used to expose the differences among agent-level learning, reduced-form learning, and rational expectations.
The different implementations of agent-level adaptive learning have differing advantages. A major advantage of shadow-price learning is its ease of implementation within the nonlinear RBC model. Compared to reduced-form learning, which is widely used because of its ease of application, agent-level learning both provides microfoundations, which ensure robustness to the Lucas critique, and provides the natural framework for applications of adaptive learning in heterogeneous-agent models.
Article
Age-Period-Cohort Models
Zoë Fannon and Bent Nielsen
Outcomes of interest often depend on the age, period, or cohort of the individual observed, where cohort and age add up to period. An example is consumption: consumption patterns change over the lifecycle (age) but are also affected by the availability of products at different times (period) and by birth-cohort-specific habits and preferences (cohort). Age-period-cohort (APC) models are additive models where the predictor is a sum of three time effects, which are functions of age, period, and cohort, respectively. Variations of these models are available for data aggregated over age, period, and cohort, and for data drawn from repeated cross-sections, where the time effects can be combined with individual covariates.
The age, period, and cohort time effects are intertwined. Inclusion of an indicator variable for each level of age, period, and cohort results in perfect collinearity, which is referred to as “the age-period-cohort identification problem.” Estimation can be done by dropping some indicator variables. However, dropping indicators has adverse consequences: the time effects are not individually interpretable, and inference becomes complicated. These consequences are avoided by instead decomposing the time effects into linear and non-linear components and noting that the identification problem relates to the linear components, whereas the non-linear components are identifiable. Thus, confusion is avoided by keeping the identifiable non-linear components of the time effects and the unidentifiable linear components apart. A variety of hypotheses of practical interest can be expressed in terms of the non-linear components.
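To make the identification problem concrete (an illustrative restatement rather than the authors’ notation), write the predictor for an observation of age a, period p, and cohort c, where p = a + c:

```latex
\mu_{a,p,c} = \alpha_a + \beta_p + \gamma_c , \qquad p = a + c .
```

Because a - p + c = 0, replacing the effects with α_a + δa, β_p - δp, and γ_c + δc leaves the predictor unchanged for any δ, so the linear components of the three time effects cannot be separately identified, while the non-linear components can.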
Article
Aging and Healthcare Costs
Martin Karlsson, Tor Iversen, and Henning Øien
An open issue in the economics literature is whether healthcare expenditure (HCE) is so concentrated in the last years before death that the age profiles in spending will change when longevity increases. The seminal article “Ageing of Population and Health Care Expenditure: A Red Herring?” by Zweifel and colleagues argued that age is a distraction in explaining growth in HCE. The argument was based on the observation that age did not predict HCE after controlling for time to death (TTD). The authors were soon criticized for the use of a Heckman selection model in this context. Most of the recent literature makes use of variants of a two-part model and seems to give age some role in the explanation as well. Age seems to matter more for long-term care expenditures (LTCE) than for acute hospital care. When disability is accounted for, the effects of age and TTD diminish. Not many articles validate their approach by comparing the properties of different estimation models. In order to evaluate popular models used in the literature and to understand the divergent results of previous studies, an empirical analysis based on a claims data set from Germany is conducted. This analysis generates a number of useful insights. There is a significant age gradient in HCE, most pronounced for LTCE, and the costs of dying are substantial. These “costs of dying” have, however, a limited impact on the age gradient in HCE. These findings are interpreted as evidence against the red herring hypothesis as initially stated. The results indicate that the choice of estimation method makes little difference and that, when results do differ, ordinary least squares regression tends to perform better than the alternatives. When validating the methods out of sample and out of period, there is no evidence that including TTD leads to better predictions of aggregate future HCE. It appears that the literature might benefit from focusing on the predictive power of the estimators rather than their in-sample fit to the data.
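For readers unfamiliar with the two-part models mentioned above, the following is a minimal sketch using simulated, purely illustrative data (not the German claims data analyzed in the article): a logit model for whether any expenditure occurs, followed by a regression for log expenditure among those with positive spending.

```python
# Minimal two-part model sketch (all data below are simulated placeholders).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
age = rng.uniform(50, 95, n)
ttd = rng.uniform(0, 10, n)                        # time to death, in years
X = sm.add_constant(np.column_stack([age, ttd]))   # constant, age, TTD

# Simulated outcomes: an any-expenditure indicator and a log-cost variable.
any_cost = rng.binomial(1, 1 / (1 + np.exp(-(-4 + 0.05 * age - 0.1 * ttd))))
log_cost = 5 + 0.02 * age - 0.15 * ttd + rng.normal(0, 1, n)

# Part 1: probability of any healthcare expenditure (logit).
part1 = sm.Logit(any_cost, X).fit(disp=0)
# Part 2: log expenditure conditional on positive spending (OLS).
mask = any_cost == 1
part2 = sm.OLS(log_cost[mask], X[mask]).fit()

print(part1.params)
print(part2.params)
```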
Article
The American Housing Finance System: Structure, Evolution, and Implications
Yongheng Deng, Susan M. Wachter, and Heejin Yoon
The U.S. housing finance system has been characterized by fixed-rate, long-term, and high maximum loan-to-value ratio mortgage loans, with unique support from secondary market entities Ginnie Mae and the government-sponsored enterprises, Fannie Mae and Freddie Mac. The authors provide a comprehensive review of the U.S. housing finance system, from its structure and evolution to the current continuing policy debate. The “American Mortgage” provides many more options to borrowers than are commonly provided elsewhere: U.S. homebuyers can choose whether to pay a fixed or floating rate of interest; they can lock in their interest rate between the time they apply for the mortgage and the time they purchase their house; they can choose the time at which the mortgage rate resets; they can choose the term and the amortization period; they can generally prepay without penalty; and they can generally borrow against home equity. They can also obtain insured home mortgages at attractive terms with very low down payments. Perhaps most importantly, in the typical mortgage, payments remain constant throughout the potentially 30-year term of the loan. The unique characteristics of the U.S. mortgage provide substantial benefits for American homeowners and the overall stability of the economy. This article describes the evolution of the housing finance system that has led to the predominant role of this mortgage instrument in the United States.
Article
An Analysis of COVID-19 Student Learning Loss
Harry Patrinos, Emiliana Vegas, and Rohan Carter-Rau
The coronavirus disease 2019 (COVID-19) pandemic led to school closures around the world, affecting almost 1.6 billion students. This caused significant disruption to the global education system. Even short interruptions in a child’s schooling have significant negative effects on their learning and can be long lasting. The capacities of education systems to respond to the crisis by delivering remote learning and support to children and families have been diverse and uneven.
In response to this disruption, education researchers are beginning to analyze the impact of these school closures on student learning loss. The term learning loss is commonly used in the literature to describe declines in student knowledge and skills. Early reviews of the first wave of lockdowns and school closures suggested significant learning loss in a few countries. A more recent and thorough analysis of learning loss evidence documented between the beginning of the school closures in March 2020 and March 2022 found even more evidence of learning loss. Of the 36 robust studies identified, the majority found learning losses that amount to, on average, 0.17 of a standard deviation (SD), equivalent to roughly one-half of a school year’s worth of learning. This confirms that learning loss is real and significant and has continued to grow after the first year of the COVID-19 pandemic. Most studies observed increases in inequality, with certain groups of students experiencing more significant learning losses than others. The longer the schools remained closed, the greater the learning losses. For the 19 countries with robust learning loss data, average school closures lasted 15 weeks, leading to average learning losses of 0.18 SD. Put another way, for every week that schools were closed, learning declined by an average of 0.01 SD.
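The per-week figure follows directly from the averages reported above:

```latex
\frac{0.18\ \text{SD}}{15\ \text{weeks}} \approx 0.012\ \text{SD per week} \approx 0.01\ \text{SD per week}.
```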
However, there are also outliers—countries that managed to limit the amount of loss. In Nara City, Japan, for example, the initial closures brought down test scores, but responsive policies largely overcame this decline; a shortened summer vacation also helped. In Denmark, children received good home support, and their reading behavior improved significantly. In Sweden, where primary schools did not close during the pandemic, there were no reported learning losses. Further work is needed to increase the number of studies produced, particularly in low- and middle-income countries, and to ascertain the reasons for learning loss. Finally, the few cases where learning loss was mitigated should be further investigated to inform continued and future pandemic responses.
Article
Anthropometrics: The Intersection of Economics and Human Biology
John Komlos
Anthropometrics is a research program that explores the extent to which economic processes affect human biological processes, using height and weight as markers. This agenda differs from health economics in that, instead of studying diseases or longevity, which are macro-level manifestations of well-being, it focuses on cellular-level processes that determine the extent to which the organism thrives in its socio-economic and epidemiological environment. Thus, anthropometric indicators are used as proxy measures for the biological standard of living and as complements to conventional measures based on monetary units.
Using physical stature as a marker, we enabled the profession to learn about the well-being of children and youth for whom market-generated monetary data are not abundant even in contemporary societies. It is now clear that economic transformations such as the onset of the Industrial Revolution and modern economic growth were accompanied by negative externalities that were hitherto unknown. Moreover, there is plenty of evidence to indicate that the Welfare States of Western and Northern Europe take better care of the biological needs of their citizens than the market-oriented health-care system of the United States.
Obesity has reached pandemic proportions in the United States, affecting 40% of the population. It is fostered by a sedentary and harried lifestyle, by the diminution in self-control, the spread of labor-saving technologies, and the rise of instant gratification characteristic of post-industrial society. The spread of television and a fast-food culture in the 1950s were watershed developments in this regard that accelerated the process. Obesity poses serious health risks, including heart disease, stroke, diabetes, and some types of cancer, and its cost reaches $150 billion per annum in the United States, or about $1,400 per capita. We conclude that the economy influences not only mortality and health but reaches bone-deep into the cellular level of the human organism. In other words, the economy is inextricably intertwined with human biological processes.
Article
Antitrust Law as a Problem in Economics
Chris Sagers
“Antitrust” or “competition law,” a set of policies now existing in most market economies, largely consists of two or three specific rules applied in more or less the same way in most nations. It prohibits (1) multilateral agreements, (2) unilateral conduct, and (3) mergers or acquisitions, whenever any of them are judged to interfere unduly with the functioning of healthy markets. Most jurisdictions now apply or purport to apply these rules in the service of some notion of economic “efficiency,” more or less as defined in contemporary microeconomic theory.
The law has ancient roots, however, and over time it has varied a great deal in its details. Moreover, even as to its modern form, the policy and its goals remain controversial. In some sense most modern controversy arises from or is in reaction to the major intellectual reconceptualization of the law and its purposes that began in the 1960s. Specifically, academic critics in the United States urged revision of the law’s goals, such that it should serve only a narrowly defined microeconomic goal of allocational efficiency, whereas it had traditionally also sought to prevent accumulation of political power and to protect small firms, entrepreneurs, and individual liberty. While those critics enjoyed significant success in the United States, and to a somewhat lesser degree in Europe and elsewhere, the results remain contested. Specific disputes continue over the law’s general purpose, whether it poses net benefits, how a series of specific doctrines should be fashioned, how it should be enforced, and whether it really is appropriate for developing and small-market economies.
Article
Applications of Web Scraping in Economics and Finance
Piotr Śpiewanowski, Oleksandr Talavera, and Linh Vi
The 21st-century economy is increasingly built around data. Firms and individuals upload and store enormous amounts of data. Most of the data produced are stored on private servers, but a considerable part is made publicly available across the roughly 1.83 billion websites online. These data can be accessed by researchers using web-scraping techniques.
Web scraping refers to the process of collecting data from web pages, either manually or using automation tools or specialized software. Web scraping is possible and relatively simple thanks to the regular structure of the code used for websites designed to be displayed in web browsers. Websites built with HTML can be scraped using standard text-mining tools, either scripts in popular (statistical) programming languages such as Python, Stata, or R, or stand-alone dedicated web-scraping tools. Some of these tools do not even require any prior programming skills.
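As a minimal illustration of the scripting approach described above (the URL and the “price” CSS class are hypothetical placeholders; the sketch assumes the requests and beautifulsoup4 packages):

```python
# Minimal web-scraping sketch: download a page and pull out price tags.
# The URL and the "price" class are hypothetical placeholders.
import requests
from bs4 import BeautifulSoup

url = "https://example.com/products"
response = requests.get(url, timeout=30)
response.raise_for_status()  # stop early if the request failed

soup = BeautifulSoup(response.text, "html.parser")
prices = [tag.get_text(strip=True) for tag in soup.find_all(class_="price")]
print(prices)
```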
Since about 2010, with the omnipresence of social and economic activities on the Internet, web scraping has become increasingly popular among academic researchers. In contrast to proprietary data, which may be out of reach because of their substantial costs, web scraping can make interesting data sources accessible to everyone.
Thanks to web scraping, data are now available in real time and with significantly more detail than has traditionally been offered by statistical offices or commercial data vendors. In fact, many statistical offices have started using web-scraped data, for example, for calculating price indices. Data collected through web scraping have been used in numerous economics and finance projects and can easily complement traditional data sources.
Article
A Review of the Effects of Pay Transparency
Emma Duchini, Stefania Simion, and Arthur Turrell
An increasing number of countries have introduced pay transparency policies with the aim of reducing gender inequality in the labor market. Firms subject to transparency requirements must disclose, publicly or to employees’ representatives, information on their employees’ pay broken down by gender, or indicators of gender gaps in pay and career outcomes. The argument underlying these policies is that gender inequality may in part persist because it is hidden. On the one hand, employers rarely keep track of employees’ pay and career progression by gender; on the other hand, employees rarely engage in conversations with their colleagues about pay. The lack of information on within-firm disparities by gender may therefore hamper progress toward a more egalitarian labor market. Transparency policies have the potential to improve women’s relative pay and career outcomes for two reasons. First, by increasing the salience of gender gaps in the labor market, they can alter the relative bargaining power of male and female employees vis-à-vis the firm and lead lower-paid individuals to demand higher pay from their employer. Second, together with pressure from employees, the public availability of information on firms’ gender-equality performance may also increase public pressure for firms to act in this domain. A clear message emerges from the literature analyzing the impact of pay transparency policies on gender inequality: these policies are effective at pushing firms to reduce their gender pay gaps, although this is achieved via a slowdown of men’s wage growth. Related results point to a reduction in labor productivity following the introduction of transparency mandates but no detrimental effect on firms’ profits, because the productivity effect is offset by the reduction in labor costs. Overall, the findings in this literature suggest that transparency policies can reduce the gender pay gap at limited cost to firms but may not be suited to achieving the objective of improving outcomes for lower-paid employees.
Article
Assessments in Education
Hans Henrik Sievertsen
Assessments like standardized tests and teacher evaluations are central elements of educational systems. Assessments affect the behaviour of students, teachers, parents, schools, and policymakers through at least two channels: the information channel and the incentive channel. Students use the information to adjust study effort and to guide their course selection. Schools and teachers use information from assessments to evaluate teaching quality and the effectiveness of the applied methods. Educational programs use assessment results to sort students into programs, and employers use the results as signals of productivity in their hiring decisions. Finally, policymakers use assessments in accountability systems to reward or penalize schools, and parents use information from assessment results to select schools. The incentive channel is a natural consequence of the information channel: students are incentivized to work hard and do well in assessments to gain access to educational programs and jobs. Teachers and schools are incentivized to do well to receive rewards or avoid punishments in accountability systems. The information channel is important for ensuring the most efficient human capital investments: students learn about the returns and costs of effort investments and about their abilities and comparative advantages. Teachers and schools learn about the most effective teaching methods. However, because of the strong incentives linked to assessments, both students and teachers might focus on optimizing assessment results at the cost of learning. Students might, for example, select tracks that maximize their grades instead of tracks aligned with their interests and comparative advantages. Understanding the implications of assessments for the behaviour of students, parents, teachers, and schools is therefore necessary to achieve the overall goals of the educational system. Because education affects lifetime earnings, health, and well-being, and because assessments play an important role in individuals’ educational careers, assessments also matter for efficiency and equity across domains. Biases in assessments and heterogeneity in access to assessments are sources of inequality in education by gender, origin, and socioeconomic background. Finally, because assessment results carry important consequences for individuals’ educational opportunities and labor market outcomes, they are a source of stress and reduced well-being.
Article
Asset Pricing: Cross-Section Predictability
Paolo Zaffaroni and Guofu Zhou
A fundamental question in finance is why different assets have different expected returns, which is intricately linked to cross-section prediction in the sense of addressing the question “What explains the cross section of expected returns?” There is a vast literature on this topic. There are state-of-the-art methods used to forecast the cross section of stock returns with firm characteristics as predictors, and the same methods can be applied to other asset classes, such as corporate bonds and foreign exchange rates, and to managed portfolios such as mutual funds and hedge funds.
First, there are the traditional ordinary least squares and weighted least squares methods, as well as various recently developed machine learning approaches such as neural networks and genetic programming. These are the main methods used today in applications. There are three measures that assess how the various methods perform. The first is the Sharpe ratio of a long–short portfolio that longs the assets with the highest predicted returns and shorts those with the lowest; this measure provides the economic value of one method versus another. The second measure is an out-of-sample R² that evaluates how the forecasts perform relative to a natural benchmark, the cross-sectional mean. This is important because any method that fails to outperform the benchmark is questionable. The third measure is how well the predicted returns explain the realized ones, which provides an overall error assessment across all the stocks.
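In the spirit of the out-of-sample R² just described (this explicit formula is an illustration rather than a quotation from the article), the measure compares squared forecast errors with those of the cross-sectional mean benchmark:

```latex
R^{2}_{OS} \;=\; 1 \;-\; \frac{\sum_{i}\left( r_{i} - \hat{r}_{i} \right)^{2}}{\sum_{i}\left( r_{i} - \bar{r} \right)^{2}},
```

where r_i is the realized return of asset i, the hatted term is the model’s forecast, and the barred term is the cross-sectional mean; a method that does not push this measure above zero fails to outperform the benchmark.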
Factor models are another tool used to understand cross-section predictability. This sheds light on whether the predictability is due to mispricing or risk exposure. There are three ways to consider these models: First, we can consider how to test traditional factor models and estimate the associated risk premia, where the factors are specified ex ante. Second, we can analyze similar problems for latent factor models. Finally, going beyond the traditional setup, we can consider recent studies on asset-specific risks. This analysis provides the framework to understand the economic driving forces of predictability.
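For concreteness (a standard formulation, not specific to this article), a linear factor model posits that returns load on a set of factors and that expected returns are spanned by the associated risk premia:

```latex
r_{i,t} = \alpha_i + \beta_i^{\top} f_t + \varepsilon_{i,t},
\qquad
E\!\left[ r_{i,t} \right] = \beta_i^{\top} \lambda ,
```

where f_t are the factors (specified ex ante or latent), β_i the exposures, and λ the risk premia; a nonzero α_i points toward mispricing rather than compensation for risk exposure.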
Article
Asset Pricing: Time-Series Predictability
David E. Rapach and Guofu Zhou
Asset returns change over time with fundamentals and other factors, such as technical information and sentiment. In modeling time-varying expected returns, this article focuses on the out-of-sample predictability of the aggregate stock market return via extensions of the conventional predictive regression approach.
The extensions are designed to improve out-of-sample performance in realistic environments characterized by large information sets and noisy data. Large information sets are relevant because there are a plethora of plausible stock return predictors. The information sets include variables typically associated with a rational time-varying market risk premium, as well as variables more likely to reflect market inefficiencies resulting from behavioral influences and information frictions. Noisy data stem from the intrinsically large unpredictable component in stock returns. When forecasting with large information sets and noisy data, it is vital to employ methods that incorporate the relevant information in the large set of predictors in a manner that guards against overfitting the data.
Methods that improve out-of-sample market return prediction include forecast combination, principal component regression, partial least squares, the LASSO and elastic net from machine learning, and a newly developed C-ENet approach that relies on the elastic net to refine the simple combination forecast. Employing these methods, a number of studies provide statistically and economically significant evidence that the aggregate market return is predictable on an out-of-sample basis. Out-of-sample market return predictability based on a rich set of predictors thus appears to be a well-established empirical result in asset pricing.
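As a minimal sketch of how one of these methods might be applied (the data below are randomly generated placeholders, and the elastic net is used via scikit-learn rather than the specific C-ENet refinement described in the article):

```python
# Elastic-net return forecasting sketch with cross-validated penalties.
# X_train, y_train, and X_test are random placeholders for predictor and return data.
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X_train = rng.standard_normal((240, 12))    # e.g., 20 years of monthly predictors
y_train = 0.04 * rng.standard_normal(240)   # noisy excess market returns (illustrative)
X_test = rng.standard_normal((12, 12))      # one out-of-sample year of predictors

scaler = StandardScaler().fit(X_train)
model = ElasticNetCV(l1_ratio=0.5, cv=5).fit(scaler.transform(X_train), y_train)
forecasts = model.predict(scaler.transform(X_test))
print(forecasts)
```

The shrinkage imposed by the L1 and L2 penalties is one way to guard against overfitting when the predictor set is large and returns are noisy, as discussed above.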
Article
A Survey of Econometric Approaches to Convergence Tests of Emissions and Measures of Environmental Quality
Junsoo Lee, James E. Payne, and Md. Towhidul Islam
The analysis of convergence behavior with respect to emissions and measures of environmental quality can be categorized into four types of tests: absolute and conditional β-convergence, σ-convergence, club convergence, and stochastic convergence. In the context of emissions, absolute β-convergence occurs when countries with high initial levels of emissions have a lower emission growth rate than countries with low initial levels of emissions. Conditional β-convergence allows for possible differences among countries through the inclusion of exogenous variables to capture country-specific effects. Given that absolute and conditional β-convergence do not account for the dynamics of the growth process, which can potentially lead to dynamic panel data bias, σ-convergence evaluates the dynamics and intradistributional aspects of emissions to determine whether the cross-section variance of emissions decreases over time. The more recent club convergence approach tests the decline in the cross-sectional variation in emissions among countries over time and whether heterogeneous time-varying idiosyncratic components converge over time after controlling for a common growth component in emissions among countries. In essence, the club convergence approach evaluates both conditional σ- and β-convergence within a panel framework. Finally, stochastic convergence examines the time series behavior of a country’s emissions relative to another country or group of countries. Using univariate or panel unit root/stationarity tests, stochastic convergence is present if relative emissions, defined as the log of emissions for a particular country relative to another country or group of countries, is trend-stationary.
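A minimal sketch of the stochastic-convergence test described above, using simulated series in place of real emissions data and the augmented Dickey–Fuller test from statsmodels:

```python
# Stochastic convergence sketch: unit root test on log relative emissions.
# The two emission series are simulated placeholders, not real country data.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(1)
emissions_i = np.exp(np.cumsum(rng.normal(0.01, 0.05, 60)))  # country i
emissions_j = np.exp(np.cumsum(rng.normal(0.01, 0.05, 60)))  # benchmark country/group

rel = np.log(emissions_i) - np.log(emissions_j)    # log relative emissions
stat, pvalue, *_ = adfuller(rel, regression="ct")  # allows a deterministic trend
print(f"ADF statistic = {stat:.2f}, p-value = {pvalue:.3f}")
```

Rejecting the unit root in favor of trend-stationary relative emissions is the evidence of stochastic convergence referred to above.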
The majority of the empirical literature analyzes carbon dioxide emissions and varies in terms of both the convergence tests deployed and the results. While results supportive of emissions convergence for large global country coverage are limited, empirical studies that focus on country groupings defined by income classification, geographic region, or institutional structure (e.g., EU, OECD) are more likely to provide support for emissions convergence. The vast majority of studies have relied on tests of stochastic convergence, with tests of σ-convergence and analyses of the distributional dynamics of emissions used less often. With respect to tests of stochastic convergence, an alternative testing procedure that accounts for structural breaks and cross-correlations simultaneously is presented. Using data for OECD countries, the results based on the inclusion of both structural breaks and cross-correlations through a factor structure provide less support for stochastic convergence when compared to unit root tests that include only structural breaks.
Future studies should pay more attention to other air pollutants, including greenhouse gas emissions and their components, expand the range of geographical regions analyzed, and undertake more robust analysis of the various types of convergence tests to render a more comprehensive view of convergence behavior. The examination of convergence through the use of eco-efficiency indicators that capture both the environmental and economic effects of production may be more fruitful in contributing to the debate on mitigation strategies and allocation mechanisms.
Article
Bayesian Statistical Economic Evaluation Methods for Health Technology Assessment
Andrea Gabrio, Gianluca Baio, and Andrea Manca
The evidence produced by healthcare economic evaluation studies is a key component of any Health Technology Assessment (HTA) process designed to inform resource allocation decisions in a budget-limited context. To improve the quality (and harmonize the generation process) of such evidence, many HTA agencies have established methodological guidelines describing the normative framework inspiring their decision-making process. The information requirements that economic evaluation analyses for HTA must satisfy typically involve the use of complex quantitative syntheses of multiple available datasets, the handling of mixtures of aggregate and patient-level information, and the use of sophisticated statistical models for the analysis of non-Normal data (e.g., time-to-event, quality-of-life, and cost data). Much of the recent methodological research in economic evaluation for healthcare has developed in response to these needs, in terms of sound statistical decision-theoretic foundations, and is increasingly being formulated within a Bayesian paradigm. The rationale for this preference lies in the fact that, by taking a probabilistic approach based on decision rules and available information, a Bayesian economic evaluation study can explicitly account for relevant sources of uncertainty in the decision process and produce information to identify an “optimal” course of action. Moreover, the Bayesian approach naturally allows the incorporation of an element of judgment or evidence from different sources (e.g., expert opinion or multiple studies) into the analysis. This is particularly important when, as often occurs in economic evaluation for HTA, the evidence base is sparse and requires some inevitable mathematical modeling to bridge the gaps in the available data. The availability of free and open-source software in the last two decades has greatly reduced the computational costs, facilitated the application of Bayesian methods, and has the potential to improve the work of modelers and regulators alike, thus advancing the field of economic evaluation of healthcare interventions. This article provides an overview of the areas where Bayesian methods have contributed to addressing the methodological needs that stem from the normative framework adopted by a number of HTA agencies.
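As one concrete example of the decision rules mentioned above (a standard formulation in this literature, stated here for illustration), treatment comparisons are often framed in terms of expected incremental net benefit for a willingness-to-pay threshold k:

```latex
\text{NB}_t = k\, e_t - c_t ,
\qquad
\text{EIB} = E\!\left[ \text{NB}_1 - \text{NB}_0 \right] = k\,\Delta_e - \Delta_c ,
```

where e_t and c_t denote the effectiveness and cost of option t; the new intervention is deemed cost-effective when EIB > 0, with the expectation taken over the posterior uncertainty about the incremental effectiveness and cost.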
Article
Bayesian Vector Autoregressions: Applications
Silvia Miranda-Agrippino and Giovanni Ricco
Bayesian vector autoregressions (BVARs) are standard multivariate autoregressive models routinely used in empirical macroeconomics and finance for structural analysis, forecasting, and scenario analysis in an ever-growing number of applications.
A preeminent field of application of BVARs is forecasting. BVARs with informative priors have often proved to be superior tools compared to standard frequentist/flat-prior VARs. In fact, VARs are highly parametrized autoregressive models, whose number of parameters grows with the square of the number of variables times the number of lags included. Prior information, in the form of prior distributions on the model parameters, helps in forming sharper posterior distributions, conditional on an observed sample. Hence, BVARs can be effective in reducing parameter uncertainty and improving forecast accuracy compared to standard frequentist/flat-prior VARs.
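To see the scale of the problem (a simple illustrative count that ignores deterministic terms beyond the intercept), a VAR with n variables and p lags has n(np + 1) conditional-mean coefficients:

```latex
k = n\,(np + 1), \qquad \text{e.g., } n = 20,\ p = 12 \;\Rightarrow\; k = 20 \times 241 = 4{,}820 .
```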
This feature in particular has favored the use of Bayesian techniques to address “big data” problems, in what is arguably one of the most active frontiers in the BVAR literature. Large-information BVARs have in fact proven to be valuable tools to handle empirical analysis in data-rich environments.
BVARs are also routinely employed to produce conditional forecasts and scenario analysis. Of particular interest for policy institutions, these applications permit evaluating the “counterfactual” time evolution of the variables of interest conditional on a pre-determined path for some other variables, such as the path of interest rates over a certain horizon.
The “structural interpretation” of estimated VARs as the data generating process of the observed data requires the adoption of strict “identifying restrictions.” From a Bayesian perspective, such restrictions can be seen as dogmatic prior beliefs about some regions of the parameter space that determine the contemporaneous interactions among variables and for which the data are uninformative. More generally, Bayesian techniques offer a framework for structural analysis through priors that incorporate uncertainty about the identifying assumptions themselves.
Article
Bayesian Vector Autoregressions: Estimation
Silvia Miranda-Agrippino and Giovanni Ricco
Vector autoregressions (VARs) are linear multivariate time-series models able to capture the joint dynamics of multiple time series. Bayesian inference treats the VAR parameters as random variables, and it provides a framework to estimate “posterior” probability distribution of the location of the model parameters by combining information provided by a sample of observed data and prior information derived from a variety of sources, such as other macro or micro datasets, theoretical models, other macroeconomic phenomena, or introspection.
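Formally (a generic statement with illustrative notation), the posterior for the VAR parameters θ combines the likelihood of the observed sample y with the prior:

```latex
p(\theta \mid y) \;\propto\; p(y \mid \theta)\, p(\theta).
```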
In empirical work in economics and finance, informative prior probability distributions are often adopted. These are intended to summarize stylized representations of the data generating process. For example, “Minnesota” priors, one of the most commonly adopted macroeconomic priors for the VAR coefficients, express the belief that an independent random-walk model for each variable in the system is a reasonable “center” for beliefs about its time-series behavior. Other commonly adopted priors, the “single-unit-root” and the “sum-of-coefficients” priors, are used to enforce beliefs about relations among the VAR coefficients, such as the existence of co-integrating relationships among variables or of independent unit roots.
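One common parameterization of the Minnesota prior (stated here for illustration; exact scalings vary across implementations) centers the coefficient on the first own lag at one and all other coefficients at zero, with prior variances that shrink with the lag length l and are rescaled across variables by the ratio of residual variances:

```latex
E\!\left[ (A_l)_{ij} \right] =
\begin{cases} 1 & i = j,\ l = 1 \\ 0 & \text{otherwise,} \end{cases}
\qquad
\operatorname{Var}\!\left[ (A_l)_{ij} \right] =
\begin{cases}
\lambda^{2}/l^{2} & i = j \\
\left(\lambda^{2}/l^{2}\right) \sigma_i^{2}/\sigma_j^{2} & i \neq j ,
\end{cases}
```

where (A_l)_{ij} is the coefficient on lag l of variable j in equation i and λ governs the overall tightness of the prior.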
Priors for macroeconomic variables are often adopted as “conjugate prior distributions”—that is, distributions that yield a posterior distribution in the same family as the prior p.d.f.—in the form of Normal-Inverse-Wishart distributions, which are conjugate priors for the likelihood of a VAR with normally distributed disturbances. Conjugate priors allow direct sampling from the posterior distribution and fast estimation. When this is not possible, numerical techniques such as Gibbs and Metropolis-Hastings sampling algorithms are adopted.
Bayesian techniques allow for the estimation of an ever-expanding class of sophisticated autoregressive models that includes conventional fixed-parameter VAR models; large VARs incorporating hundreds of variables; panel VARs, which permit analyzing the joint dynamics of multiple time series of heterogeneous and interacting units; and VAR models that relax the assumption of fixed coefficients, such as time-varying parameter, threshold, and Markov-switching VARs.
Article
Behavioral and Social Corporate Finance
Henrik Cronqvist and Désirée-Jessica Pély
Corporate finance is about understanding the determinants and consequences of the investment and financing policies of corporations. In a standard neoclassical profit maximization framework, rational agents, that is, managers, make corporate finance decisions on behalf of rational principals, that is, shareholders. Over the past two decades, there has been a rapidly growing interest in augmenting standard finance frameworks with novel insights from cognitive psychology, and more recently, social psychology and sociology. This emerging subfield in finance research has been dubbed behavioral corporate finance, which differentiates between rational and behavioral agents and principals.
The presence of behavioral shareholders, that is, principals, may lead to market timing and catering behavior by rational managers. Such managers will opportunistically time the market and exploit mispricing by investing capital, issuing securities, or borrowing debt when costs of capital are low, and by shunning equity, divesting assets, repurchasing securities, and paying back debt when costs of capital are high. Rational managers will also incite mispricing, for example, by catering to non-standard preferences of shareholders through earnings management or by transitioning their firms into an in-fashion category to boost the stock’s price.
The interaction of behavioral managers, that is, agents, with rational shareholders can also lead to distortions in corporate decision making. For example, managers may perceive fundamental values differently and systematically diverge from optimal decisions. Several personal traits, for example, overconfidence or narcissism, and environmental factors, for example, fatal natural disasters, shape behavioral managers’ preferences and beliefs, short or long term. These factors may bias the value perception by managers and thus lead to inferior decision making.
An extension of behavioral corporate finance is social corporate finance, where agents and principals do not make decisions in a vacuum but rather are embedded in a dynamic social environment. Since managers and shareholders take a social position within and across markets, social psychology and sociology can be useful to understand how social traits, states, and activities shape corporate decision making if an individual’s psychology is not directly observable.