Capital structure theories offer a framework to understand how firms determine their mix of debt and equity financing. These theories, such as the trade-off theory, pecking order theory, market timing theory, agency theory, and theories of corporate control and input/output market interactions, provide insights into the roles of internal company characteristics and external economic conditions in corporate financing decisions. They explain firms’ preferences for and access to different financing sources based on factors like tax benefits, bankruptcy costs, agency costs, information asymmetry costs, and market conditions. Internationally, these theories take on additional dimensions due to differences in tax regimes, legal and institutional environments, and market structures. For example, the trade-off theory, which balances the tax advantages of debt against the costs associated with financial distress, varies significantly across countries because of differing bankruptcy and tax laws. Similarly, the pecking order theory, which suggests firms prefer internal financing over external debt, and external debt over equity, is influenced by the development of financial markets and the level of information sharing in different countries. The market timing theory posits that firms capitalize on market conditions by timing their financing decisions based on market valuations of debt and equity, with its applicability differing internationally due to variations in economic cycles and investor sentiment across markets. Agency theory and theories of corporate control examine how conflicts among managers, shareholders, and debt holders shape financial strategies, with variations arising from different corporate governance structures and enforcement levels globally. The input/output market interactions theory asserts that firms determine their capital structure based on their market position, which can vary significantly due to differing international market demands and competitive landscapes. Empirical research provides insights into how these diverse factors play out across different legal, regulatory, economic, and cultural environments. International studies have shown that leverage determinants like corporate and personal tax rates, corporate governance and ownership structure, market conditions, and institutional frameworks significantly impact capital structure decisions globally. Cultural differences also play a crucial role in shaping financial decisions, influencing managerial attitudes toward risk. These insights are critical for multinational corporations and policymakers, as they highlight the necessity of considering a broad array of factors, including tax considerations, market conditions, legal and social frameworks, corporate governance and ownership structure, investor behavior, and institutional and regulatory environments, when making decisions about capital structure in an international context. This comprehensive understanding helps create conducive environments for effective corporate financing choices on a global scale.
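As a purely illustrative aside (textbook notation, not taken from the article), the trade-off logic described above is often summarized by a valuation relation in which firm value rises with the debt tax shield but falls with expected distress costs:

```latex
% Stylized static trade-off relation: the value of a levered firm V_L equals the
% unlevered value V_U plus the tax shield on debt D (corporate tax rate tau_C),
% less the present value of expected financial distress costs.
V_L(D) \;=\; V_U \;+\; \tau_C D \;-\; \mathrm{PV}\big[\text{expected distress costs}(D)\big],
\qquad D^{*} = \arg\max_{D} V_L(D).
```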
Article
Corporate Leverage: Insights From International Data
Özde Öztekin
Article
Estimation Error in Optimal Portfolio Allocation Problems
Jose Olmo
Markowitz showed that an investor who cares only about the mean and variance of portfolio returns should hold a portfolio on the efficient frontier. The application of this investment strategy proceeds in two steps. First, the statistical moments of asset returns are estimated from historical time series, and second, the mean-variance portfolio selection problem is solved separately, as if the estimates were the true parameters. The literature on portfolio decisions acknowledges the difficulty in estimating means and covariances in many instances. This is particularly the case in high-dimensional settings. Merton notes that it is more difficult to estimate means than covariances and that errors in estimates of means have a larger impact on portfolio weights than errors in covariance estimates. Recent developments in high-dimensional settings have stressed the importance of correcting the estimation error of traditional sample covariance estimators for portfolio allocation. The literature has proposed shrinkage estimators of the sample covariance matrix and regularization methods founded on the principle of sparsity. Both methodologies are nested in a more general framework that constructs optimal portfolios under constraints on different norms of the portfolio weights, including short-sale restrictions. On the one hand, shrinkage methods use a target covariance matrix and trade off bias and variance between the standard sample covariance matrix and the target. More prominence has been given to low-dimensional factor models that incorporate theoretical insights from asset pricing models. In these cases, one has to trade off estimation risk for model risk. On the other hand, the literature on regularization of the sample covariance matrix uses different penalty functions for reducing the number of parameters to be estimated. Recent methods extend the idea of regularization to a conditional setting based on factor models, in which the number of factors can increase with the number of assets, and apply regularization methods to the residual covariance matrix.
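As a minimal sketch of the shrinkage route (assuming numpy and scikit-learn are available; the returns here are simulated for illustration, not data from the article), the following compares global minimum-variance weights computed from the raw sample covariance with those from a Ledoit-Wolf shrinkage estimate:

```python
import numpy as np
from sklearn.covariance import LedoitWolf

def min_variance_weights(cov):
    """Global minimum-variance weights: w = inv(cov) 1 / (1' inv(cov) 1)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

# Simulated returns: T observations on N assets, with N large relative to T,
# the setting in which estimation error in the sample covariance matters most.
rng = np.random.default_rng(0)
T, N = 120, 60
returns = rng.normal(scale=0.05, size=(T, N))

sample_cov = np.cov(returns, rowvar=False)           # standard sample estimator
shrunk_cov = LedoitWolf().fit(returns).covariance_   # shrinkage toward a target

w_sample = min_variance_weights(sample_cov)
w_shrunk = min_variance_weights(shrunk_cov)
# Shrinkage typically yields less extreme, more stable portfolio weights.
print(np.abs(w_sample).max(), np.abs(w_shrunk).max())
```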
Article
The Evolution of Forecast Density Combinations in Economics
Knut Are Aastveit, James Mitchell, Francesco Ravazzolo, and Herman K. van Dijk
Increasingly, professional forecasters and academic researchers in economics present model-based and subjective or judgment-based forecasts that are accompanied by some measure of uncertainty. In its most complete form this measure is a probability density function for future values of the variable or variables of interest. At the same time, combinations of forecast densities are being used in order to integrate information coming from multiple sources such as experts, models, and large micro-data sets. Given the increased relevance of forecast density combinations, this article explores their genesis and evolution both inside and outside economics. A fundamental density combination equation is specified, which shows that various frequentist as well as Bayesian approaches give different specific contents to this density. In its simplest case, it is a restricted finite mixture, giving fixed equal weights to the various individual densities. The specification of the fundamental density combination equation has been made more flexible in recent literature. It has evolved from using simple average weights to optimized weights to “richer” procedures that allow for time variation, learning features, and model incompleteness. The recent history and evolution of forecast density combination methods, together with their potential and benefits, are illustrated in the policymaking environment of central banks.
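In symbols (a schematic rendering consistent with the description above, not a quotation from the article), the fundamental density combination equation pools N individual predictive densities into one combined density:

```latex
% Combined predictive density for y_{T+1} given information set I_T:
% a finite mixture of N individual densities with weights w_{i,T}.
p_c(y_{T+1} \mid I_T) \;=\; \sum_{i=1}^{N} w_{i,T}\, p_i(y_{T+1} \mid I_T),
\qquad w_{i,T} \ge 0, \quad \sum_{i=1}^{N} w_{i,T} = 1 .
% The simplest restricted case fixes equal weights: w_{i,T} = 1/N.
```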
Article
Long Memory Models
Peter Robinson
Long memory models are statistical models that describe strong correlation or dependence across time series data. This kind of phenomenon is often referred to as “long memory” or “long-range dependence.” It refers to persisting correlation between distant observations in a time series. For scalar time series observed at equal intervals of time that are covariance stationary, so that the mean, variance, and autocovariances (between observations separated by a lag j) do not vary over time, it typically implies that the autocovariances decay so slowly, as j increases, as not to be absolutely summable. However, it can also refer to certain nonstationary time series, including ones with an autoregressive unit root, that exhibit even stronger correlation at long lags. Evidence of long memory has often been found in economic and financial time series, where the noted extension to possible nonstationarity can cover many macroeconomic time series, as well as in such fields as astronomy, agriculture, geophysics, and chemistry.
As long memory is now a technically well developed topic, formal definitions are needed. But by way of partial motivation, long memory models can be thought of as complementary to the very well known and widely applied stationary and invertible autoregressive and moving average (ARMA) models, whose autocovariances are not only summable but decay exponentially fast as a function of lag j. Such models are often referred to as “short memory” models, because there is negligible correlation across distant time intervals. These models are often combined with the most basic long memory ones, however, because together they offer the ability to describe both short and long memory features in many time series.
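A standard way to formalize the contrast (textbook notation, not drawn verbatim from the article): with autocovariances \gamma(j), long memory is typically characterized by hyperbolic decay that is not absolutely summable, whereas stationary, invertible ARMA autocovariances decay exponentially:

```latex
% Long memory: hyperbolic decay with memory parameter 0 < d < 1/2,
% so the autocovariances are not absolutely summable.
\gamma(j) \sim c\, j^{2d-1} \ \text{ as } j \to \infty,
\qquad \sum_{j=0}^{\infty} |\gamma(j)| = \infty .
% Short memory (stationary, invertible ARMA): exponential decay, summable.
|\gamma(j)| \le C \rho^{j} \ \text{ for some } 0 < \rho < 1,
\qquad \sum_{j=0}^{\infty} |\gamma(j)| < \infty .
```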
Article
Methodology of Macroeconometrics
Aris Spanos
The current discontent with the dominant macroeconomic theory paradigm, known as Dynamic Stochastic General Equilibrium (DSGE) models, calls for an appraisal of the methods and strategies employed in studying and modeling macroeconomic phenomena using aggregate time series data. The appraisal pertains to the effectiveness of these methods and strategies in accomplishing the primary objective of empirical modeling: to learn from data about phenomena of interest. The co-occurring developments in macroeconomics and econometrics since the 1930s provide the backdrop for the appraisal, with the Keynes vs. Tinbergen controversy at center stage. The overall appraisal is that the DSGE paradigm gives rise to estimated structural models that are both statistically and substantively misspecified, yielding untrustworthy evidence that contributes very little, if anything, to real learning from data about macroeconomic phenomena. A primary contributor to the untrustworthiness of evidence is the traditional econometric perspective of viewing empirical modeling as curve-fitting (structural models), guided by impromptu error term assumptions, and evaluated on goodness-of-fit grounds. Regrettably, excellent fit is neither necessary nor sufficient for the reliability of inference and the trustworthiness of the ensuing evidence. Recommendations on how to improve the trustworthiness of empirical evidence revolve around a broader model-based (non-curve-fitting) modeling framework that attributes cardinal roles to both theory and data without undermining the credibility of either source of information. Two crucial distinctions hold the key to securing the trustworthiness of evidence. The first distinguishes between modeling (specification, misspecification testing, and respecification) and inference, and the second between a substantive (structural) model and a statistical model (the probabilistic assumptions imposed on the particular data). This enables one to establish statistical adequacy (the validity of these assumptions) before relating it to the structural model and posing questions of interest to the data. The greatest enemy of learning from data about macroeconomic phenomena is not the absence of an alternative and more coherent empirical modeling framework, but the illusion that foisting highly formal structural models on the data can give rise to such learning just because their construction and curve-fitting rely on seemingly sophisticated tools. Regrettably, applying sophisticated tools to a statistically and substantively misspecified DSGE model does nothing to restore the trustworthiness of the evidence stemming from it.
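As a small illustration of the modeling sequence described above (a hedged sketch assuming statsmodels is installed, with a simple AR(2) standing in as the statistical model; neither the data nor the particular tests are from the article), one can probe the probabilistic assumptions of the statistical model before drawing any substantive inferences:

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg
from statsmodels.stats.diagnostic import acorr_ljungbox, het_arch
from statsmodels.stats.stattools import jarque_bera

# Simulated stand-in for an aggregate time series (illustration only).
rng = np.random.default_rng(1)
y = np.cumsum(rng.normal(size=400)) * 0.1 + rng.normal(size=400)

# Specification: a candidate statistical model (here an AR(2) with trend).
res = AutoReg(y, lags=2, trend="ct").fit()

# Misspecification testing: check the probabilistic assumptions on the errors.
lb = acorr_ljungbox(res.resid, lags=[10], return_df=True)  # no residual autocorrelation
arch_stat, arch_pval, _, _ = het_arch(res.resid)           # constant conditional variance
jb_stat, jb_pval, skew, kurt = jarque_bera(res.resid)      # normality

print(lb["lb_pvalue"].iloc[0], arch_pval, jb_pval)
# Only if these assumptions are not rejected (statistical adequacy) would one
# proceed to relate the statistical model to a substantive (structural) model.
```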
Article
Predictive Regressions
Jesús Gonzalo and Jean-Yves Pitarakis
Predictive regressions are a widely used econometric environment for assessing the predictability of economic and financial variables using past values of one or more predictors. The applications considered by practitioners often involve predictors with highly persistent, smoothly varying dynamics, in contrast to the much noisier nature of the variable being predicted. This imbalance tends to affect the accuracy of the estimates of the model parameters and the validity of inferences about them when one uses standard methods that do not explicitly recognize this and related complications. A growing literature has ensued, aimed at introducing novel techniques specifically designed to produce accurate inferences in such environments. The frequent use of these predictive regressions in applied work has also led practitioners to question the validity of viewing predictability within a linear setting that ignores the possibility that predictability may occasionally be switched off. This in turn has generated a new stream of research aimed at introducing regime-specific behavior within predictive regressions in order to explicitly capture phenomena such as episodic predictability.
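In its canonical form (standard notation in this literature, not copied from the article), a predictive regression pairs a noisy dependent variable with a highly persistent predictor, and one way to capture regime-specific behavior is to let the slope differ across episodes indexed by a threshold variable:

```latex
% Baseline predictive regression with a persistent predictor x_t:
y_{t+1} = \alpha + \beta x_t + u_{t+1}, \qquad
x_{t+1} = \mu + \rho x_t + v_{t+1}, \quad \rho \approx 1, \quad
\operatorname{corr}(u_{t+1}, v_{t+1}) \neq 0 .
% Episodic (regime-specific) predictability via a threshold variable q_t:
y_{t+1} = \alpha + \beta_1 x_t \mathbf{1}\{q_t \le \gamma\}
        + \beta_2 x_t \mathbf{1}\{q_t > \gamma\} + u_{t+1} .
```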
Article
Publication Bias in Asset Pricing Research
Andrew Y. Chen and Tom Zimmermann
Researchers are more likely to share notable findings. As a result, published findings tend to overstate the magnitude of real-world phenomena. This bias is a natural concern for asset pricing research, which has found hundreds of return predictors and little consensus on their origins.
Empirical evidence on publication bias comes from large-scale metastudies. Metastudies of cross-sectional return predictability have settled on four stylized facts that demonstrate publication bias is not a dominant factor: (a) almost all findings can be replicated, (b) predictability persists out-of-sample, (c) empirical t-statistics are much larger than 2.0, and (d) predictors are weakly correlated. Each of these facts has been demonstrated in at least three metastudies.
Empirical Bayes statistics turn these facts into publication bias corrections. Estimates from three metastudies find that the average correction (shrinkage) accounts for only 10%–15% of in-sample mean returns and that the risk of inference going in the wrong direction (the false discovery rate) is less than 10%.
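To make the shrinkage idea concrete (a stylized normal-normal empirical Bayes sketch on simulated numbers, assuming numpy; it is not the estimator or the data used in the metastudies), observed in-sample mean returns are pulled toward the cross-sectional average in proportion to their sampling noise:

```python
import numpy as np

# Simulated cross-section of predictors: true mean returns plus sampling noise.
rng = np.random.default_rng(2)
n_predictors = 300
true_mu = rng.normal(loc=0.5, scale=0.3, size=n_predictors)  # hypothetical, % per month
se = np.full(n_predictors, 0.2)                              # standard errors
observed = true_mu + rng.normal(scale=se)

# Normal-normal empirical Bayes: estimate the cross-predictor variance of the
# true means by the method of moments, then shrink each observed mean toward
# the cross-sectional average in proportion to its noise share.
grand_mean = observed.mean()
tau2 = max(observed.var() - np.mean(se**2), 0.0)   # variance of true means
shrink = se**2 / (se**2 + tau2)                    # shrinkage factor per predictor
posterior = grand_mean + (1 - shrink) * (observed - grand_mean)

print("average shrinkage toward the prior mean:", shrink.mean())
```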
Metastudies also find that t-statistic hurdles exceed 3.0 in multiple testing algorithms and that returns are 30%–50% weaker in alternative portfolio tests. These facts are easily misinterpreted as evidence of publication bias. Other misinterpretations include the conflating of phrases such as “mostly false findings” with “many insignificant findings,” “data snooping” with “liquidity effects,” and “failed replications” with “insignificant ad-hoc trading strategies.”
Cross-sectional predictability may not be representative of other fields. Metastudies of real-time equity premium prediction imply a much larger effect of publication bias, although the evidence is not nearly as abundant as it is in the cross section. Measuring publication bias in areas other than cross-sectional predictability remains an important area for future research.
Article
Reduced Rank Regression Models in Economics and Finance
Gianluca Cubadda and Alain Hecq
Reduced rank regression (RRR) has been extensively employed for modelling economic and financial time series. The main goals of RRR are to specify and estimate models that are capable of reproducing the presence of common dynamics among variables, such as the serial correlation common feature and the multivariate autoregressive index models. Although cointegration analysis is likely the most prominent example of the use of RRR in econometrics, a large body of research is aimed at detecting and modelling co-movements in time series that are stationary or that have been stationarized after proper transformations. The motivations for the use of RRR in time series econometrics include dimension reduction, which simplifies complex dynamics and thus makes interpretation easier, as well as the pursuit of efficiency gains in both estimation and prediction. Via the final equation representation, RRR also provides the nexus between multivariate time series and parsimonious marginal ARIMA (autoregressive integrated moving average) models. RRR’s drawback, which it shares with all dimension reduction techniques, is that the underlying restrictions may or may not be present in the data.
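Schematically (standard notation, not lifted from the article), RRR restricts the coefficient matrix of a multivariate regression to have reduced rank, with cointegration in a vector error-correction model as the leading example:

```latex
% Reduced rank regression: an n x m coefficient matrix C of rank r < min(n, m)
% factors into an n x r matrix A and an m x r matrix B.
Y_t = C X_t + \varepsilon_t, \qquad \operatorname{rank}(C) = r, \qquad C = A B' .
% Leading example: cointegration, where the long-run impact matrix has reduced
% rank, with alpha the loadings and beta the cointegrating vectors.
\Delta Y_t = \alpha \beta' Y_{t-1} + \sum_{i=1}^{p-1} \Gamma_i \Delta Y_{t-i} + \varepsilon_t .
```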
Article
Score-Driven Models: Methods and Applications
Mariia Artemova, Francisco Blasques, Janneke van Brummelen, and Siem Jan Koopman
The flexibility, generality, and feasibility of score-driven models have contributed much to their impact in both research and policy. Score-driven models provide a unified framework for modeling the time-varying features in parametric models for time series.
The score of the predictive likelihood function is used as the driving mechanism for updating the time-varying parameters. This leads to a flexible, general, and intuitive way of modeling the dynamic features of the time series while estimation and inference remain relatively simple. These properties remain valid when models rely on non-Gaussian densities and nonlinear dynamic structures. The class of score-driven models has become even more appealing as developments in theory and methodology have progressed rapidly. Furthermore, new formulations of empirical dynamic models in this class have shown their relevance in economics and finance. In the context of macroeconomic studies, the key examples are nonlinear autoregressive, dynamic factor, dynamic spatial, and Markov-switching models. In the context of finance studies, the major examples are models for integer-valued time series, multivariate scale models, and dynamic copula models. In finance applications, score-driven models are especially important because they provide updating mechanisms for time-varying parameters that limit the effect of the influential observations and outliers often present in financial time series.
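As a minimal sketch of the updating mechanism (assuming numpy; the Student-t volatility model below is one common member of the class, and the parameter values are placeholders rather than estimates), the time-varying log-variance is driven by the score of the predictive density, which is bounded in the observation and therefore dampens the influence of outliers:

```python
import numpy as np

def t_score(y, f, nu):
    """Score of the Student-t log-density with respect to f = log(sigma^2).
    The score is bounded in y, so extreme observations have limited influence."""
    return 0.5 * ((nu + 1.0) * y**2 / (nu * np.exp(f) + y**2) - 1.0)

def gas_volatility_filter(y, omega, alpha, beta, nu):
    """Score-driven update for the log-variance:
    f_{t+1} = omega + beta * f_t + alpha * s_t."""
    T = len(y)
    f = np.empty(T)
    f[0] = np.log(np.var(y))              # initialize at the unconditional level
    for t in range(T - 1):
        s_t = t_score(y[t], f[t], nu)
        f[t + 1] = omega + beta * f[t] + alpha * s_t
    return np.exp(f)                      # filtered variance path

# Illustration on simulated heavy-tailed returns; the parameter values are
# placeholders, not estimates (in practice they are obtained by maximizing
# the predictive log-likelihood).
rng = np.random.default_rng(0)
returns = rng.standard_t(df=5, size=1000)
sigma2 = gas_volatility_filter(returns, omega=0.01, alpha=0.10, beta=0.98, nu=5.0)
```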