41-51 of 51 Results for: Macroeconomics and Monetary Economics

Article

Q-Factors and Investment CAPM  

Lu Zhang

The Hou–Xue–Zhang q-factor model says that the expected return of an asset in excess of the risk-free rate is described by its sensitivities to the market factor, a size factor, an investment factor, and a return on equity (ROE) factor. Empirically, the q-factor model shows strong explanatory power and largely summarizes the cross-section of average stock returns. Most importantly, it fully subsumes the Fama–French 6-factor model in head-to-head spanning tests. The q-factor model is an empirical implementation of the investment-based capital asset pricing model (the investment CAPM). The basic philosophy is to price risky assets from the perspective of their suppliers (firms), as opposed to their buyers (investors). Mathematically, the investment CAPM is a restatement of the net present value (NPV) rule in corporate finance. Intuitively, high investment relative to low expected profitability must imply low costs of capital, and low investment relative to high expected profitability must imply high costs of capital. In a multiperiod framework, if investment is high next period, the present value of cash flows from next period onward must be high. Consisting mostly of this next-period present value, the benefits to investment this period must also be high. As such, high investment next period relative to current investment (high expected investment growth) must imply high costs of capital (to keep current investment low). As a disruptive innovation, the investment CAPM has broad-ranging implications for academic finance and asset management practice. First, the consumption CAPM, of which the classic Sharpe–Lintner CAPM is a special case, is conceptually incomplete. The crux is that it blindly focuses on the demand for risky assets, while abstracting from the supply altogether. Alas, anomalies are primarily relations between firm characteristics and expected returns. By focusing on the supply, the investment CAPM is the missing piece of equilibrium asset pricing. Second, the investment CAPM retains efficient markets, with cross-sectionally varying expected returns, depending on firms’ investment, profitability, and expected growth. As such, capital markets follow standard economic principles, in sharp contrast to the teachings of behavioral finance. Finally, the investment CAPM validates Graham and Dodd’s security analysis on equilibrium grounds, within efficient markets.
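
To make the first sentence concrete, here is a sketch of the pricing equation in illustrative notation (the factor symbols and loadings below are assumptions, not reproduced from the article): the expected excess return of an asset is spanned by the premia of the four factors.

```latex
% Illustrative q-factor pricing equation: the expected excess return of
% asset i loads on market (Mkt), size (Me), investment (I/A), and
% profitability (Roe) factor premia.
E[R^{i} - R^{f}] = \beta^{i}_{\mathrm{Mkt}}\, E[\mathrm{Mkt}]
                 + \beta^{i}_{\mathrm{Me}}\, E[R_{\mathrm{Me}}]
                 + \beta^{i}_{\mathrm{I/A}}\, E[R_{\mathrm{I/A}}]
                 + \beta^{i}_{\mathrm{Roe}}\, E[R_{\mathrm{Roe}}]
```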

Article

Reduced Rank Regression Models in Economics and Finance  

Gianluca Cubadda and Alain Hecq

Reduced rank regression (RRR) has been extensively employed for modelling economic and financial time series. The main goals of RRR are to specify and estimate models that are capable of reproducing the presence of common dynamics among variables, such as the serial correlation common feature and the multivariate autoregressive index models. Although cointegration analysis is likely the most prominent example of the use of RRR in econometrics, a large body of research is aimed at detecting and modelling co-movements in time series that are stationary or that have been stationarized after proper transformations. The motivations for the use of RRR in time series econometrics include dimension reduction, which simplifies complex dynamics and thus makes interpretation easier, as well as the pursuit of efficiency gains in both estimation and prediction. Via the final equation representation, RRR also provides the nexus between multivariate time series and parsimonious marginal ARIMA (autoregressive integrated moving average) models. RRR’s drawback, which is common to all dimension reduction techniques, is that the underlying restrictions may or may not be present in the data.
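
As a concrete illustration, a minimal sketch of the simplest identity-weighted RRR estimator, where the rank-r coefficient matrix is obtained by projecting the OLS fit onto its leading singular directions (all names and the simulated data are illustrative):

```python
# Minimal reduced rank regression sketch: estimate C of rank r in
# Y = X C + E by truncating the SVD of the OLS fitted values.
import numpy as np

def rrr(Y, X, r):
    """Rank-r least-squares coefficient matrix for Y = X C + E."""
    C_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)   # unrestricted OLS
    fitted = X @ C_ols
    _, _, Vt = np.linalg.svd(fitted, full_matrices=False)
    V_r = Vt[:r].T                                  # top-r right singular vectors
    return (C_ols @ V_r) @ V_r.T                    # rank <= r by construction

# Simulated example: 200 observations, 5 predictors, 4 responses, true rank 2.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
C_true = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))
Y = X @ C_true + 0.1 * rng.standard_normal((200, 4))
print(np.linalg.matrix_rank(rrr(Y, X, r=2)))        # 2
```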

Article

Religiosity and Development  

Jeanet Sinding Bentzen

Economics of religion is the application of economic methods to the study of the causes and consequences of religion. Ever since Max Weber set forth his theory of the Protestant ethic, social scientists have compared socioeconomic differences across Protestants and Catholics, Muslims and Christians, and more recently across different intensities of religiosity. Religiosity refers to an individual’s degree of religious attendance and strength of beliefs. Religiosity rises with a growing demand for religion resulting from adversity and insecurity, or with a surging supply of religion stemming, for instance, from increasing numbers of religious organizations. Religiosity has fallen in some Western countries since the mid-20th century, but has strengthened in several other societies around the world. Religion is a multidimensional concept, and religiosity has multiple impacts on socioeconomic outcomes, depending on the dimension observed. Religion covers public religious activities such as church attendance, which involves exposure to religious doctrines and to fellow believers, potentially strengthening social capital and trust among believers. Religious doctrines teach belief in supernatural beings, but also social views on hard work, refraining from deviant activities, and adherence to traditional norms. These norms and social views are sometimes orthogonal to the general tendency of modernization, and religion may contribute to the rising polarization on social issues regarding abortion, LGBT rights, women, and immigration. These norms and social views are also potentially in conflict with science and innovation, incentivizing some religious authorities to curb scientific progress. Further, religion encompasses private religious activities such as prayer and particular religious beliefs, which may provide comfort and buffering against stressful events. At the same time, rulers may exploit the existence of belief in higher powers for political purposes. Empirical research supports these predictions. Consequences of higher religiosity include more emphasis on traditional values such as traditional gender norms and attitudes against homosexuality, lower rates of technical education, restrictions on science and democracy, rising polarization and conflict, and lower average incomes. Positive consequences of religiosity include improved health, lower rates of depression and crime, increased happiness, higher prosociality among believers, and consumption and well-being levels that are less sensitive to shocks.

Article

The Role of Wage Formation in Empirical Macroeconometric Models  

Ragnar Nymoen

The specification of model equations for nominal wage setting has important implications for the properties of macroeconometric models and requires system thinking and multiple-equation modeling. The main model classes are the Phillips curve model (PCM), the wage–price equilibrium correction model (WP-ECM), and the New Keynesian Phillips curve model (NKPCM). The PCM was included in the macroeconometric models of the 1960s. The WP-ECM arrived in the late 1980s. The NKPCM is central in dynamic stochastic general equilibrium models (DSGEs). The three model classes can be interpreted as different specifications of the system of stochastic difference equations that define the supply side of a medium-term macroeconometric model. This calls for an appraisal of the different wage models, in particular in relation to the concept of the non-accelerating inflation rate of unemployment (NAIRU, or natural rate of unemployment), and of the methods and research strategies used. The construction of macroeconometric models used to be based on a combination of theoretical and practical skills in economic modeling. Wage formation was viewed as being forged between the forces of markets and national institutions. In the age of DSGE models, macroeconomics has become more of a theoretical discipline. Nevertheless, producers of DSGE models make use of hybrid forms if an initial theoretical specification fails to meet a benchmark for acceptable data fit. A common ground therefore exists between the NKPCM, WP-ECM, and PCM, and it is feasible to compare the model types empirically.
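
As a stylized sketch of the contrast between the first two model classes, in illustrative notation (the symbols and lag structure below are assumptions, not the article’s own specifications): the PCM relates wage growth to the rate of unemployment alone, whereas the WP-ECM adds a term correcting deviations of the wage level from its equilibrium path.

```latex
% PCM (stylized): wage growth responds to unemployment and price growth,
% with no feedback from the lagged wage level.
\Delta w_t = \beta_0 - \beta_1 u_t + \beta_2 \Delta p_t + \varepsilon_t
% WP-ECM (stylized): wage growth additionally corrects deviations of the
% real wage from its equilibrium path (z denotes productivity).
\Delta w_t = \beta_0 - \beta_1 u_t + \beta_2 \Delta p_t
           - \alpha\,(w - p - z)_{t-1} + \varepsilon_t
```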

Article

Shocks, Information, and Structural VARs  

Luca Gambetti

Structural vector autoregressions (SVARs) have become one of the most popular tools for measuring the effects of structural economic shocks. Several new techniques to “identify” economic shocks have been proposed in the literature over the last few decades. Identification hinges on the implicit assumption that economic shocks are retrievable from the data; in other words, that the data contain enough information to correctly estimate the shocks. SVAR models, however, are small-scale models that can handle only a small number of variables, a feature that can severely limit the amount of information the variables convey. Narrow information sets present problems for identification, but some theoretical results and empirical procedures can test whether the available information is sufficient to estimate economic shocks. Additionally, there are possible solutions to the problem of limited information, such as Factor Augmented VARs or dynamic rotations.
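
A minimal sketch of one such empirical check, under the assumption that sufficiency is assessed by testing whether factors extracted from a large data panel help predict the small VAR’s innovations (function and variable names are illustrative):

```python
# If lagged principal components of a large panel predict the VAR's
# residuals, the VAR's information set is likely too narrow to recover
# the structural shocks.
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.api import VAR

def sufficiency_check(var_data, big_panel, n_factors=3, lags=4):
    resid = VAR(var_data).fit(lags).resid            # (T - lags) x n residuals
    Z = (big_panel - big_panel.mean(0)) / big_panel.std(0)
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    F = Z @ Vt[:n_factors].T                         # T x n_factors scores
    F_lag = F[lags - 1:-1]                           # factors dated t - 1
    pvals = []
    for j in range(resid.shape[1]):
        ols = sm.OLS(resid[:, j], sm.add_constant(F_lag)).fit()
        pvals.append(ols.f_pvalue)   # H0: lagged factors are irrelevant
    return pvals  # small p-values signal insufficient information
```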

Article

Sparse Grids for Dynamic Economic Models  

Johannes Brumm, Christopher Krause, Andreas Schaab, and Simon Scheidegger

Solving dynamic economic models that capture salient real-world heterogeneity and nonlinearity requires the approximation of high-dimensional functions. As their dimensionality increases, compute time and storage requirements grow exponentially. Sparse grids alleviate this curse of dimensionality by substantially reducing the number of interpolation nodes, that is, the grid points needed to achieve a desired level of accuracy. The construction principle of sparse grids is to extend univariate interpolation formulae to the multivariate case by choosing linear combinations of tensor products in a way that reduces the number of grid points by orders of magnitude relative to a full tensor-product grid, without substantially increasing interpolation errors. The most popular versions of sparse grids used in economics are (dimension-adaptive) Smolyak sparse grids that use global polynomial basis functions, and (spatially adaptive) sparse grids with local basis functions. The former can economize on the number of interpolation nodes for sufficiently smooth functions, while the latter can also handle non-smooth functions with locally distinct behavior such as kinks. In economics, sparse grids are particularly useful for interpolating the policy and value functions of dynamic models with state spaces between two and several dozen dimensions, depending on the application. In discrete-time models, sparse grid interpolation can be embedded in standard time iteration or value function iteration algorithms. In continuous-time models, sparse grids can be embedded in finite-difference methods for solving partial differential equations like Hamilton-Jacobi-Bellman equations. In both cases, dimension adaptivity, as well as spatial adaptivity, can add a second layer of sparsity to the fundamental sparse-grid construction. Beyond these salient use cases in economics, sparse grids can also accelerate other computational tasks that arise in high-dimensional settings, including regression, classification, density estimation, quadrature, and uncertainty quantification.
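
A minimal sketch of the Smolyak construction principle, assuming nested Clenshaw-Curtis nodes and one common level convention (the implementation details are illustrative):

```python
# Build a Smolyak sparse grid as the union of small tensor-product grids
# whose one-dimensional levels sum to at most d + L - 1, then compare its
# size with the full tensor-product grid of the same 1D resolution.
import itertools
import numpy as np

def cc_nodes(level):
    """Nested Clenshaw-Curtis nodes on [-1, 1]: 1 node at level 1, else 2^(l-1)+1."""
    if level == 1:
        return np.array([0.0])
    n = 2 ** (level - 1) + 1
    return np.cos(np.pi * np.arange(n) / (n - 1))

def smolyak_grid(d, L):
    """Distinct nodes of the level-L Smolyak sparse grid in d dimensions."""
    points = set()
    for levels in itertools.product(range(1, L + 1), repeat=d):
        if sum(levels) <= d + L - 1:
            for pt in itertools.product(*(cc_nodes(l) for l in levels)):
                points.add(tuple(round(x, 12) for x in pt))  # dedupe nested nodes
    return points

for d in (2, 4, 6):
    sparse = len(smolyak_grid(d, L=4))
    full = (2 ** 3 + 1) ** d        # 9 nodes per dimension, tensorized
    print(d, sparse, full)          # node counts: sparse vs. full grid
```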

Article

Stochastic Volatility in Bayesian Vector Autoregressions  

Todd E. Clark and Elmar Mertens

Vector autoregressions with stochastic volatility (SV) are widely used in macroeconomic forecasting and structural inference. The SV component of the model conveniently allows for time variation in the variance-covariance matrix of the model’s forecast errors. In turn, that feature of the model generates time variation in predictive densities. The models are most commonly estimated with Bayesian methods, most typically Markov chain Monte Carlo methods, such as Gibbs sampling. Equation-by-equation methods developed since 2018 enable the estimation of models with large variable sets at much lower computational cost than the standard approach of estimating the model as a system of equations. The Bayesian framework also facilitates the accommodation of mixed frequency data, non-Gaussian error distributions, and nonparametric specifications. With advances made in the 21st century, researchers are also addressing some of the framework’s outstanding challenges, particularly the dependence of estimates on the ordering of variables in the model and reliable estimation of the marginal likelihood, which is the fundamental measure of model fit in Bayesian methods.
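
A minimal simulation sketch of the mechanism, with illustrative parameters: when the log variance of a forecast error follows a random walk, the implied predictive density widens and narrows over time.

```python
import numpy as np

rng = np.random.default_rng(1)
T, sigma_h = 500, 0.1
h = np.cumsum(sigma_h * rng.standard_normal(T))   # random-walk log variance
eps = np.exp(h / 2) * rng.standard_normal(T)      # forecast errors with SV
band = 1.645 * np.exp(h / 2)    # half-width of a 90% one-step predictive interval
print(band.min(), band.max())   # time variation in predictive densities
```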

Article

Stock-Flow Models of Market Frictions and Search  

Eric Smith

Stock-flow matching is a simple and elegant framework of dynamic trade in differentiated goods. Flows of entering traders match and exchange with the stocks of previously unsuccessful traders on the other side of the market. A buyer or seller who enters a market for a single, indivisible good such as a job or a home faces no informational impediments to trade: all traders are fully informed about the available trading options. However, each of the options in the stock on the other side of the market may or may not be suitable. If fortunate, the entering trader immediately finds a viable option in the stock of available opportunities and trade occurs straightaway. If unfortunate, none of the available opportunities suit the entrant, and this buyer or seller joins the stock of unfulfilled traders who must wait for a new, suitable partner to enter. Three striking empirical regularities emerge from this microstructure. First, as the stock of buyers matches not with the stock of sellers but with the flow of new sellers, the flow of new entrants becomes an important explanatory variable for aggregate trading rates. Second, traders’ exit rates from the market are initially high, but if traders fail to match quickly, their exit rates fall substantially. Third, these exit rates depend on different variables at different phases of an agent’s stay in the market. The probability that a new buyer will trade successfully depends only on the stock of sellers in the market. In contrast, the exit rate of an old buyer depends positively on the flow of new sellers, negatively on the stock of old buyers, and is independent of the stock of sellers. These three empirical relationships not only differ from those found in the familiar search literature but also conform to empirical evidence on unemployment outflows. Moreover, adopting the stock-flow approach enriches our understanding of output dynamics, employment flows, and aggregate economic performance. These trading mechanics generate endogenous price dispersion and price dynamics: prices depend on whether the buyer or the seller is the recent entrant, and on how many viable traders were waiting for the entrant, which varies over time. The stock-flow structure has provided insights about housing, temporary employment, and taxicab markets.
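
A minimal simulation sketch of these trading mechanics, with illustrative parameters: any buyer-seller pair is suitable with probability p; entrants inspect the whole opposite stock on arrival and otherwise wait for suitable new entrants.

```python
import numpy as np

rng = np.random.default_rng(2)
p, flow, T = 0.05, 10, 2000     # suitability probability, entry flow, periods

def first_suitable(stock):
    """Index of the first suitable partner in the stock, or None."""
    for k in range(len(stock)):
        if rng.random() < p:
            return k
    return None

stock_buyers, stock_sellers = [], []
buyers_entered = buyers_matched_on_entry = 0

for t in range(T):
    for _ in range(flow):                      # entering buyers
        buyers_entered += 1
        k = first_suitable(stock_sellers)
        if k is not None:
            stock_sellers.pop(k)               # trade: both sides exit
            buyers_matched_on_entry += 1
        else:
            stock_buyers.append(t)             # join the stock and wait
    for _ in range(flow):                      # entering sellers
        k = first_suitable(stock_buyers)
        if k is not None:
            stock_buyers.pop(k)                # an old buyer exits via the new flow
        else:
            stock_sellers.append(t)

print("buyers matching on entry:", buyers_matched_on_entry / buyers_entered)
print("waiting stocks (buyers, sellers):", len(stock_buyers), len(stock_sellers))
```

Because an entrant matches on arrival with probability 1 - (1 - p)^S, where S is the opposite stock, while a waiting trader matches only against the incoming flow, the simulation reproduces the dependence of exit rates on stocks for entrants and on flows for incumbents.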

Article

Structural Vector Autoregressive Models  

Luca Gambetti

Structural vector autoregressions (SVARs) represent a prominent class of time series models used for macroeconomic analysis. The model consists of a set of multivariate linear autoregressive equations characterizing the joint dynamics of economic variables. The residuals of these equations are combinations of the underlying structural economic shocks, which are assumed to be orthogonal to each other. Using a minimal set of restrictions, these relations can be estimated—the so-called shock identification—and the variables can be expressed as linear functions of current and past structural shocks. The coefficients of these equations, called impulse response functions, represent the dynamic response of the model variables to the shocks. Several ways of identifying structural shocks have been proposed in the literature: short-run restrictions, long-run restrictions, and sign restrictions, to mention a few. SVAR models have been extensively employed to study the transmission mechanisms of macroeconomic shocks and to test economic theories. Special attention has been paid to monetary and fiscal policy shocks, as well as to other nonpolicy shocks like technology and financial shocks. In recent years, many advances have been made both in terms of theory and empirical strategies. Several works have extended the standard model to incorporate new features like large information sets, nonlinearities, and time-varying coefficients. New strategies to identify structural shocks have been designed, and new methods for inference have been introduced.
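
A minimal sketch of identification via short-run (recursive) restrictions, assuming a Cholesky ordering on simulated data (the data-generating process and all names are illustrative):

```python
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(3)
# Simulate a stationary three-variable VAR(1) as a stand-in for real data.
A = np.array([[0.5, 0.1, 0.0],
              [0.0, 0.4, 0.1],
              [0.1, 0.0, 0.3]])
data = np.zeros((300, 3))
shocks = rng.standard_normal((300, 3))
for t in range(1, 300):
    data[t] = A @ data[t - 1] + shocks[t]

res = VAR(data).fit(2)
# The Cholesky factor of the residual covariance maps orthogonal
# structural shocks e_t into reduced-form residuals: u_t = P e_t.
P = np.linalg.cholesky(res.sigma_u)
# Structural impulse responses: Theta_h = Psi_h P, where Psi_h are the
# reduced-form moving-average coefficients.
theta = res.irf(20).irfs @ P
print(theta.shape)   # (21, 3, 3): horizon x variable x shock
```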

Article

Tariffs and the Macroeconomy  

Xiangtao Meng, Katheryn N. Russ, and Sanjay R. Singh

For hundreds of years, policymakers and academics have puzzled over how to add up the effects of trade and trade barriers on economic activity. The literature is vast. Trade theory generally focuses on the question of whether trade or trade barriers, like tariffs, make people and firms better off, using models of the real economy operating at full employment and with balanced trade. These models yield powerful fundamental intuition but are not well equipped to address issues such as capital accumulation, the role of exchange rate depreciation, monetary policy, intertemporal optimization by consumers, or current account deficits, which permeate policy debates over tariffs. The literature on open-economy macroeconomics provides additional tools to address some of these issues, but neither literature has yet definitively answered the question of what impact tariffs have on infant industries, current account deficits, unemployment, or inequality; these remain open empirical questions. Trade economists have only begun to understand how multiproduct retailers affect who ultimately pays tariffs, and they are still struggling to model unemployment in a tractable way that lends itself to fast or uniform application in policy analysis, while macro approaches overlook sectoral complexity. The field’s understanding of the importance of endogenous capital investment is growing, but it has not internalized the importance of the same intertemporal trade-offs between savings and consumption for assessing the distributional impacts of trade on households. Dispersion across assessments of the impacts of the U.S.–China trade war illustrates the frontiers that economists face in assessing the macroeconomic impacts of tariffs.

Article

Time Consistent Policies and Quasi-Hyperbolic Discounting  

Łukasz Balbus, Kevin Reffett, and Łukasz Woźny

In dynamic choice models, dynamic inconsistency of preferences is a situation in which a decision-maker’s preferences change over time. Optimal plans under such preferences are time inconsistent if the decision-maker has no incentive to follow, in the future, the previously chosen optimal plan. A typical example of dynamic inconsistency is present bias, a repeated preference for smaller present rewards over larger future rewards. The study of dynamic choice by decision-makers with dynamically inconsistent preferences has long been a focal point of work in behavioral economics, and both the experimental and empirical literatures point to the importance of various forms of present bias. The canonical model of dynamically inconsistent preferences exhibiting present bias is the quasi-hyperbolic discounting model: a dynamic choice model in which standard exponential discounting is modified by an impatience parameter that additionally discounts the immediately succeeding period. A central problem in the analytical study of decision-makers with dynamically inconsistent preferences is how to model their choices in sequential decision problems. One general answer is to characterize and compute (if they exist) constrained optimal plans that are optimal among the set of time consistent sequential plans, that is, the feasible plans that will actually be followed, and not reoptimized, by agents whose preferences change over time. These are called time consistent plans or policies (TCPs). Results on the existence, uniqueness, and characterization of stationary, or time-invariant, TCPs in a class of consumption-savings problems with quasi-hyperbolic discounting are presented, along with some discussion of how to compute TCPs in extensions of the model; the generalized Bellman equation operator approach plays a central role. This approach provides sufficient conditions for the existence of time consistent solutions and facilitates their computation. Importantly, the generalized Bellman approach can also be related to a common first-order approach in the literature known as the generalized Euler equation approach: by constructing sufficient conditions for continuously differentiable TCPs on the primitives of the model, sufficient conditions under which the generalized Euler equation approach is valid can be provided. There are other important facets of TCPs, including sufficient conditions for monotone comparative statics in interesting parameters of the decision environment, as well as extensions of the generalized Bellman approach that allow for unbounded returns and general certainty equivalents. In addition, the case of a multidimensional state space is considered, as well as a general self-generation method for characterizing nonstationary TCPs.
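
A minimal sketch of a generalized Bellman equation operator for a quasi-hyperbolic (beta-delta) cake-eating problem on a grid (the parameters, discretization, and convergence behavior are illustrative assumptions, not the article’s method):

```python
# The current self discounts the continuation value W of future selves
# by beta * delta; W itself is updated under the policy the current self
# actually chooses, with full discount delta.
import numpy as np

beta, delta, n = 0.7, 0.95, 200
grid = np.linspace(1e-3, 1.0, n)             # cake size x
X, Xp = np.meshgrid(grid, grid, indexing="ij")
C = np.maximum(X - Xp, 1e-12)                # consumption c = x - x'
feasible = (X - Xp) > 0
W = np.zeros(n)                              # continuation value of future selves

for _ in range(1000):
    # Current self: maximize u(c) + beta * delta * W(x') over feasible x'.
    payoff = np.where(feasible, np.log(C) + beta * delta * W[None, :], -np.inf)
    choice = payoff.argmax(axis=1)           # time-consistent policy x'(x)
    c = C[np.arange(n), choice]
    # Generalized Bellman update for the value of following this policy.
    W_new = np.log(c) + delta * W[choice]
    if np.max(np.abs(W_new - W)) < 1e-8:
        break
    W = W_new

print("consumption share at x = 1:", c[-1])  # equals 1 - delta when beta = 1
```

For log utility this sketch approximates the known stationary policy in which consumption is a constant fraction of the cake, a fraction that exceeds the exponential-discounting benchmark 1 - delta whenever beta < 1.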