1-11 of 11 Results
Keywords: business cycle

Financial Frictions in Macroeconomic Models  

Alfred Duncan and Charles Nolan

In recent decades, macroeconomic researchers have sought to incorporate financial intermediaries explicitly into business-cycle models. These modeling developments have helped us understand the role of the financial sector in transmitting policy and external shocks into macroeconomic dynamics. They have also helped us better understand the consequences of financial instability for the macroeconomy. Nevertheless, large gaps remain in our knowledge of the interactions between the financial sector and macroeconomic outcomes. In particular, the effects of financial stability and macroprudential policies are not well understood.


Measurement Error: A Primer for Macroeconomists  

Simon van Norden

Most applied researchers in macroeconomics who work with official macroeconomic statistics (such as those found in the National Accounts, the Balance of Payments, national government budgets, labor force statistics, etc.) treat the data as immutable rather than as subject to measurement error and revision. Some of this error may be caused by disagreement or confusion about what should be measured. Some may be due to the practical challenges of producing timely, accurate, and precise estimates. The economic importance of measurement error may be accentuated by simple arithmetic transformations of the data, or by more complex but still common transformations to remove seasonal or other fluctuations. As a result, measurement error is seemingly omnipresent in macroeconomics. Even the most widely used measures, such as Gross Domestic Product (GDP), are acknowledged to be poor measures of aggregate welfare, as they omit leisure and non-market production activity and fail to consider intertemporal issues related to the sustainability of economic activity. Yet even modest attempts to improve GDP estimates can generate considerable controversy in practice. Common statistical approaches to allowing for measurement error, including most factor models, rely on assumptions at odds with standard economic reasoning, which implies that measurement errors in published aggregate series should behave much like forecast errors. Fortunately, recent research has shown how multiple data releases may be combined in a flexible way to give improved estimates of the underlying quantities. Increasingly, the challenge for macroeconomists is to recognize the impact that measurement error may have on their analysis and to condition their policy advice on a realistic assessment of the quality of the available information.
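
The combination of data releases mentioned above can be illustrated with a toy example. The sketch below assumes a simple "noise" model, in which two releases of the same quantity carry independent measurement errors and are pooled by inverse-variance weighting; the variances and the use of plain precision weights are illustrative assumptions, not features of any official series or of the flexible methods the abstract refers to.

```python
import numpy as np

# Toy "noise" model of data revisions: two releases of the same true
# quantity, each observed with independent measurement error.
rng = np.random.default_rng(0)
truth = 2.0                        # hypothetical true growth rate (percent)
var1, var2 = 0.40, 0.10            # first release noisier than the revision

n = 10_000
release1 = truth + np.sqrt(var1) * rng.standard_normal(n)
release2 = truth + np.sqrt(var2) * rng.standard_normal(n)

# Inverse-variance (precision) weighting of the two releases
w1 = (1 / var1) / (1 / var1 + 1 / var2)
combined = w1 * release1 + (1 - w1) * release2

# The combined series has error variance 1/(1/var1 + 1/var2) = 0.08,
# smaller than either release on its own.
```

Under the alternative "news" view, in which revisions behave like forecast errors, earlier releases are already efficient forecasts and simple pooling of this kind is no longer optimal, which is precisely why the distinction between the two views matters.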


Econometric Methods for Business Cycle Dating  

Máximo Camacho Alonso and Lola Gadea

The reference cycle of an economy is determined by a sequence of unobservable business cycle turning points that partition calendar time into non-overlapping episodes of expansion and recession. Dating these turning points supports economic analysis and is useful for economic agents, whether policymakers, investors, or academics. In the pursuit of transparency and reproducibility, statistical frameworks that automatically date turning points from a set of coincident economic indicators have been the source of remarkable advances in this area. These methods can be classified along several dimensions. Depending on the assumptions made about the data-generating process, dating methods are either parametric or non-parametric. There are two main approaches to dealing with multivariate data sets: average-then-date and date-then-average. The former computes a reference series for the aggregate economy, usually by averaging the indicators across the cross-sectional dimension, and then dates the global turning points on the aggregate indicator using one of the business cycle dating models available in the literature. The latter dates the peaks and troughs in a set of coincident business cycle indicators separately and locates the reference cycle in those periods where the individual turning points cohere. The literature of the early 21st century has shown that future work on dating the reference cycle will have to confront several challenges. First, new tools have become available that, being increasingly sophisticated, may enlarge the existing academic–practitioner gap. Compiling the codes that implement the dating methods and facilitating their practical implementation may reduce this gap. Second, the pandemic shock led most industrialized countries to record in 2020 the largest fall, and the largest rebound, in national economic indicators since records began. In the presence of such influential observations, the outcomes of dating methods could misrepresent the actual reference cycle, especially in the case of parametric approaches. Exploring non-parametric approaches, big data sources, and the classification ability offered by machine learning methods could help improve the performance of dating analyses.
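
As a minimal illustration of the non-parametric dating idea, the sketch below flags a peak (trough) whenever an observation is the strict maximum (minimum) within a symmetric window, a much-simplified version of census-type (Bry–Boschan) rules; the window size and the series are illustrative choices, not a full dating algorithm with censoring and phase-length rules.

```python
import numpy as np

def turning_points(y, k=2):
    """Flag local peaks and troughs: y[t] is a peak if it is the strict
    maximum over the window [t-k, t+k] (trough: strict minimum).
    A toy version of census-style (Bry-Boschan) dating rules."""
    y = np.asarray(y, dtype=float)
    peaks, troughs = [], []
    for t in range(k, len(y) - k):
        window = y[t - k : t + k + 1]
        if y[t] == window.max() and (window < y[t]).sum() == 2 * k:
            peaks.append(t)
        if y[t] == window.min() and (window > y[t]).sum() == 2 * k:
            troughs.append(t)
    return peaks, troughs

# Illustrative series with one recession episode (peak at t=3, trough at t=6)
y = [100, 101, 103, 104, 102, 99, 97, 98, 100, 102]
peaks, troughs = turning_points(y, k=2)
```

In the date-then-average approach, a rule like this would be applied to each coincident indicator separately, with the reference cycle located where the individual turning points cohere.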


Business Cycles and Apprenticeships  

Samuel Muehlemann and Stefan Wolter

The economic reasons why firms engage in apprenticeship training are twofold. First, apprenticeship training is a potentially cost-effective strategy for filling a firm’s future vacancies, particularly if skilled labor on the external labor market is scarce. Second, apprentices can be cost-effective substitutes for other types of labor in the current production process. As current and expected business and labor market conditions determine a firm’s expected work volume and thus its future demand for skilled labor, they are potentially important drivers of a firm’s training decisions. Empirical studies have found that the business cycle affects apprenticeship markets. However, while the economic magnitude of these effects is moderate on average, there is substantial heterogeneity across countries, even among those that at first sight seem very similar in terms of their apprenticeship systems. Moreover, identification of business cycle effects is a difficult task. First, statistics on apprenticeship markets are often less developed than labor market statistics, making empirical analyses of demand and supply impossible in many cases. In particular, data about unfilled apprenticeship vacancies and unsuccessful applicants are paramount for assessing potential market failures and analyzing the extent to which business cycle fluctuations may amplify imbalances in apprenticeship markets. Second, the intensity of business cycle effects on apprenticeship markets is not completely exogenous, as governments typically undertake a variety of measures, which differ across countries and may change over time, to reduce the adverse effects of economic downturns on apprenticeship markets. 
During the economic crisis related to the COVID-19 global pandemic, many countries took unprecedented actions to support their economies in general and reacted swiftly to introduce measures such as the provision of financial subsidies for training firms or the establishment of apprenticeship task forces. As statistics on apprenticeship markets improve over time, such heterogeneity in policy measures should be exploited to improve our understanding of the business cycle and its relationship with apprenticeships.


The Business Cycle and Health  

Cristina Bellés-Obrero and Judit Vall Castelló

The impact of macroeconomic fluctuations on health and mortality rates has been widely studied in economics. Many studies, using fixed-effects models, find that mortality is procyclical in countries such as the United States, Germany, Spain, France, Pacific-Asian nations, Mexico, and Canada. On the other hand, a small number of studies find that mortality decreases during economic expansions. Differences in social insurance systems and labor market institutions across countries may explain some of the disparities found in the literature. Studies examining the effects of more recent recessions are less conclusive, finding mortality to be less procyclical, or even countercyclical. This new finding could be explained by changes over time in the mechanisms linking business cycle conditions and mortality. A related strand of the literature has focused on the effect of economic fluctuations on infant health at birth and/or child mortality. While infant mortality is found to be procyclical in countries like the United States and Spain, the opposite is found in developing countries. Even though the association between business cycle conditions and mortality has been extensively documented, a much stronger effort is needed to understand the mechanisms behind the relationship between business cycle conditions and health. Many studies have examined the association between macroeconomic fluctuations and smoking, drinking, weight disorders, eating habits, and physical activity, although the results are rather mixed. The only well-established finding is that mental health deteriorates during economic slowdowns. An important challenge is that comparing results across studies is complicated by the variety of empirical methods and time spans used. Furthermore, estimates have been found to be sensitive to the level of geographic aggregation, the model specification, and the proxy used for macroeconomic fluctuations.


Macroeconomic Aspects of Housing  

Charles Ka Yui Leung and Cho Yiu Joe Ng

This article summarizes research on the macroeconomic aspects of the housing market. In terms of macroeconomic stylized facts, it demonstrates that, at the business cycle frequency, the association between macroeconomic variables (MV), such as real GDP and the inflation rate, and housing market variables (HMV), such as the housing price and the vacancy rate, generally weakened following the global financial crisis (GFC). However, some macro-finance variables, such as different interest rate spreads, exhibited a strong association with the HMV following the GFC. At the medium-term business cycle frequency, some, but not all, of these patterns prevail. These “new stylized facts” suggest that a reconsideration and refinement of existing “macro-housing” theories would be appropriate. The article also reviews the corresponding academic literature, which may enhance our understanding of the evolving macro-housing–finance linkage.


Structural Vector Autoregressive Models  

Luca Gambetti

Structural vector autoregressions (SVARs) represent a prominent class of time series models used for macroeconomic analysis. The model consists of a set of multivariate linear autoregressive equations characterizing the joint dynamics of economic variables. The residuals of these equations are combinations of the underlying structural economic shocks, which are assumed to be orthogonal to each other. Using a minimal set of restrictions, these relations can be estimated (the so-called shock identification), and the variables can be expressed as linear functions of current and past structural shocks. The coefficients of these equations, called impulse response functions, represent the dynamic response of the model variables to the shocks. Several ways of identifying structural shocks have been proposed in the literature: short-run restrictions, long-run restrictions, and sign restrictions, to mention a few. SVAR models have been extensively employed to study the transmission mechanisms of macroeconomic shocks and to test economic theories. Special attention has been paid to monetary and fiscal policy shocks, as well as to nonpolicy shocks like technology and financial shocks. In recent years, many advances have been made in terms of both theory and empirical strategies. Several works have extended the standard model to incorporate new features like large information sets, nonlinearities, and time-varying coefficients. New strategies to identify structural shocks have been designed, and new inference methods have been introduced.
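
A minimal sketch of short-run (recursive) identification in a bivariate SVAR is given below: the reduced-form VAR is estimated by OLS and the impact matrix is taken as the Cholesky factor of the residual covariance, so the first shock moves both variables on impact while the second moves only the second variable. The data-generating process, lag length, and variable ordering are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a bivariate VAR(1): y_t = A y_{t-1} + B0 eps_t, eps orthogonal
A = np.array([[0.5, 0.1], [0.2, 0.4]])
B0 = np.array([[1.0, 0.0], [0.5, 1.0]])   # true impact matrix (lower triangular)
T = 500
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A @ y[t - 1] + B0 @ rng.standard_normal(2)

# OLS estimation of the reduced form
X, Y = y[:-1], y[1:]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
U = Y - X @ A_hat.T                        # reduced-form residuals
Sigma = U.T @ U / len(U)                   # residual covariance matrix

# Short-run identification: impact matrix = Cholesky factor of Sigma
B_hat = np.linalg.cholesky(Sigma)

# Impulse response functions: Theta_h = A_hat^h @ B_hat, h = 0, ..., 8
irf = [np.linalg.matrix_power(A_hat, h) @ B_hat for h in range(9)]
```

The zero in the upper-right corner of the Cholesky factor is exactly the short-run restriction; reordering the variables changes which shock is restricted, which is why the ordering is an identifying assumption rather than a statistical choice.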


Capital Controls: A Survey of the New Literature  

Alessandro Rebucci and Chang Ma

This paper reviews selected post–Global Financial Crisis theoretical and empirical contributions on capital controls and identifies three theoretical motives for their use: pecuniary externalities in models of financial crises, aggregate demand externalities in New Keynesian models of the business cycle, and terms-of-trade manipulation in open-economy models with pricing power. Pecuniary and demand externalities offer the most compelling case for the adoption of capital controls, but macroprudential policy can also address the same distortions, so capital controls generally are not the only instrument that can do the job. If evaluated through the lens of the new theories, the empirical evidence reviewed suggests that capital controls can have the intended effects, even though the extant literature is inconclusive as to whether the documented effects amount to a net gain or loss in welfare terms. Terms-of-trade manipulation also provides a clear-cut theoretical case for the use of capital controls, but this motive is less compelling because of the spillover and coordination issues inherent in the use of controls on capital flows for this purpose. Perhaps not surprisingly, only a handful of countries have used capital controls in a countercyclical manner, while many have adopted macroprudential policies. This suggests that capital control policy might entail costs beyond increased financing costs, such as signaling poor quality of future policies, leakages, and spillovers.


Sparse Grids for Dynamic Economic Models  

Johannes Brumm, Christopher Krause, Andreas Schaab, and Simon Scheidegger

Solving dynamic economic models that capture salient real-world heterogeneity and nonlinearity requires the approximation of high-dimensional functions. As their dimensionality increases, compute time and storage requirements grow exponentially. Sparse grids alleviate this curse of dimensionality by substantially reducing the number of interpolation nodes, that is, the grid points needed to achieve a desired level of accuracy. The construction principle of sparse grids is to extend univariate interpolation formulae to the multivariate case by choosing linear combinations of tensor products in a way that reduces the number of grid points by orders of magnitude relative to a full tensor-product grid, without substantially increasing interpolation errors. The most popular versions of sparse grids used in economics are (dimension-adaptive) Smolyak sparse grids that use global polynomial basis functions, and (spatially adaptive) sparse grids with local basis functions. The former can economize on the number of interpolation nodes for sufficiently smooth functions, while the latter can also handle non-smooth functions with locally distinct behavior such as kinks. In economics, sparse grids are particularly useful for interpolating the policy and value functions of dynamic models with state spaces between two and several dozen dimensions, depending on the application. In discrete-time models, sparse grid interpolation can be embedded in standard time iteration or value function iteration algorithms. In continuous-time models, sparse grids can be embedded in finite-difference methods for solving partial differential equations like Hamilton-Jacobi-Bellman equations. In both cases, dimension adaptivity, as well as spatial adaptivity, can add a second layer of sparsity to the fundamental sparse-grid construction. Beyond these salient use cases in economics, sparse grids can also accelerate other computational tasks that arise in high-dimensional settings, including regression, classification, density estimation, quadrature, and uncertainty quantification.
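
The point-count reduction can be illustrated with a small sketch of the classical (non-adaptive) Smolyak construction using nested Clenshaw–Curtis nodes; the dimension and level below are illustrative choices.

```python
import itertools
import numpy as np

def cc_points(level):
    """Nested Clenshaw-Curtis nodes on [-1, 1]: 1 node at level 1,
    2**(level-1) + 1 nodes at level >= 2."""
    if level == 1:
        return np.array([0.0])
    m = 2 ** (level - 1) + 1
    return np.cos(np.pi * np.arange(m) / (m - 1))

def sparse_grid(dim, level):
    """Isotropic Smolyak sparse grid: union of the tensor grids over
    all multi-indices i with |i|_1 <= dim + level - 1."""
    nodes = set()
    for idx in itertools.product(range(1, level + 1), repeat=dim):
        if sum(idx) <= dim + level - 1:
            for pt in itertools.product(*(cc_points(l) for l in idx)):
                # round so nested nodes coincide exactly in the set
                nodes.add(tuple(round(x, 12) for x in pt))
    return nodes

dim, level = 5, 3
n_sparse = len(sparse_grid(dim, level))
n_full = (2 ** (level - 1) + 1) ** dim   # full tensor grid, same resolution
```

For dim = 5 and level = 3 this yields 61 sparse-grid nodes versus 3,125 nodes on the full tensor-product grid with the same univariate resolution, and the gap widens rapidly with dimension.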


Markov Switching  

Yong Song and Tomasz Woźniak

Markov switching models are a family of models that introduce time variation in the parameters in the form of state-specific, or regime-specific, values. This time variation is governed by a latent discrete-valued stochastic process with limited memory. More specifically, the current value of the state indicator is determined only by the value of the state indicator from the previous period, which is the Markov property. A transition matrix characterizes the Markov process by determining the probability with which each of the states can be visited next period, conditional on the state in the current period. This setup delivers the two main advantages of Markov switching models: the estimation of the probability of state occurrences in each of the sample periods by filtering and smoothing methods, and the estimation of the state-specific parameters. These two features open the possibility of interpreting the parameters associated with specific regimes in combination with the corresponding regime probabilities. The most commonly applied models from this family are those that presume a finite number of regimes and the exogeneity of the Markov process, defined as its independence from the model’s unpredictable innovations. In many such applications, the desired properties of the Markov switching model have been obtained either by imposing appropriate restrictions on the transition probabilities or by making these probabilities time-dependent, driven by explanatory variables or functions of the state indicator. One extension of this basic specification, infinite hidden Markov models, provides great flexibility and excellent forecasting performance by allowing the number of states to go to infinity. Another extension, the endogenous Markov switching model, explicitly relates the state indicator to the model’s innovations, making the model more interpretable and offering promising avenues for development.
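
A minimal sketch of the filtering step for a two-state model with Gaussian, state-specific means and volatilities is given below (the Hamilton filter); all parameter values and the simulated data are illustrative.

```python
import numpy as np

# Two-state Markov switching model with state-specific mean and
# volatility; values below are illustrative.
P = np.array([[0.95, 0.05],   # P[i, j] = Pr(s_{t+1} = j | s_t = i)
              [0.10, 0.90]])
mu = np.array([1.0, -1.0])    # state-specific means
sigma = np.array([1.0, 2.0])  # state-specific volatilities

def hamilton_filter(y, P, mu, sigma):
    """Filtered state probabilities Pr(s_t = j | y_1, ..., y_t)."""
    filt = np.zeros((len(y), 2))
    prob = np.array([0.5, 0.5])               # initial state distribution
    for t, obs in enumerate(y):
        pred = P.T @ prob                     # predicted state probabilities
        dens = np.exp(-0.5 * ((obs - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        prob = pred * dens                    # joint of state and observation
        prob /= prob.sum()                    # Bayes update given y_t
        filt[t] = prob
    return filt

rng = np.random.default_rng(1)
y = np.concatenate([mu[0] + sigma[0] * rng.standard_normal(50),   # regime 0
                    mu[1] + sigma[1] * rng.standard_normal(50)])  # regime 1
filt = hamilton_filter(y, P, mu, sigma)
```

A backward smoothing pass over these filtered probabilities yields the smoothed probabilities Pr(s_t = j | y_1, ..., y_T) referred to in the abstract.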


Agent-Level Adaptive Learning  

George W. Evans and Bruce McGough

Adaptive learning is a boundedly rational alternative to rational expectations that is increasingly used in macroeconomics, monetary economics, and financial economics. The agent-level approach can be used to provide microfoundations for adaptive learning in macroeconomics. Two central issues of bounded rationality are simultaneously addressed at the agent level: replacing fully rational expectations of key variables with econometric forecasts, and boundedly optimal decision-making based on those forecasts. The real business cycle (RBC) model provides a useful laboratory for exhibiting alternative implementations of the agent-level approach. Specific implementations include shadow-price learning (and its anticipated-utility counterpart, iterated shadow-price learning), Euler-equation learning, and long-horizon learning. For each implementation, the path of the economy is obtained by aggregating the boundedly rational agent-level decisions. A linearized RBC model can be used to illustrate the effects of fiscal policy. For example, simulations can illustrate the impact of a permanent increase in government spending and highlight the similarities and differences among the various implementations of agent-level learning. These results can also be used to expose the differences among agent-level learning, reduced-form learning, and rational expectations. The different implementations of agent-level adaptive learning have differing advantages. A major advantage of shadow-price learning is its ease of implementation within the nonlinear RBC model. Compared to reduced-form learning, which is widely used because of its ease of application, agent-level learning both provides microfoundations, which ensure robustness to the Lucas critique, and provides the natural framework for applications of adaptive learning in heterogeneous-agent models.
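
The flavor of adaptive learning is easy to convey with a reduced-form sketch (deliberately not one of the agent-level implementations discussed above): in a cobweb-style model, agents estimate the mean of the endogenous variable by recursive least squares, and their belief converges to the rational expectations value. All parameter values are illustrative.

```python
import numpy as np

# Cobweb-style model y_t = mu + alpha * E_{t-1} y_t + e_t, where agents
# forecast E_{t-1} y_t with their current belief and update that belief
# by recursive least squares (decreasing gain 1/t).
mu, alpha = 2.0, 0.5
ree = mu / (1 - alpha)            # rational expectations equilibrium value

rng = np.random.default_rng(2)
belief = 0.0                      # agents' initial estimate of E y
for t in range(1, 20_001):
    y = mu + alpha * belief + 0.5 * rng.standard_normal()
    belief += (y - belief) / t    # least-squares (sample-mean) update

# belief is now close to ree = 4.0; convergence here reflects the
# E-stability condition alpha < 1
```

Replacing the decreasing gain 1/t with a small constant gain gives the constant-gain variant often used in applied work, in which beliefs perpetually track structural change instead of converging.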