Over time, the reference cycle of an economy is determined by a sequence of non-observable business cycle turning points that partition calendar time into non-overlapping episodes of expansions and recessions. Dating these turning points supports economic analysis and is useful to economic agents, whether policymakers, investors, or academics.
In the interest of transparency and reproducibility, statistical frameworks that automatically date turning points from a set of coincident economic indicators have been the source of remarkable advances in this research area. These methods can be classified along two broad dimensions. Depending on the assumptions made about the data-generating process, dating methods are either parametric or non-parametric. Depending on how they handle multivariate data sets, they follow one of two main approaches: average then date and date then average. The former focuses on computing a reference series for the aggregate economy, usually by averaging the indicators across the cross-sectional dimension; the global turning points are then dated on the aggregate indicator with one of the business cycle dating models available in the literature. The latter consists of dating the peaks and troughs of each coincident business cycle indicator separately and establishing the reference cycle in those periods where the individual turning points cohere.
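To make the two approaches concrete, the sketch below dates turning points with a simple local-extremum rule in the spirit of non-parametric dating algorithms and applies it both to the cross-sectional average of the indicators and to each indicator separately. The window length, clustering tolerance, and simulated indicators are illustrative assumptions, not settings taken from the article.

```python
# A minimal sketch of the "average then date" and "date then average"
# strategies, using a stripped-down local-extremum rule. Window length `k`,
# tolerance `tol`, and the simulated data are illustrative assumptions.
import numpy as np


def turning_points(y, k=2):
    """Indices of local peaks and troughs: t is a peak (trough) if y[t] is
    the strict maximum (minimum) of the window y[t-k:t+k+1]."""
    peaks, troughs = [], []
    for t in range(k, len(y) - k):
        window = y[t - k:t + k + 1]
        if (window < y[t]).sum() == 2 * k:
            peaks.append(t)
        if (window > y[t]).sum() == 2 * k:
            troughs.append(t)
    return peaks, troughs


def average_then_date(indicators, k=2):
    """Average the coincident indicators, then date the aggregate series."""
    reference = indicators.mean(axis=0)
    return turning_points(reference, k)


def date_then_average(indicators, k=2, tol=2, share=0.5):
    """Date each indicator, then call a reference peak (trough) in periods
    where at least `share` of the indicators peak (trough) within `tol`."""
    n, T = indicators.shape
    peak_hits, trough_hits = np.zeros(T), np.zeros(T)
    for i in range(n):
        peaks, troughs = turning_points(indicators[i], k)
        for t in peaks:
            peak_hits[max(0, t - tol):t + tol + 1] += 1
        for t in troughs:
            trough_hits[max(0, t - tol):t + tol + 1] += 1
    return np.where(peak_hits >= share * n)[0], np.where(trough_hits >= share * n)[0]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(120)
    cycle = np.sin(2 * np.pi * t / 40)                        # common cycle
    indicators = cycle + 0.2 * rng.standard_normal((4, 120))  # 4 noisy indicators
    print("average then date:", average_then_date(indicators))
    print("date then average:", date_then_average(indicators))
```

Published dating algorithms add censoring rules that this sketch omits, such as minimum phase and cycle lengths and the required alternation of peaks and troughs.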
In the early 21st century, the literature has shown that future work on dating the reference cycle will have to deal with a set of challenges. First, new tools have become available that, being increasingly sophisticated, may enlarge the existing academic–practitioner gap. Compiling the code that implements the dating methods and facilitating their practical implementation may narrow this gap. Second, the pandemic shock that hit economies worldwide led most industrialized countries to record in 2020 both the sharpest fall and the largest rebound in national economic indicators since records began. In the presence of such influential observations, the outcomes of dating methods could misrepresent the actual reference cycle, especially in the case of parametric approaches. Exploring non-parametric approaches, big data sources, and the classification ability offered by machine learning methods could help improve the performance of dating analyses.
Article
Econometric Methods for Business Cycle Dating
Máximo Camacho Alonso and Lola Gadea
Article
Financial Frictions in Macroeconomic Models
Alfred Duncan and Charles Nolan
In recent decades, macroeconomic researchers have sought to incorporate financial intermediaries explicitly into business-cycle models. These modeling developments have helped us understand the role of the financial sector in the transmission of policy and external shocks into macroeconomic dynamics. They have also helped us better understand the consequences of financial instability for the macroeconomy. Large gaps remain in our knowledge of the interactions between the financial sector and macroeconomic outcomes. In particular, the effects of financial stability and macroprudential policies are not well understood.
Article
Measurement Error: A Primer for Macroeconomists
Simon van Norden
Most applied researchers in macroeconomics who work with official macroeconomic statistics (such as those found in the National Accounts, the Balance of Payments, national government budgets, labor force statistics, etc.) treat data as immutable rather than subject to measurement error and revision. Some of this error may be caused by disagreement or confusion about what should be measured. Some may be due to the practical challenges of producing timely, accurate, and precise estimates. The economic importance of measurement error may be accentuated by simple arithmetic transformations of the data, or by more complex but still common transformations to remove seasonal or other fluctuations. As a result, measurement error is seemingly omnipresent in macroeconomics.
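As a rough illustration of how a simple arithmetic transformation can accentuate measurement error, the sketch below adds a small amount of classical noise to a smooth, hypothetical output series and compares the share of variance attributable to that noise in the levels and in the log growth rates. The series and the noise level are invented for the example.

```python
# Illustrative only: differencing a smooth level series makes measurement
# error account for a far larger share of the observed variation.
import numpy as np

rng = np.random.default_rng(1)
T = 400
t = np.arange(T)
# Hypothetical "true" output level: smooth trend plus a mild cycle.
true_level = 100 * 1.005 ** t * (1 + 0.02 * np.sin(2 * np.pi * t / 40))
noise = 0.3 * rng.standard_normal(T)          # small i.i.d. measurement error
measured = true_level + noise


def noise_share(measured, true):
    """Fraction of the observed series' variance due to measurement error."""
    return np.var(measured - true) / np.var(measured)


growth_true = np.diff(np.log(true_level))
growth_meas = np.diff(np.log(measured))

print(f"noise share in levels:       {noise_share(measured, true_level):.4f}")
print(f"noise share in growth rates: {noise_share(growth_meas, growth_true):.4f}")
```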
Even the most widely used measures such as Gross Domestic Product (GDP) are acknowledged to be poor measures of aggregate welfare, as they omit leisure and non-market production activity and fail to consider intertemporal issues related to the sustainability of economic activity. But even modest attempts to improve GDP estimates can generate considerable controversy in practice. Common statistical approaches to allowing for measurement errors, including most factor models, rely on assumptions that are at odds with common economic assumptions, which imply that measurement errors in published aggregate series should behave much like forecast errors. Fortunately, recent research has shown how multiple data releases may be combined in a flexible way to give improved estimates of the underlying quantities.
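As a stylized example of combining releases, the sketch below takes the precision-weighted average of two hypothetical point estimates under the simplifying assumption of independent, classical errors with known variances. The flexible approaches referred to above, which can accommodate revisions that behave like forecast errors, typically call for richer state-space models; the numbers here are invented.

```python
# A minimal sketch, not the specific models from the literature: combining
# two releases of the same quantity under classical, independent errors.
import numpy as np


def combine_releases(releases, error_vars):
    """Precision-weighted combination of several releases of one quantity.

    releases   : point estimates (e.g., first and second GDP growth releases)
    error_vars : assumed variances of each release's measurement error
    Returns the combined estimate and its (smaller) error variance.
    """
    releases = np.asarray(releases, dtype=float)
    weights = 1.0 / np.asarray(error_vars, dtype=float)
    combined = np.sum(weights * releases) / np.sum(weights)
    combined_var = 1.0 / np.sum(weights)
    return combined, combined_var


# Hypothetical quarterly GDP growth releases (percent, annualized):
estimate, var = combine_releases(releases=[2.1, 2.6], error_vars=[0.49, 0.25])
print(f"combined estimate: {estimate:.2f}, variance: {var:.2f}")
```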
Increasingly, the challenge for macroeconomists is to recognize the impact that measurement error may have on their analysis and to condition their policy advice on a realistic assessment of the quality of their available information.