Zoë Fannon and Bent Nielsen
Outcomes of interest often depend on the age, period, or cohort of the individual observed, where cohort and age add up to period. An example is consumption: consumption patterns change over the lifecycle (age) but are also affected by the availability of products at different times (period) and by birth-cohort-specific habits and preferences (cohort). Age-period-cohort (APC) models are additive models where the predictor is a sum of three time effects, which are functions of age, period, and cohort, respectively. Variations of these models are available for data aggregated over age, period, and cohort, and for data drawn from repeated cross-sections, where the time effects can be combined with individual covariates.
The age, period, and cohort time effects are intertwined. Inclusion of an indicator variable for each level of age, period, and cohort results in perfect collinearity, which is referred to as “the age-period-cohort identification problem.” Estimation can be done by dropping some indicator variables. However, dropping indicators has adverse consequences: the time effects are no longer individually interpretable, and inference becomes complicated. These consequences are avoided by instead decomposing the time effects into linear and non-linear components and noting that the identification problem relates only to the linear components, whereas the non-linear components are identifiable. Thus, confusion is avoided by keeping the identifiable non-linear components of the time effects apart from the unidentifiable linear components. A variety of hypotheses of practical interest can be expressed in terms of the non-linear components.
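The linear collinearity at the heart of the identification problem is easy to verify numerically. The following minimal sketch (an illustration of ours, not code from the article) builds a design matrix with an intercept and linear age, period, and cohort terms over an age-period grid and confirms that it is rank deficient:

```python
import numpy as np

# Illustrative sketch of the APC identification problem: since
# cohort = period - age, the three linear time trends plus an
# intercept are perfectly collinear.
ages = np.arange(20, 25)
periods = np.arange(2000, 2005)

# Columns: intercept, age, period, cohort (= period - age).
X = np.array([[1.0, a, p, p - a] for a in ages for p in periods])

rank = np.linalg.matrix_rank(X)
print(rank, X.shape[1])  # rank 3 < 4 columns: the linear components
                         # are not separately identified

# Non-linear components (e.g., double differences of each time effect)
# are unaffected by this collinearity and remain identifiable.
```

Dropping any one of the four columns restores full rank, which is exactly why estimation “works” after dropping indicators even though the individual linear effects remain unidentified.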
Martin Karlsson, Tor Iversen, and Henning Øien
An open issue in the economics literature is whether healthcare expenditure (HCE) is so concentrated in the last years before death that the age profiles in spending will change when longevity increases. The seminal article “Ageing of Population and Health Care Expenditure: A Red Herring?” by Zweifel and colleagues argued that age is a distraction in explaining growth in HCE. The argument was based on the observation that age did not predict HCE after controlling for time to death (TTD). The authors were soon criticized for the use of a Heckman selection model in this context. Most of the recent literature makes use of variants of a two-part model and seems to give some role to age as well in the explanation. Age seems to matter more for long-term care expenditures (LTCE) than for acute hospital care. When disability is accounted for, the effects of age and TTD diminish. Few articles validate their approach by comparing the properties of different estimation models. In order to evaluate popular models used in the literature and to gain an understanding of the divergent results of previous studies, an empirical analysis based on a claims data set from Germany is conducted. This analysis generates a number of useful insights. There is a significant age gradient in HCE, most pronounced for LTCE, and the costs of dying are substantial. These “costs of dying” have, however, a limited impact on the age gradient in HCE. These findings are interpreted as evidence against the red herring hypothesis as initially stated. The results indicate that the choice of estimation method makes little difference, and when results do differ, ordinary least squares regression tends to perform better than the alternatives. When validating the methods out of sample and out of period, there is no evidence that including TTD leads to better predictions of aggregate future HCE.
It appears that the literature might benefit from focusing on the predictive power of the estimators instead of their actual fit to the data within the sample.
Anthropometrics is a research program that explores the extent to which economic processes affect human biological processes using height and weight as markers. This agenda differs from health economics in the sense that instead of studying diseases or longevity, macro manifestations of well-being, it focuses on cellular-level processes that determine the extent to which the organism thrives in its socio-economic and epidemiological environment. Thus, anthropometric indicators are used as a proxy measure for the biological standard of living as complements to conventional measures based on monetary units.
Using physical stature as a marker, anthropometric research has enabled the profession to learn about the well-being of children and youth, for whom market-generated monetary data are not abundant even in contemporary societies. It is now clear that economic transformations such as the onset of the Industrial Revolution and modern economic growth were accompanied by negative externalities that were hitherto unknown. Moreover, there is plenty of evidence to indicate that the welfare states of Western and Northern Europe take better care of the biological needs of their citizens than the market-oriented health-care system of the United States.
Obesity has reached pandemic proportions in the United States, affecting 40% of the population. It is fostered by a sedentary and harried lifestyle, by the diminution of self-control, the spread of labor-saving technologies, and the rise of instant gratification characteristic of post-industrial society. The spread of television and of a fast-food culture in the 1950s were watershed developments in this regard that accelerated the process. Obesity poses serious health risks, including heart disease, stroke, diabetes, and some types of cancer, and its cost reaches $150 billion per annum in the United States, or about $1,400 per capita. We conclude that the economy influences not only mortality and health but reaches bone-deep into the cellular level of the human organism. In other words, the economy is inextricably intertwined with human biological processes.
“Antitrust” or “competition law,” a set of policies now existing in most market economies, largely consists of two or three specific rules applied in more or less the same way in most nations. It prohibits (1) multilateral agreements, (2) unilateral conduct, and (3) mergers or acquisitions, whenever any of them is judged to interfere unduly with the functioning of healthy markets. Most jurisdictions now apply or purport to apply these rules in the service of some notion of economic “efficiency,” more or less as defined in contemporary microeconomic theory.
The law has ancient roots, however, and over time it has varied a great deal in its details. Moreover, even as to its modern form, the policy and its goals remain controversial. In some sense most modern controversy arises from or is in reaction to the major intellectual reconceptualization of the law and its purposes that began in the 1960s. Specifically, academic critics in the United States urged revision of the law’s goals, such that it should serve only a narrowly defined microeconomic goal of allocational efficiency, whereas it had traditionally also sought to prevent accumulation of political power and to protect small firms, entrepreneurs, and individual liberty. While those critics enjoyed significant success in the United States, and to a somewhat lesser degree in Europe and elsewhere, the results remain contested. Specific disputes continue over the law’s general purpose, whether it poses net benefits, how a series of specific doctrines should be fashioned, how it should be enforced, and whether it really is appropriate for developing and small-market economies.
Andrea Gabrio, Gianluca Baio, and Andrea Manca
The evidence produced by healthcare economic evaluation studies is a key component of any Health Technology Assessment (HTA) process designed to inform resource allocation decisions in a budget-limited context. To improve the quality (and harmonize the generation process) of such evidence, many HTA agencies have established methodological guidelines describing the normative framework inspiring their decision-making process. The information requirements that economic evaluation analyses for HTA must satisfy typically involve the use of complex quantitative syntheses of multiple available datasets, handling mixtures of aggregate and patient-level information, and the use of sophisticated statistical models for the analysis of non-Normal data (e.g., time-to-event, quality of life, and costs). Much of the recent methodological research in economic evaluation for healthcare has developed in response to these needs, in terms of sound statistical decision-theoretic foundations, and is increasingly being formulated within a Bayesian paradigm. The rationale for this preference lies in the fact that by taking a probabilistic approach, based on decision rules and available information, a Bayesian economic evaluation study can explicitly account for relevant sources of uncertainty in the decision process and produce information to identify an “optimal” course of action. Moreover, the Bayesian approach naturally allows the incorporation of an element of judgment or evidence from different sources (e.g., expert opinion or multiple studies) into the analysis. This is particularly important when, as often occurs in economic evaluation for HTA, the evidence base is sparse and requires some inevitable mathematical modeling to bridge the gaps in the available data.
The availability of free and open source software in the last two decades has greatly reduced the computational costs, facilitated the application of Bayesian methods, and has the potential to improve the work of modelers and regulators alike, thus advancing the field of economic evaluation of healthcare interventions. This chapter provides an overview of the areas where Bayesian methods have contributed to addressing the methodological needs that stem from the normative framework adopted by a number of HTA agencies.
Silvia Miranda-Agrippino and Giovanni Ricco
Bayesian vector autoregressions (BVARs) are standard multivariate autoregressive models routinely used in empirical macroeconomics and finance for structural analysis, forecasting, and scenario analysis in an ever-growing number of applications.
A preeminent field of application of BVARs is forecasting. BVARs with informative priors have often proved to be superior tools compared to standard frequentist/flat-prior VARs. In fact, VARs are highly parametrized autoregressive models, whose number of parameters grows with the square of the number of variables times the number of lags included. Prior information, in the form of prior distributions on the model parameters, helps in forming sharper posterior distributions of parameters, conditional on an observed sample. Hence, BVARs can be effective in reducing parameter uncertainty and improving forecast accuracy compared to standard frequentist/flat-prior VARs.
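To see concretely why prior information helps, it is useful to count parameters. The helper below (a hypothetical illustration; the function name is ours) computes the number of conditional-mean coefficients in an n-variable VAR with p lags, which grows as n²p:

```python
def var_param_count(n_vars: int, n_lags: int, intercept: bool = True) -> int:
    """Conditional-mean parameters of a VAR(p) with n_vars variables.

    Each of the n_vars equations has n_vars coefficients per lag,
    plus an optional intercept, so the total grows with
    n_vars**2 * n_lags.
    """
    per_equation = n_vars * n_lags + (1 if intercept else 0)
    return n_vars * per_equation

# A modest 8-variable VAR with 5 lags already carries
# 8 * (8 * 5 + 1) = 328 mean parameters, before counting the
# error covariance matrix.
print(var_param_count(8, 5))   # 328
print(var_param_count(20, 4))  # 1620
```

With macroeconomic samples of a few hundred observations, hundreds or thousands of free coefficients leave the likelihood nearly flat in many directions, which is exactly the situation in which informative priors sharpen the posterior.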
This feature in particular has favored the use of Bayesian techniques to address “big data” problems, in what is arguably one of the most active frontiers in the BVAR literature. Large-information BVARs have in fact proven to be valuable tools to handle empirical analysis in data-rich environments.
BVARs are also routinely employed to produce conditional forecasts and scenario analysis. Of particular interest for policy institutions, these applications permit evaluating “counterfactual” time evolution of the variables of interest conditional on a pre-determined path for some other variables, such as the path of interest rates over a certain horizon.
The “structural interpretation” of estimated VARs as the data generating process of the observed data requires the adoption of strict “identifying restrictions.” From a Bayesian perspective, such restrictions can be seen as dogmatic prior beliefs about some regions of the parameter space that determine the contemporaneous interactions among variables and for which the data are uninformative. More generally, Bayesian techniques offer a framework for structural analysis through priors that incorporate uncertainty about the identifying assumptions themselves.
Silvia Miranda-Agrippino and Giovanni Ricco
Vector autoregressions (VARs) are linear multivariate time-series models able to capture the joint dynamics of multiple time series. Bayesian inference treats the VAR parameters as random variables, and it provides a framework to estimate the “posterior” probability distribution of the model parameters by combining information provided by a sample of observed data and prior information derived from a variety of sources, such as other macro or micro datasets, theoretical models, other macroeconomic phenomena, or introspection.
In empirical work in economics and finance, informative prior probability distributions are often adopted. These are intended to summarize stylized representations of the data generating process. For example, “Minnesota” priors, one of the most commonly adopted macroeconomic priors for the VAR coefficients, express the belief that an independent random-walk model for each variable in the system is a reasonable “center” for the beliefs about their time-series behavior. Other commonly adopted priors, the “single-unit-root” and the “sum-of-coefficients” priors, are used to enforce beliefs about relations among the VAR coefficients, such as the existence of co-integrating relationships among variables or of independent unit roots.
Priors for macroeconomic variables are often adopted as “conjugate prior distributions”—that is, distributions that yield a posterior distribution in the same family as the prior p.d.f.—in the form of Normal-Inverse-Wishart distributions, which are the conjugate prior for the likelihood of a VAR with normally distributed disturbances. Conjugate priors allow direct sampling from the posterior distribution and fast estimation. When this is not possible, numerical techniques such as Gibbs and Metropolis-Hastings sampling algorithms are adopted.
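Direct sampling under conjugacy can be sketched in a few lines. The numpy-only example below (an illustration of ours on simulated data, using the flat-prior limit of the Normal-Inverse-Wishart family rather than an informative Minnesota prior) simulates a bivariate VAR(1), then draws the error covariance from an inverse Wishart and the coefficients from the implied conditional normal:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a bivariate VAR(1): y_t = A y_{t-1} + e_t (hypothetical data).
A_true = np.array([[0.5, 0.1],
                   [0.0, 0.4]])
T, n = 300, 2
y = np.zeros((T, n))
for t in range(1, T):
    y[t] = A_true @ y[t - 1] + rng.standard_normal(n)

Y, X = y[1:], y[:-1]          # regressand and lagged regressors
k = X.shape[1]

# In the flat-prior limit of the Normal-Inverse-Wishart family the
# posterior is again NIW, so it can be sampled directly:
#   Sigma | data          ~ IW(S, df)
#   vec(B) | Sigma, data  ~ N(vec(B_ols), Sigma kron (X'X)^-1)
XtX_inv = np.linalg.inv(X.T @ X)
B_ols = XtX_inv @ X.T @ Y     # k x n matrix of OLS coefficients
resid = Y - X @ B_ols
S = resid.T @ resid
df = Y.shape[0] - k

draws = []
for _ in range(500):
    # Inverse-Wishart draw via its definition (valid for integer df >= n).
    Z = rng.multivariate_normal(np.zeros(n), np.linalg.inv(S), size=df)
    Sigma = np.linalg.inv(Z.T @ Z)
    b = rng.multivariate_normal(B_ols.ravel(order="F"),
                                np.kron(Sigma, XtX_inv))
    draws.append(b.reshape(k, n, order="F"))

post_mean = np.mean(draws, axis=0)
print(np.round(post_mean.T, 2))  # posterior mean, close to A_true
```

An informative prior such as the Minnesota prior would simply replace the OLS quantities above with the corresponding posterior moments; the direct-sampling structure is unchanged, which is what makes conjugate BVAR estimation fast.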
Bayesian techniques allow for the estimation of an ever-expanding class of sophisticated autoregressive models: conventional fixed-parameter VAR models; large VARs incorporating hundreds of variables; panel VARs, which permit analyzing the joint dynamics of multiple time series of heterogeneous and interacting units; and VAR models that relax the assumption of fixed coefficients, such as time-varying-parameter, threshold, and Markov-switching VARs.
Henrik Cronqvist and Désirée-Jessica Pély
Corporate finance is about understanding the determinants and consequences of the investment and financing policies of corporations. In a standard neoclassical profit maximization framework, rational agents, that is, managers, make corporate finance decisions on behalf of rational principals, that is, shareholders. Over the past two decades, there has been a rapidly growing interest in augmenting standard finance frameworks with novel insights from cognitive psychology, and more recently, social psychology and sociology. This emerging subfield in finance research has been dubbed behavioral corporate finance, which differentiates between rational and behavioral agents and principals.
The presence of behavioral shareholders, that is, principals, may lead to market timing and catering behavior by rational managers. Such managers will opportunistically time the market and exploit mispricing by investing capital, issuing securities, or borrowing when costs of capital are low, and by shunning equity, divesting assets, repurchasing securities, and paying back debt when costs of capital are high. Rational managers will also incite mispricing, for example, by catering to non-standard preferences of shareholders through earnings management or by transitioning their firms into an in-fashion category to boost the stock’s price.
The interaction of behavioral managers, that is, agents, with rational shareholders can also lead to distortions in corporate decision making. For example, managers may perceive fundamental values differently and systematically diverge from optimal decisions. Several personal traits, for example, overconfidence or narcissism, and environmental factors, for example, fatal natural disasters, shape behavioral managers’ preferences and beliefs, short or long term. These factors may bias the value perception by managers and thus lead to inferior decision making.
An extension of behavioral corporate finance is social corporate finance, where agents and principals do not make decisions in a vacuum but rather are embedded in a dynamic social environment. Since managers and shareholders take a social position within and across markets, social psychology and sociology can be useful to understand how social traits, states, and activities shape corporate decision making if an individual’s psychology is not directly observable.
Matteo M. Galizzi and Daniel Wiesen
The state-of-the-art literature at the interface between experimental and behavioral economics and health economics is reviewed by identifying and discussing 10 areas of potential debate about behavioral experiments in health. In doing so, the different streams and areas of application of the growing field of behavioral experiments in health are reviewed, the significant questions that remain open are discussed, and the rationale and scope for the further development of behavioral experiments in health in the years to come are highlighted.
Nikolaus Robalino and Arthur Robson
Modern economic theory rests on the basic assumption that agents’ choices are guided by preferences. The question of where such preferences might have come from has traditionally been ignored or viewed agnostically. The biological approach to economic behavior addresses the issue of the origins of economic preferences explicitly. This approach assumes that economic preferences are shaped by the forces of natural selection. For example, an important theoretical insight delivered thus far by this approach is that individuals ought to be more averse to aggregate risk than to idiosyncratic risk. Additionally, the approach has delivered an evolutionary basis for hedonic and adaptive utility and an evolutionary rationale for “theory of mind.” Related empirical work has studied the evolution of time preferences and loss aversion, and has explored the deep evolutionary determinants of long-run economic development.
Graciela Laura Kaminsky
This article examines the new trends in research on capital flows fueled by the 2007–2009 Global Crisis. Previous studies on capital flows focused on current account imbalances and net capital flows. The Global Crisis changed that. The onset of this crisis was preceded by a dramatic increase in gross financial flows while net capital flows remained mostly subdued. Academic attention zoomed in on gross inflows and outflows, with a particular focus on cross-border banking flows before the crisis erupted and on the shift toward corporate bond issuance in its aftermath. The boom and bust in capital flows around the Global Crisis also stimulated a new area of research: capturing the “global factor.” This research adopts two different approaches. The traditional literature on the push–pull factors, which before the crisis was mostly focused on monetary policy in the financial center as the “push factor,” started to explore which other factors contribute to the co-movement of capital flows and amplify the effect of monetary policy in the financial center on capital flows. This new research focuses on global banks’ leverage, risk appetite, and global uncertainty. Since the “global factor” is not known, a second branch of the literature has captured this factor indirectly using dynamic common factors extracted from actual capital flows or movements in asset prices.
Helmut Herwartz and Alexander Lange
Unlike traditional first-order asymptotic approximations, the bootstrap is a simulation method for solving inferential problems in statistics and econometrics (e.g., constructing confidence intervals, generating critical values for test statistics) conditional on the available sample information. Even though econometric theory by now provides sophisticated central limit theory covering various data characteristics, bootstrap approaches are of particular appeal if establishing asymptotic pivotalness of (econometric) diagnostics is infeasible or requires rather complex assessments of estimation uncertainty. Moreover, empirical macroeconomic analysis is typically constrained by short- to medium-sized time windows of sample information, and convergence of macroeconometric model estimates toward their asymptotic limits is often slow. Consistent bootstrap schemes have the potential to improve empirical significance levels in macroeconometric analysis and, moreover, can avoid explicit assessments of estimation uncertainty. In addition, as time-varying (co)variance structures and unmodeled serial correlation patterns are frequently diagnosed in macroeconometric analysis, more advanced bootstrap techniques (e.g., wild bootstrap, moving-block bootstrap) have been developed to account for nonpivotalness as a result of such data characteristics.
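As a concrete instance of bootstrap inference in a time-series setting, the sketch below (a minimal residual-based i.i.d. bootstrap on simulated data, an illustration of ours rather than any of the advanced schemes mentioned above) resamples recentered AR(1) residuals, rebuilds the series recursively, and reads off a percentile confidence interval for the autoregressive coefficient:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate an AR(1): y_t = 0.6 * y_{t-1} + e_t (hypothetical data).
T, phi_true = 200, 0.6
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi_true * y[t - 1] + rng.standard_normal()

def ols_ar1(series):
    """OLS slope of series[t] on series[t-1] (no intercept)."""
    x, z = series[:-1], series[1:]
    return (x @ z) / (x @ x)

phi_hat = ols_ar1(y)
resid = y[1:] - phi_hat * y[:-1]
resid = resid - resid.mean()  # recenter the residuals

# Residual-based bootstrap: resample residuals with replacement,
# rebuild the series recursively, and re-estimate the coefficient.
boot = []
for _ in range(999):
    e = rng.choice(resid, size=T, replace=True)
    yb = np.zeros(T)
    for t in range(1, T):
        yb[t] = phi_hat * yb[t - 1] + e[t]
    boot.append(ols_ar1(yb))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"phi_hat={phi_hat:.3f}, 95% bootstrap CI=({lo:.3f}, {hi:.3f})")
```

The wild and moving-block bootstraps mentioned above modify only the resampling step, drawing sign-randomized residuals or contiguous blocks instead of i.i.d. draws, precisely to remain valid under heteroskedasticity or serial correlation.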
Cristina Bellés-Obrero and Judit Vall Castelló
The impact of macroeconomic fluctuations on health and mortality rates has been a highly studied topic in the field of economics. Many studies, using fixed-effects models, find that mortality is procyclical in many countries, such as the United States, Germany, Spain, France, Pacific-Asian nations, Mexico, and Canada. On the other hand, a small number of studies find that mortality decreases during economic expansion. Differences in the social insurance systems and labor market institutions across countries may explain some of the disparities found in the literature. Studies examining the effects of more recent recessions are less conclusive, finding mortality to be less procyclical, or even countercyclical. This new finding could be explained by changes over time in the mechanisms behind the association between business cycle conditions and mortality.
A related strand of the literature has focused on understanding the effect of economic fluctuations on infant health at birth and/or child mortality. While infant mortality is found to be procyclical in countries like the United States and Spain, the opposite is found in developing countries.
Even though the association between business cycle conditions and mortality has been extensively documented, a much stronger effort is needed to understand the mechanisms behind the relationship between business cycle conditions and health. Many studies have examined the association between macroeconomic fluctuations and smoking, drinking, weight disorders, eating habits, and physical activity, although results are rather mixed. The only well-established finding is that mental health deteriorates during economic slowdowns.
An important challenge is the fact that the comparison of the main results across studies proves to be complicated due to the variety of empirical methods and time spans used. Furthermore, estimates have been found to be sensitive to the use of different levels of geographic aggregation, model specifications, and proxies of macroeconomic fluctuations.
Alessandro Rebucci and Chang Ma
This paper reviews selected post–Global Financial Crisis theoretical and empirical contributions on capital controls and identifies three theoretical motives for the use of capital controls: pecuniary externalities in models of financial crises, aggregate demand externalities in New Keynesian models of the business cycle, and terms of trade manipulation in open-economy models with pricing power. Pecuniary and demand externalities offer the most compelling case for the adoption of capital controls, but macroprudential policy can also address the same distortions. So capital controls generally are not the only instrument that can do the job. If evaluated through the lens of the new theories, the empirical evidence reviewed suggests that capital controls can have the intended effects, even though the extant literature is inconclusive as to whether the effects documented amount to a net gain or loss in welfare terms. Terms of trade manipulation also provides a clear-cut theoretical case for the use of capital controls, but this motive is less compelling because of the spillover and coordination issues inherent in the use of controls on capital flows for this purpose. Perhaps not surprisingly, only a handful of countries have used capital controls in a countercyclical manner, while many adopted macroprudential policies. This suggests that capital control policy might entail additional costs other than increased financing costs, such as signaling the bad quality of future policies, leakages, and spillovers.
Diane McIntyre, Amarech G. Obse, Edwine W. Barasa, and John E. Ataguba
Within the context of the Sustainable Development Goals, it is important to critically review research on healthcare financing in sub-Saharan Africa (SSA) from the perspective of the universal health coverage (UHC) goals of financial protection and access to quality health services for all. There is a concerning reliance on direct out-of-pocket payments in many SSA countries, accounting for an average of 36% of current health expenditure compared to only 22% in the rest of the world. Contributions to health insurance schemes, whether voluntary or mandatory, contribute a small share of current health expenditure. While domestic mandatory prepayment mechanisms (tax and mandatory insurance) are the next largest category of healthcare financing in SSA (35%), a relatively large share of funding in SSA (14% compared to <1% in the rest of the world) is attributable to, sometimes unstable, external funding sources. There is a growing recognition of the need to reduce out-of-pocket payments and increase domestic mandatory prepayment financing to move towards UHC. Many SSA countries have declared a preference for achieving this through contributory health insurance schemes, particularly for formal sector workers, with service entitlements tied to contributions. Policy debates about whether a contributory approach is the most efficient, equitable, and sustainable means of financing progress to UHC are emotive and infused with “conventional wisdom.” A range of research questions must be addressed to provide a more comprehensive empirical evidence base for these debates and to support progress to UHC.
Since the 1980s policymakers have identified a wide range of policy interventions to improve hospital performance. Some of these have been initiated at the level of government, whereas others have taken the form of decisions made by individual hospitals but have been guided by regulatory or financial incentives. Studies investigating the impact that some of the most important of these interventions have had on hospital performance can be grouped into four different research streams. Among the research streams, the strongest evidence exists for the effects of privatization. Studies on this topic use longitudinal designs with control groups and have found robust increases in efficiency and financial performance. Evidence on the entry of hospitals into health systems and the effects of this on efficiency is similarly strong. Although the other three streams of research also contain well-conducted studies with valuable findings, they are predominantly cross-sectional in design and therefore cannot establish causation. While the effects of introducing DRG-based hospital payments and of specialization are largely unclear, vertical and horizontal cooperation probably have a positive effect on efficiency and financial performance. Lastly, the drivers of improved efficiency or financial performance are very different depending on the reform or intervention being investigated; however, reductions in the number of staff and improved bargaining power in purchasing stand out as being of particular importance.
Several promising avenues for future investigation are identified. One of these is situated within a new area of research examining the link between changes in the prices of treatments and hospitals’ responses. As there is evidence of unintended effects, future studies should attempt to distinguish between changes in hospitals’ responses at the intensive margin (e.g., upcoding) versus the extensive margin (e.g., increase in admissions). When looking at the effects of entering into a health system and of privatizations, there is still considerable need for research. With privatizations, in particular, the underlying processes are not yet fully understood, and the potential trade-offs between increases in performance and changes in the quality of care have not been sufficiently examined. Lastly, there is substantial need for further papers in the areas of multi-institutional arrangements and cooperation, as well as specialization. In both research streams, natural experiments carried out using program evaluation design are lacking. One of the main challenges here, however, is that cooperation and specialization cannot be directly observed but rather must be constructed based on survey or administrative data.
Lawrence J. Lau
Chinese real gross domestic product (GDP) grew from US$369 billion in 1978 to US$12.7 trillion in 2017 (in 2017 prices and exchange rate), at almost 10% per annum, making the country the second largest economy in the world, just behind the United States. During the same period, Chinese real GDP per capita grew from US$383 to US$9,137 (2017 prices), at 8.1% per annum.
Chinese economic reform, which began in 1978, consists of two elements—introduction of free markets for goods and services, coupled with conditional producer autonomy, and opening to international trade and direct investment with the rest of the world. In its transition from a centrally planned to a market economy, China employed a “dual-track” approach—with the pre-existing mandatory central plan continuing in force and the establishment of free markets in parallel. In its opening to the world, China set a competitive exchange rate for its currency, made it current account convertible in 1994, and acceded to the World Trade Organization (WTO) in 2001. In 2005, China became the second largest trading nation in the world, after the United States. Other Chinese policies complementary to its economic reform include the pre-existing low non-agricultural wage and the limit of one child per couple, introduced in 1979 and phased out in 2016.
The high rate of growth of Chinese real output since 1978 can be largely explained by the high rates of growth of inputs, but there were also other factors at work. Chinese economic growth since 1978 may be attributed as follows: (a) the elimination of the initial economic inefficiency (12.7%), (b) the growth of tangible capital (55.7%) and labor (9.7%) inputs, (c) technical progress (or growth of total factor productivity (TFP)) (8%), and (d) economies of scale (14%).
The Chinese economy also shares many commonalities with other East Asian economies in terms of their development experiences: the lack of natural endowments, the initial conditions (the low real GDP per capita and the existence of surplus agricultural labor), the cultural characteristics (thrift, industry, and high value for education), the economic policies (competitive exchange rate, export promotion, investment in basic infrastructure, and maintenance of macroeconomic stability), and the consistency, predictability, and stability resulting from continuous one-party rule.
In many countries of the world, consumers choose their health insurance coverage from a large menu of often complex options supplied by private insurance companies. Economic benefits of the wide choice of health insurance options depend on the extent to which the consumers are active, well informed, and sophisticated decision makers capable of choosing plans that are well-suited to their individual circumstances.
There are many ways in which consumers’ actual decision making in the health insurance domain can depart from the standard model of health insurance demand of a rational risk-averse consumer. For example, consumers can have inaccurate subjective beliefs about characteristics of alternative plans in their choice set or about the distribution of health expenditure risk because of cognitive or informational constraints; or they can prefer to rely on heuristics when the plan choice problem features a large number of options with complex cost-sharing design.
The second decade of the 21st century has seen a burgeoning number of studies assessing the quality of consumer choices of health insurance, both in the lab and in the field, and the financial and welfare consequences of poor choices in this context. These studies demonstrate that consumers often find it difficult to make efficient choices of private health insurance for reasons such as inertia, misinformation, and a lack of basic insurance literacy. These findings challenge the conventional rationality assumptions of the standard economic model of insurance choice and call for policies that can enhance the quality of consumer choices in the health insurance domain.
In the wake of the 2008 financial collapse, clearinghouses have emerged as critical players in the implementation of the post-crisis regulatory reform agenda. Recognizing serious shortcomings in the design of the over-the-counter derivatives market for swaps, regulators are now relying on clearinghouses to cure these deficiencies by taking on a central role in mitigating the risks of these instruments. Rather than leave trading firms to manage the risks of transacting in swaps privately, as was largely the case prior to 2008, post-crisis regulation requires that clearinghouses assume responsibility for ensuring that trades are properly settled, reported to authorities, and supported by strong cushions of protective collateral. With clearinghouses effectively guaranteeing that the terms of a trade will be honored—even if one of the trading parties cannot perform—the market can operate with reduced levels of counterparty risk, opacity, and the threat of systemic collapse brought on by recklessness and over-complexity.
But despite their obvious benefits for regulators, clearinghouses also pose risks of their own. First, given their deepening significance for market stability, ensuring that clearinghouses themselves operate safely represents a matter of the highest policy priority. Yet overseeing clearinghouses is far from easy, and understanding what works best to undergird their safe operation can be a contentious and uncertain matter. U.S. and EU authorities, for example, have diverged in important ways on what rules should apply to the workings of international clearinghouses. Second, clearinghouse oversight is critical because these institutions now warehouse enormous levels of counterparty risk. By promising counterparties across the market that their trades will settle as agreed, even if one or the other firm goes bust, clearinghouses assume almost inconceivably large and complicated risks within their institutions. For swaps in particular—whose obligations can last for months, or even years—the scale of these risks can be far more extensive than that entailed in a one-off sale of a stock or bond. In this way, commentators note that by becoming the go-to bulwark against risk-taking and its spread in the financial system, clearinghouses have themselves become the too-big-to-fail institution par excellence.
The cointegrated VAR (CVAR) approach combines differences of variables with cointegration among them and by doing so allows the user to study both long-run and short-run effects in the same model. The CVAR describes an economic system where variables have been pushed away from long-run equilibria by exogenous shocks (the pushing forces) and where short-run adjustment forces pull them back toward long-run equilibria (the pulling forces). In this model framework, basic assumptions underlying a theory model can be translated into testable hypotheses on the order of integration and cointegration of key variables and their relationships. The set of hypotheses describes the empirical regularities we would expect to see in the data if the long-run properties of a theory model are empirically relevant.
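The pushing and pulling forces described above can be sketched in the standard error-correction parameterization of the cointegrated VAR (notation illustrative; the abstract does not commit to a particular specification):

```latex
% Cointegrated VAR in error-correction form with k-1 short-run lags
\Delta x_t = \alpha \beta' x_{t-1}
           + \sum_{i=1}^{k-1} \Gamma_i \,\Delta x_{t-i}
           + \mu + \varepsilon_t
% \beta' x_{t-1}: deviations from the long-run equilibria
%                 (cointegrating relations)
% \alpha:         adjustment coefficients -- the pulling forces
% \varepsilon_t:  exogenous shocks -- the pushing forces
% \Gamma_i:       short-run dynamics; \mu: deterministic terms
```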