1-10 of 10 Results

  • Keywords: factor models

Article

Benjamin Helms and David Leblang

International migration is a multifaceted process with distinct stages and decision points. An initial decision to leave one’s country of birth may be made by the individual or the family unit, and this decision may reflect a desire to reconnect with friends and family who have already moved abroad, a need to diversify the family’s access to financial capital, a demand to increase wages, or a belief that conditions abroad will provide social and/or political benefits not available in the homeland. Once the individual has decided to move abroad, the next decision is the choice of destination. Standard explanations of destination choice have focused on the physical costs associated with moving—moving shorter distances is often less expensive than moving to a destination farther away; these explanations have recently been modified to include other social, political, familial, and cultural dimensions as part of the transaction cost associated with migrating. Arrival in a host country does not mean that an émigré’s relationship with their homeland is over. Migrant networks are an engine of global economic integration—expatriates help expand trade and investment flows, they transmit skills and knowledge back to their homelands, and they remit financial and human capital. Aware of the value of their external populations, home countries have developed a range of policies that enable them to “harness” their diasporas.

Article

Graciela Laura Kaminsky

This article examines the new trends in research on capital flows fueled by the 2007–2009 Global Crisis. Previous studies on capital flows focused on current account imbalances and net capital flows. The Global Crisis changed that. The onset of this crisis was preceded by a dramatic increase in gross financial flows while net capital flows remained mostly subdued. Academic attention zoomed in on gross inflows and outflows, with particular focus on cross-border banking flows before the crisis erupted and on the shift toward corporate bond issuance in its aftermath. The boom and bust in capital flows around the Global Crisis also stimulated a new area of research: capturing the “global factor.” This research adopts two different approaches. The traditional literature on push–pull factors, which before the crisis was mostly focused on monetary policy in the financial center as the “push factor,” started to explore what other factors contribute to the co-movement of capital flows and amplify the effect of monetary policy in the financial center on capital flows. This new research focuses on global banks’ leverage, risk appetite, and global uncertainty. Since the “global factor” is not directly observable, a second branch of the literature has captured this factor indirectly using dynamic common factors extracted from actual capital flows or movements in asset prices.
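The "dynamic common factor" idea behind this second branch can be illustrated with a minimal sketch (mine, not the article's): here the latent global factor is proxied by the first principal component of a standardized panel of simulated country-level flow series, and the share of variance it explains gauges its strength.

```python
# Minimal illustration (simulated data, not from the article): capture a latent
# "global factor" as the first principal component of a standardized panel of
# country-level capital-flow series.
import numpy as np

rng = np.random.default_rng(42)
n_countries, T = 30, 160

global_factor = rng.standard_normal(T)                       # unobserved common driver
loadings = rng.uniform(0.3, 1.0, n_countries)                 # country exposures
flows = np.outer(global_factor, loadings) + rng.standard_normal((T, n_countries))

# Standardize each country's series, then take the first principal component.
z = (flows - flows.mean(axis=0)) / flows.std(axis=0)
eigval, eigvec = np.linalg.eigh(z.T @ z / T)
global_hat = z @ eigvec[:, -1]                                # estimated global factor
share = eigval[-1] / eigval.sum()                             # variance it explains

corr = np.corrcoef(global_hat, global_factor)[0, 1]
print(f"Variance share of first PC: {share:.2f}, |corr| with true factor: {abs(corr):.2f}")
```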

Article

Alessandro Casini and Pierre Perron

This article covers methodological issues related to estimation, testing, and computation for models involving structural changes. Our aim is to review developments as they relate to econometric applications based on linear models. Substantial advances have been made to cover models at a level of generality that allows a host of interesting practical applications. These include models with general stationary regressors and errors that can exhibit temporal dependence and heteroskedasticity, models with trending variables and possible unit roots, and cointegrated models, among others. Advances have been made pertaining to computational aspects of constructing estimates, their limit distributions, tests for structural changes, and methods to determine the number of changes present. A variety of topics are covered, including recent developments: testing for common breaks, models with endogenous regressors (emphasizing that simply using least squares is preferable over instrumental variables methods), quantile regressions, methods based on Lasso, panel data models, testing for changes in forecast accuracy, factor models, and methods of inference based on a continuous-record asymptotic framework. Our focus is on the so-called off-line methods whereby one wants to retrospectively test for breaks in a given sample of data and form confidence intervals about the break dates. The aim is to provide the readers with an overview of methods that are of direct use in practice as opposed to issues mostly of theoretical interest.
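As a concrete illustration of the least-squares logic underlying break-date estimation in this literature, the following sketch (my own, not code from the article) estimates a single break date by minimizing the total sum of squared residuals over all admissible split points of the sample.

```python
# Hedged sketch: single break-date estimation by least squares, i.e., choose the
# split point that minimizes the combined sum of squared residuals of the two regimes.
import numpy as np

def ssr(y, X):
    """Sum of squared residuals from an OLS fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

def estimate_break(y, X, trim=0.15):
    """Return the break date minimizing total SSR over admissible split points."""
    T = len(y)
    lo, hi = int(trim * T), int((1 - trim) * T)   # trimming keeps both regimes non-degenerate
    total_ssr = [ssr(y[:k], X[:k]) + ssr(y[k:], X[k:]) for k in range(lo, hi)]
    return lo + int(np.argmin(total_ssr))

# Example: a mean shift at t = 120 in a series of length 200, regression on a constant.
rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(0.0, 1.0, 120), rng.normal(1.5, 1.0, 80)])
X = np.ones((len(y), 1))
print("Estimated break date:", estimate_break(y, X))
```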

Article

Ryan E. Rhodes and Patrick Boudreau

The physical, psychological, and economic benefits of regular moderate-to-vigorous-intensity physical activity are well substantiated. Unfortunately, few people in developed countries engage in enough physical activity to reap these benefits. Thus, a strong theoretical understanding of what factors are associated with physical activity is warranted in order to create effective and targeted interventions. Social/ecological approaches to understanding physical activity demonstrate the breadth of correlates that encompass intra-individual, inter-individual, environmental, and policy-related variables in physical activity performance. One longstanding intrapersonal correlate of interest is the relationship between personality traits—enduring individual-level differences in tendencies to show consistent patterns of thoughts, feelings, and actions—and physical activity. Personality trait theories are broad in focus and differ in terms of proposed etiology, yet much of the recent research in physical activity has focused on the supertraits of the five-factor model: neuroticism, extraversion, openness, agreeableness, and conscientiousness. Meta-analytic reviews suggest that conscientiousness and extraversion are positively associated with physical activity, with some mixed evidence for a small negative relationship with neuroticism. The effect appears to be most pronounced with vigorous physical activities and less so with lower-intensity lifestyle activities, and the evidence is mixed on whether proximal social cognitive variables (intention, self-efficacy) mediate this relationship. At the level of more specific sub-traits, facets of extraversion (excitement-seeking, activity) and conscientiousness (self-discipline, industriousness/ambition) show larger and more specific associations with particular types of physical activity and moderate key processes such as the intention-behavior gap. Furthermore, personality appears to be linked to higher-intensity and adventure activities more than lower-intensity leisure physical activities. Contemporary longitudinal assessments of the bi-directionality of personality and physical activity have begun to advance our understanding of their interconnectedness. Interventions that target personality traits to improve physical activity have been relatively understudied but hold some promise when used in tandem with larger theoretical approaches and behavioral change strategies.

Article

High-Dimensional Dynamic Factor Models have their origin in macroeconomics, more precisely in empirical research on Business Cycles. The central idea, going back to the work of Burns and Mitchell in the 1940s, is that the fluctuations of all the macro and sectoral variables in the economy are driven by a “reference cycle,” that is, a one-dimensional latent cause of variation. After a fairly long process of generalization and formalization, the literature settled at the beginning of the 2000s on a model in which (1) both $n$, the number of variables in the dataset, and $T$, the number of observations for each variable, may be large, and (2) all the variables in the dataset depend dynamically on a fixed, independent of $n$, number of “common factors,” plus variable-specific, usually called “idiosyncratic,” components. The structure of the model can be exemplified as follows: $$x_{it} = \alpha_i u_t + \beta_i u_{t-1} + \xi_{it}, \qquad i = 1, \dots, n, \; t = 1, \dots, T, \qquad (*)$$ where the observable variables $x_{it}$ are driven by the white noise $u_t$, which is common to all the variables, the common factor, and by the idiosyncratic component $\xi_{it}$. The common factor $u_t$ is orthogonal to the idiosyncratic components $\xi_{it}$, and the idiosyncratic components are mutually orthogonal (or weakly correlated). Lastly, the variations of the common factor $u_t$ affect the variable $x_{it}$ dynamically, that is, through the lag polynomial $\alpha_i + \beta_i L$. Asymptotic results for High-Dimensional Factor Models, particularly consistency of estimators of the common factors, are obtained for both $n$ and $T$ tending to infinity. Model $(*)$, generalized to allow for more than one common factor and a rich dynamic loading of the factors, has been studied in a fairly vast literature, with many applications based on macroeconomic datasets: (a) forecasting of inflation, industrial production, and unemployment; (b) structural macroeconomic analysis; and (c) construction of indicators of the Business Cycle. This literature can be broadly classified as belonging to the time- or the frequency-domain approach. The works based on the second are the subject of the present chapter. We start with a brief description of early work on Dynamic Factor Models. Formal definitions and the main Representation Theorem follow. The latter determines the number of common factors in the model by means of the spectral density matrix of the vector $(x_{1t}\; x_{2t}\; \cdots\; x_{nt})$. Dynamic principal components, based on the spectral density of the $x$'s, are then used to construct estimators of the common factors. These results, obtained in the early 2000s, are compared to the literature based on the time-domain approach, in which the covariance matrix of the $x$'s and its (static) principal components are used instead of the spectral density and dynamic principal components. Dynamic principal components produce two-sided estimators, which are good within the sample but unfit for forecasting. The estimators based on the time-domain approach are simple and one-sided. However, they require the restriction of finite dimension for the space spanned by the factors. Recent papers have constructed one-sided estimators based on the frequency-domain method for the unrestricted model. These results exploit properties of stochastic processes of dimension $n$ that are driven by a $q$-dimensional white noise, with $q < n$, that is, singular vector stochastic processes. The main features of this literature are described in some detail. Lastly, we report and comment on the results of an empirical paper, the last in a long list, comparing predictions obtained with time- and frequency-domain methods. The paper uses a large monthly U.S. dataset covering the Great Moderation and the Great Recession.
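For concreteness, a compact restatement of the generalized model used in this frequency-domain literature, under its usual assumptions and not quoted from the chapter: with $q$ common shocks, $$x_{it} = b_{i1}(L)\,u_{1t} + \cdots + b_{iq}(L)\,u_{qt} + \xi_{it}, \qquad i = 1, \dots, n,$$ where $u_t = (u_{1t}, \dots, u_{qt})'$ is a $q$-dimensional white noise orthogonal to the idiosyncratic components. The Representation Theorem identifies $q$ through the spectral density matrix $\Sigma^{(n)}_x(\theta)$ of $(x_{1t}\; \cdots\; x_{nt})$: as $n \to \infty$, its $q$ largest eigenvalues diverge at almost every frequency $\theta$, while the remaining eigenvalues stay bounded.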

Article

Christopher B. Mayhorn and Michael S. Wogalter

Warnings are risk communication messages that can appear in a variety of situations within the healthcare context. Potential target audiences for warnings can be very diverse and may include health professionals such as physicians or nurses as well as members of the public. In general, warnings serve three distinct purposes. First, warnings are used to improve health and safety by reducing the likelihood of events that might result in personal injury, disease, death, or property damage. Second, they are used to communicate important safety-related information. In general, warnings likely to be effective should include a description of the hazard, instructions on how to avoid the hazard, and an indication of the severity of consequences that might occur as a result of not complying with the warning. Third, warnings are used to promote safe behavior and reduce unsafe behavior. Various regulatory agencies within the United States and around the globe may take an active role in determining the content and formatting of warnings. The Communication-Human Information Processing (C-HIP) model was developed to describe the processes involved in how people interact with warnings and other information. This framework employs the basic stages of a simple communication model such that a warning message is sent from one entity (source) through some channel(s) to another (receiver). Once warning information is delivered to the receiver, processing may be initiated, and if not impeded, will continue through several stages including attention switch, attention maintenance, comprehension and memory, beliefs and attitudes, and motivation, possibly ending in compliance behavior. Examples of health-related warnings are presented to illustrate concepts. Methods for developing and evaluating warnings such as heuristic evaluation, iterative design and testing, comprehension, and response times are described.

Article

High-dimensional dynamic factor models have their origin in macroeconomics, more specifically in empirical research on business cycles. The central idea, going back to the work of Burns and Mitchell in the 1940s, is that the fluctuations of all the macro and sectoral variables in the economy are driven by a “reference cycle,” that is, a one-dimensional latent cause of variation. After a fairly long process of generalization and formalization, the literature settled at the beginning of the 2000s on a model in which (a) both $n$, the number of variables in the data set, and $T$, the number of observations for each variable, may be large; (b) all the variables in the data set depend dynamically on a fixed, independent of $n$, number of common shocks, plus variable-specific, usually called idiosyncratic, components. The structure of the model can be exemplified as follows: $$x_{it} = \alpha_i u_t + \beta_i u_{t-1} + \xi_{it}, \qquad i = 1, \dots, n, \; t = 1, \dots, T, \qquad (*)$$ where the observable variables $x_{it}$ are driven by the white noise $u_t$, which is common to all the variables, the common shock, and by the idiosyncratic component $\xi_{it}$. The common shock $u_t$ is orthogonal to the idiosyncratic components $\xi_{it}$, and the idiosyncratic components are mutually orthogonal (or weakly correlated). Last, the variations of the common shock $u_t$ affect the variable $x_{it}$ dynamically, that is, through the lag polynomial $\alpha_i + \beta_i L$. Asymptotic results for high-dimensional factor models, consistency of estimators of the common shocks in particular, are obtained for both $n$ and $T$ tending to infinity. The time-domain approach to these factor models is based on the transformation of dynamic equations into static representations. For example, equation $(*)$ becomes $$x_{it} = \alpha_i F_{1t} + \beta_i F_{2t} + \xi_{it}, \qquad F_{1t} = u_t, \; F_{2t} = u_{t-1}.$$ Instead of the dynamic equation $(*)$ there is now a static equation, while instead of the white noise $u_t$ there are now two factors, also called static factors, which are dynamically linked: $F_{1t} = u_t$, $F_{2t} = F_{1,t-1}$. This transformation into a static representation, whose general form is $$x_{it} = \lambda_{i1} F_{1t} + \cdots + \lambda_{ir} F_{rt} + \xi_{it},$$ is extremely convenient for estimation and forecasting of high-dimensional dynamic factor models. In particular, the factors $F_{jt}$ and the loadings $\lambda_{ij}$ can be consistently estimated from the principal components of the observable variables $x_{it}$. Assumptions allowing consistent estimation of the factors and loadings are discussed in detail. Moreover, it is argued that in general the vector of the factors is singular; that is, it is driven by a number of shocks smaller than its dimension. This fact has very important consequences. In particular, singularity implies that the fundamentalness problem, which is hard to solve in structural vector autoregressive (VAR) analysis of macroeconomic aggregates, disappears when the latter are studied as part of a high-dimensional dynamic factor model.
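To make the principal-components step concrete, here is a minimal simulation sketch (my own, built on the example model $(*)$ above, not code from the article): data are generated with one common shock and two static factors, and the common component is recovered from the top two principal components of the panel.

```python
# Sketch: simulate x_it = a_i*u_t + b_i*u_{t-1} + xi_it and recover the common
# component via principal components of the static representation with r = 2 factors.
import numpy as np

rng = np.random.default_rng(0)
n, T = 100, 500                                   # number of series and observations

u = rng.standard_normal(T + 1)                    # common white-noise shock (one extra lag)
a = rng.uniform(0.5, 1.5, size=n)                 # loadings on u_t
b = rng.uniform(-1.0, 1.0, size=n)                # loadings on u_{t-1}
chi = np.outer(u[1:], a) + np.outer(u[:-1], b)    # T x n common components
xi = rng.standard_normal((T, n))                  # idiosyncratic components
x = chi + xi

# Principal-components estimator: eigenvectors of the sample covariance give the
# loadings (up to rotation); projecting on them estimates the common component.
x_c = x - x.mean(axis=0)
eigval, eigvec = np.linalg.eigh(x_c.T @ x_c / T)
loadings = eigvec[:, -2:]                         # top-2 eigenvectors, n x 2
factors = x_c @ loadings                          # T x 2 estimated static factors
chi_hat = factors @ loadings.T                    # estimated common component

chi_c = chi - chi.mean(axis=0)
r2 = 1 - np.sum((chi_c - chi_hat) ** 2) / np.sum(chi_c ** 2)
print(f"In-sample R^2 of the estimated common component: {r2:.3f}")
```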

Article

Cross-cultural measurement is an important topic in social work research and evaluation. Measuring health-related concepts accurately is necessary for researchers and practitioners who work with culturally diverse populations. Social workers use measurements or instruments to assess health-related outcomes in order to identify risk and protective factors for vulnerable, disadvantaged populations. Culturally validated instruments are necessary, first, to identify the evidence of health disparities for vulnerable populations. Second, measurements are required to accurately capture health outcomes in order to evaluate the effectiveness of interventions for cross-cultural populations. Meaningful, appropriate, and practical research instruments, however, are not always readily available. They may be biased when used with populations that differ in race and ethnicity, tribal affiliation, immigration and refugee status, gender identity, religious affiliation, social class, and mental or physical ability. Social work researchers must have culturally reliable and valid research instruments to accurately measure social constructs and ensure the validity of outcomes with cultural populations of interest. In addition, culturally reliable and valid instruments are necessary for research that involves comparisons across different cultural groups. Instruments must capture the same conceptual understanding in outcomes across different cultural groups to create a basis for comparison. Cross-cultural instruments must also detect the same magnitude of change in health outcomes, in order to accurately determine the impact of factors in the social environment as well as the influence of micro-, mezzo-, and macro-level interventions. This reference provides an overview of issues and techniques of cross-cultural measurement in social work research and evaluation. Applying systematic, methodological approaches to develop, collect, and assess cross-cultural measurements will lead to more reliable and valid data for cross-cultural groups.

Article

The Hou–Xue–Zhang q-factor model says that the expected return of an asset in excess of the risk-free rate is described by its sensitivities to the market factor, a size factor, an investment factor, and a return on equity (ROE) factor. Empirically, the q-factor model shows strong explanatory power and largely summarizes the cross-section of average stock returns. Most important, it fully subsumes the Fama–French 6-factor model in head-to-head spanning tests. The q-factor model is an empirical implementation of the investment-based capital asset pricing model (the Investment CAPM). The basic philosophy is to price risky assets from the perspective of their suppliers (firms), as opposed to their buyers (investors). Mathematically, the investment CAPM is a restatement of the net present value (NPV) rule in corporate finance. Intuitively, high investment relative to low expected profitability must imply low costs of capital, and low investment relative to high expected profitability must imply high costs of capital. In a multiperiod framework, if investment is high next period, the present value of cash flows from next period onward must be high. Consisting mostly of this next-period present value, the benefits to investment this period must also be high. As such, high investment next period relative to current investment (high expected investment growth) must imply high costs of capital (to keep current investment low). As a disruptive innovation, the investment CAPM has broad-ranging implications for academic finance and asset management practice. First, the consumption CAPM, of which the classic Sharpe–Lintner CAPM is a special case, is conceptually incomplete. The crux is that it blindly focuses on the demand for risky assets, while abstracting from the supply altogether. Alas, anomalies are primarily relations between firm characteristics and expected returns. By focusing on the supply, the investment CAPM is the missing piece of equilibrium asset pricing. Second, the investment CAPM retains efficient markets, with cross-sectionally varying expected returns, depending on firms’ investment, profitability, and expected growth. As such, capital markets follow standard economic principles, in sharp contrast to the teachings of behavioral finance. Finally, the investment CAPM validates Graham and Dodd’s security analysis on equilibrium grounds, within efficient markets.
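In equation form, restating the description above with standard factor labels rather than quoting the article, the q-factor model prices asset $i$ as $$E[R_i] - R_f = \beta^{\mathrm{Mkt}}_i\, E[\mathrm{MKT}] + \beta^{\mathrm{Me}}_i\, E[R_{\mathrm{Me}}] + \beta^{I/A}_i\, E[R_{I/A}] + \beta^{\mathrm{Roe}}_i\, E[R_{\mathrm{Roe}}],$$ where $\mathrm{MKT}$, $R_{\mathrm{Me}}$, $R_{I/A}$, and $R_{\mathrm{Roe}}$ are the market, size, investment, and return-on-equity factor premiums, and the $\beta$'s are the asset's sensitivities to them.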

Article

Boele De Raad and Boris Mlačić

The field of dispositional traits of personality is best summarized in terms of five fundamental dimensions: the Big Five personality trait factors, namely Extraversion, Agreeableness, Conscientiousness, Emotional Stability, and Intellect. The Big Five find their origin in psycho-lexical work in which the lexicon of a language is scanned for all words that can inform about personality traits. The Big Five factors have emerged most articulately in Indo-European languages in Europe and the United States, and weaker versions have appeared in non-Indo-European languages. The model is most functional and detailed in a format that integrates simple structure and circular representations. Such a format gives the Big Five system great accommodative potential, meaning that many or most of the concepts developed in approaches other than the Big Five can be located in that system, thus enhancing the communication about personality traits in the field. The Big Five model has been applied in virtually all disciplines of psychology, including clinical, social, organizational, and developmental psychology. In particular, the Big Five have been found useful in the field of learning and education, where the factor Conscientiousness has been identified as a strong predictor of academic performance, but where other factors of the Big Five also have been demonstrated to play important roles, often in a moderating or mediating sense. The Big Five model has faced a number of critical issues, one of which concerns the criteria of inclusion of trait-descriptive words from the lexicon. With relaxed criteria, allowing more than just dispositional trait words (e.g., trait words that are predominantly evaluative in nature), additional dimensions may emerge beyond the Big Five, mostly conveying features of morality. An important issue regards the cross-cultural applicability of trait-descriptive dimensions. With a cross-cultural emphasis, possibly no more than three factors, expressive of traits of Extraversion, Agreeableness, and Conscientiousness, stand the best chance of supporting claims of universality. For a good understanding of traits representing the remaining Big Five dimensions, and also dimensions that have sometimes been identified beyond the Big Five, it is important not only to specify their regional applicability but also to articulate differences in research methodology.