Printed from Oxford Research Encyclopedias, Business and Management. Under the terms of the licence agreement, an individual user may print out a single article for personal use (for details see Privacy Policy and Legal Notice).

date: 05 March 2021

Meta-Analytic Structural Equation Modeling

• Mike W.-L. Cheung, Department of Psychology, National University of Singapore

Summary

Meta-analysis and structural equation modeling (SEM) are two popular statistical methods in the social, behavioral, and management sciences. Meta-analysis summarizes research findings to provide an estimate of the average effect and its heterogeneity. When there is moderate to high heterogeneity, moderators such as study characteristics may be used to explain the heterogeneity in the data. On the other hand, SEM includes several special cases, including the general linear model, path model, and confirmatory factor analytic model. SEM allows researchers to test hypothetical models with empirical data. Meta-analytic structural equation modeling (MASEM) is a statistical approach combining the advantages of both meta-analysis and SEM for fitting structural equation models on a pool of correlation matrices. There are usually two stages in the analyses. In the first stage of analysis, a pool of correlation matrices is combined to form an average correlation matrix. In the second stage of analysis, proposed structural equation models are tested against the average correlation matrix. MASEM enables researchers to synthesize research findings using SEM as the research tool in primary studies. There are several popular approaches to conducting MASEM, including the univariate-r, generalized least squares, two-stage SEM (TSSEM), and one-stage MASEM (OSMASEM). MASEM helps to answer the following key research questions: (a) Are the correlation matrices homogeneous? (b) Do the proposed models fit the data? (c) Are there moderators that can be used to explain the heterogeneity of the correlation matrices? The MASEM framework has also been expanded to analyze large datasets or big data with or without the raw data.

Introduction

It is of methodological and practical importance to understand how seemingly unrelated statistical methods are connected. Consider the development of the dominant modeling technique called structural equation modeling (SEM) as an example (e.g., Bentler, 1986; Bollen, 1989; Jöreskog, 1970). SEM provides a flexible modeling framework for testing complicated models involving latent and observed variables. It integrates ideas of latent variables in psychology, path models in sociology, and structural models in economics. The general linear model, path analysis, and confirmatory factor analytic model are some special cases of SEM. It has been found that some specialized models such as item response theory (e.g., Glockner-Rist & Hoijtink, 2003; Takane & Deleeuw, 1987), categorical data analysis (Muthén, 1984), and multilevel models (e.g., Bauer, 2003; Curran, 2003) can be integrated into the SEM framework. Analysis of missing data and robust test statistics on nonnormal data are also integrated into SEM. SEM provides a unified framework for social, behavioral, and management scientists to test their models with empirical data.

Meta-analysis, a term coined by Gene Glass, is “the statistical analysis of a large collection of analysis results from individual studies for the purpose of integrating the findings” (Glass, 1976, p. 3). Since Glass introduced the term to the scientific community, meta-analysis has become the de facto standard for synthesizing research findings in many disciplines, including medicine, psychology, and the management sciences. Many highly cited papers are meta-analyses. Meta-analysis and SEM are generally treated as two unrelated topics in the literature (see Cheung, 2008). They have their own journals—Structural Equation Modeling: A Multidisciplinary Journal and Research Synthesis Methods. Researchers with knowledge in one area may not be aware of the benefits, techniques, and issues of the other area. The separation of these two techniques hinders the development of methods utilizing the benefits of both.

Meta-analytic structural equation modeling (MASEM) is a new development to combine meta-analysis with SEM. It has also been known as, for example, meta-analytic path analysis (Colquitt, LePine, & Noe, 2000), meta-analysis of factor analysis (Becker, 1996), meta-analytical structural equations analysis (Hom, Caranikas-Walker, Prussia, & Griffeth, 1992), path analysis of meta-analytically derived correlation matrices (Eby, Freeman, Rush, & Lance, 1999), SEM of a meta-analytic correlation matrix (Conway, 1999), path analysis based on meta-analytic findings (Tett & Meyer, 1993), and model-based meta-analysis (Becker, 2009) in the literature. Here, the generic term MASEM is used to represent this class of techniques.

Phenomena in the social, behavioral, and management sciences are complicated, involving different constructs and sources of influences. To study these phenomena, researchers have to simplify them into models involving a few key theoretical constructs. SEM is one popular modeling framework to test the empirical relationships among the constructs. Besides representing structural equation models as mathematical equations, research questions can also be formulated as graphical models. Figure 1 displays a two-factor confirmatory factor analytic model. One latent variable ($f1$) is measured by three indicators ($x1$ to $x3$), whereas the other latent variable ($f2$) is measured by another three indicators ($x4$ to $x6$). Observed and latent variables are represented by squares (or rectangles) and circles (or ellipses), respectively. One-way arrows represent the direct impact or regression coefficient, whereas two-way arrows show the covariances among the variables. When a two-way arrow applies to the same variable, it represents either the variance or its error variance. Using this set of notations, researchers may translate many complicated mathematical models into graphical models for ease of understanding.
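In matrix form, the measurement model sketched in Figure 1 can be written as follows (a standard CFA parameterization with generic symbols, not notation taken from the article):

```latex
x = \Lambda f + e,
\qquad
\Lambda =
\begin{pmatrix}
\lambda_{11} & 0 \\ \lambda_{21} & 0 \\ \lambda_{31} & 0 \\
0 & \lambda_{42} \\ 0 & \lambda_{52} \\ 0 & \lambda_{62}
\end{pmatrix},
\qquad
\operatorname{Var}(f) = \Phi =
\begin{pmatrix} 1 & \phi_{21} \\ \phi_{21} & 1 \end{pmatrix}
```

with model-implied covariance matrix $\Sigma = \Lambda \Phi \Lambda^\top + \Theta_e$, where $\Theta_e$ is the diagonal covariance matrix of the errors $e$. The zeros in $\Lambda$ encode the claim that $x1$ to $x3$ load only on $f1$ and $x4$ to $x6$ only on $f2$.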

After collecting data, researchers may test their proposed model by comparing the model against the data. If the proposed models are consistent with the data, they can be tentatively used to describe the processes. On the other hand, if the proposed models are not consistent with the data, it suggests that the proposed models may not be good candidates to explain the underlying processes. Because the proposed models are only approximations of the complicated phenomena, researchers rarely believe that the proposed models are correct. Instead of relying on the statistical tests, many researchers prefer to use goodness-of-fit indices such as the root mean square error of approximation (RMSEA), standardized root mean square residual (SRMR), comparative fit index (CFI), and nonnormed fit index (NNFI, also known as the Tucker-Lewis index), to assess how good the proposed models are.

Although SEM is a powerful tool in testing hypothesized models, it is recognized that findings in a single study rarely provide enough evidence on a topic of interest. There are several reasons. First, researchers may propose different theoretical models supported by their own data, and it is difficult to compare and synthesize them systematically. It means that it may not be easy to tell which model has better support empirically. Second, many researchers are reluctant to consider alternative models (e.g., MacCallum & Austin, 2000). Researchers may stop considering other theoretically plausible models once the data do not reject the proposed models, even though there may be better models to explain the data. Hence, conducting more empirical research does not necessarily decrease the uncertainty surrounding a particular topic if the findings from that research are inconsistent (National Research Council, 1992). More importantly, it has recently been argued that many published studies are not replicable (e.g., Francis, 2012; Open Science Collaboration, 2015). Researchers need to synthesize a pool of studies in order to advance science on a particular topic. MASEM provides a tool to address these issues by testing various theoretical models on a pool of correlation matrices.

Ideas for combining meta-analysis with SEM appeared in the literature in the late 1980s. For instance, using a total sample size of 1,474 participants from four studies, Schmidt, Hunter, and Outerbridge (1986) compared several path models of job experience on job knowledge, performance capability as measured by job sample tests, and supervisory ratings of job performance. Premack and Hunter (1988) studied models of six variables on the process of individual unionization decisions and tested their models against 14 studies with a total of 53,768 participants. More recent applications are the association between task conflict and group performance (de Wit, Greer, & Jehn, 2012), the relationship between competition and performance (Murayama & Elliot, 2012), and the effect of psychological ownership on attachment in the workspace (Zhang, Liu, Zhang, Xu, & Cheung, 2020). Several methodological papers (Bergh et al., 2016; Cheung & Hong, 2017; Landis, 2013; Sheng, Kong, Cortina, & Hou, 2016) and a special issue (Cheung & Hafdahl, 2016) have been published to summarize methodological issues in MASEM. Cheung (2015a) provides a book-length treatment on the statistical details on MASEM, whereas Jak (2015) gives a tutorial focusing on the two-stage SEM (TSSEM) approach, which will be introduced later.

Approaches to Conducting MASEM

MASEM usually involves two stages of analysis. In the first stage, the correlation matrices are meta-analytically combined to form an average correlation matrix. This stage of analysis helps to address the potential missing data (correlation matrices) in the primary studies. In the second stage of analysis, the average correlation matrix is used to fit structural equation models. Exact and approximate fit indices may be used to evaluate the appropriateness of the proposed models. It seems straightforward to conduct MASEM—conducting a meta-analysis and then fitting a structural equation model on the average correlation matrix. However, many researchers ignore the fact that meta-analysis and SEM are based on different terminologies and statistical assumptions. Blindly combining them may lead to incorrect statistical inferences (Cheung, 2019). Next, some of these approaches to conducting MASEM are reviewed.

There are two models in meta-analyses—fixed and random-effects models. A fixed-effects model, also known as a common-effects model, assumes that the sample effect sizes are functions of the true population effect sizes and the sampling errors. On the other hand, a random-effects model hypothesizes that each study has its own study-specific population effect sizes. Furthermore, it is assumed that the study-specific population effect sizes follow a multivariate distribution with the average population effects and heterogeneity variances and covariances. This article mainly focuses on the random-effects model as the fixed-effects model is a special case of the random-effects model when the heterogeneity variances and covariances are zero.
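To make the fixed- versus random-effects distinction concrete, the following sketch pools a set of correlations with Fisher's z transformation and a DerSimonian-Laird heterogeneity estimate. It is purely illustrative: the function name and data are hypothetical, and an actual MASEM analysis would use multivariate methods such as those in the metaSEM package rather than this univariate shortcut.

```python
import numpy as np

def random_effects_meta(r, n):
    """Random-effects meta-analysis of correlations via Fisher's z and
    the DerSimonian-Laird tau^2 estimator (illustrative sketch)."""
    r, n = np.asarray(r, float), np.asarray(n, float)
    z = np.arctanh(r)              # Fisher's z transform
    v = 1.0 / (n - 3.0)            # known sampling variances of z
    w = 1.0 / v                    # fixed-effects weights
    z_fe = np.sum(w * z) / np.sum(w)
    q = np.sum(w * (z - z_fe) ** 2)          # Q statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(z) - 1)) / c)  # DL heterogeneity estimate
    w_re = 1.0 / (v + tau2)                  # random-effects weights
    z_re = np.sum(w_re * z) / np.sum(w_re)
    return np.tanh(z_re), tau2               # back-transform to r

avg_r, tau2 = random_effects_meta([0.3, 0.5, 0.4], [100, 150, 120])
```

When `tau2` is estimated as zero, the random-effects weights reduce to the fixed-effects weights, which is the sense in which the fixed-effects model is a special case.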

Before introducing the specific approaches, the article outlines some key procedures and decisions in conducting MASEM summarized by Jak and Cheung (2020) (see Cheung [2015a], and Viswesvaran & Ones [1995] for more details, and also Cooper [2010] for some general issues in meta-analysis):

1.

Identify key research questions, constructs, measurements, and structural equation models: Before conducting the MASEM, researchers have to formulate research questions and identify all the relevant key constructs, measurement models, and structural equation models. If the proposed models are too small, with too few variables, the usefulness of the findings may be limited. On the other hand, there may be a great deal of incomplete data if the proposed models are too large. In the worst-case scenario, there may be no study or just a few studies for some elements of the correlation matrix. If this happens, the proposed models have to be revised or trimmed down to remove the constructs without data. Researchers may have to find a balance in formulating the models. Moreover, it is also advisable to formulate several theoretically meaningful models based on the literature for future testing. This makes the analyses more theoretically driven.

2.

Formulate clear inclusion and exclusion criteria. This step is essential in all meta-analyses, including MASEM, because it provides theoretical justification for whether the selected studies can be meaningfully combined. For example, researchers may need to answer whether it makes sense to combine studies from different populations, such as student samples versus working samples. The inclusion and exclusion criteria will affect the generalizability of the findings. If the data are based on student samples, the findings cannot be generalized to adult working populations.

3.

Identify and extract the relevant data, including correlation matrices, sample sizes, and study characteristics (moderators). The primary inputs in a meta-analysis are the effect sizes and their sampling variances. In the context of MASEM, the correlation matrices and their sample sizes are needed. There are likely incomplete data because the primary studies are conducted by different researchers independently. Different approaches may handle incomplete data differently. Moreover, researchers should attempt to retrieve unpublished data, such as dissertations, conference presentations, and technical reports, to minimize the influence of publication bias. One common test is to treat the published versus unpublished data as a moderator. Whether the findings are influenced by the publication type can then be tested.

4.

Choose an appropriate approach to combine the correlation matrices and fit the structural equation models. There are several approaches to conduct MASEM. Generally speaking, they can be classified under the univariate approach and multivariate approach. The univariate approach, for instance, the univariate-r (Viswesvaran & Ones, 1995), treats the elements of the correlation matrices as if they were independent in combining the correlation matrices. The average correlation matrix is then used to fit structural equation models as if it was an observed covariance matrix. On the other hand, the multivariate approach, for example, the generalized least squares (GLS; Becker, 1992, 1995), the TSSEM (Cheung, 2014a; Cheung & Chan, 2005b, 2009) and the one-stage MASEM (OSMASEM; Jak & Cheung, 2020), takes the dependence of the correlations in the correlation matrix into account when meta-analyzing the correlation matrices. More importantly, the multivariate approach also considers the estimation uncertainty in fitting the structural equation models. Readers may refer to Cheung (2015b) and Cheung and Hafdahl (2016) for details on the differences between these approaches.

Univariate Approach

The most popular method is the univariate-r approach proposed by Viswesvaran and Ones (1995). The first step of the univariate-r is to conduct several univariate meta-analyses (e.g., Hunter & Schmidt, 1990) on the correlation coefficients. If the structural equation model involves five variables, there is a total of $5×4/2$, or $10$, correlation coefficients representing the bivariate relationships in the model. Researchers conduct $10$ univariate meta-analyses on these bivariate relationships. After the first stage of analysis, there is a $5×5$ average correlation matrix. This correlation matrix is then used as if it was an observed covariance matrix in fitting structural equation models using standard SEM packages such as Mplus or Amos in stage-two analysis. The main advantage of this approach is its ease of application—researchers just need to conduct several meta-analyses and SEM. It was found (Rosopa & Kim, 2017; Sheng et al., 2016) that the univariate-r approach dominates the literature of human resource management, management, and organizational studies.
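The stage-one pooling of the univariate approach can be sketched as a cell-by-cell, sample-size-weighted average in the spirit of Hunter and Schmidt, with each cell using only the studies that report it (pairwise aggregation). The function name and data are hypothetical; missing correlations are coded as NaN.

```python
import numpy as np

def pool_correlations(R_list, n_list):
    """Elementwise sample-size-weighted average of correlation matrices.
    Pairwise aggregation: each cell uses only the studies reporting it."""
    R = np.array(R_list, float)               # shape (k, p, p); NaN = missing
    n = np.array(n_list, float)[:, None, None]
    mask = ~np.isnan(R)
    w = np.where(mask, n, 0.0)                # zero weight for missing cells
    avg = np.sum(np.where(mask, w * R, 0.0), axis=0) / w.sum(axis=0)
    n_per_cell = w.sum(axis=0)                # sample size behind each cell
    return avg, n_per_cell

# Hypothetical data: study 2 does not report the (x1, x2) correlation
R1 = [[1.0, 0.3, 0.2], [0.3, 1.0, 0.4], [0.2, 0.4, 1.0]]
R2 = [[1.0, np.nan, 0.4], [np.nan, 1.0, 0.2], [0.4, 0.2, 1.0]]
avg, n_cell = pool_correlations([R1, R2], [100, 200])
```

Note that `n_per_cell` differs across cells, which is exactly the problem discussed below: SEM software expects a single sample size for the whole matrix.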

Although the univariate approach is easy to apply, it suffers from several statistical difficulties (Cheung, 2015b; Cheung & Chan, 2005b). The first step of meta-analysis in combining correlation matrices is fine if researchers are only interested in testing the average correlation coefficients. However, there are several statistical issues in fitting structural equation models. Because of the incomplete data in the primary studies, pairwise aggregation (or deletion) is usually used to meta-analyze the correlation coefficients in the univariate-$r$ approach. Elements of the average correlation matrix are likely based on different numbers of studies, whereas a single sample size is required in SEM. Several ad-hoc procedures have been used to calculate an “average” sample size in SEM. These include, for example, the arithmetic mean (Premack & Hunter, 1988), the harmonic mean (Conway, 1999), the median (Brown & Peterson, 1993), or the total (Tett & Meyer, 1993) of the sample sizes behind the synthesized correlation coefficients. Because the test statistics, some goodness-of-fit indices, and the standard errors of parameter estimates depend on the sample size used in SEM (Bollen, 1989, 1990), using different sample sizes in the analysis may lead to different results and conclusions. No matter which sample size is used, it is hard to obtain the correct test statistics and standard errors.

The elements of the average correlation matrix are likely combined using pairwise aggregation because of missing data. Such matrices may be nonpositive definite, meaning that the pooled correlations are mutually inconsistent, for example, implying a multiple correlation larger than 1. If this happens, these correlation matrices cannot be used to fit structural equation models. Even if the average correlation matrix is positive definite, its statistical properties are still questionable in SEM because different elements of the pooled correlation matrix are probably based on different samples (Marsh, 1998; Wothke, 1993).
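Positive definiteness can be checked directly from the eigenvalues of the pooled matrix. The following sketch (with a deliberately inconsistent, hypothetical matrix) shows how a pairwise-pooled matrix can fail the check even though each individual correlation is within the admissible range:

```python
import numpy as np

def is_positive_definite(R, tol=1e-8):
    """A symmetric matrix is positive definite iff all eigenvalues > 0."""
    return bool(np.all(np.linalg.eigvalsh(R) > tol))

# Hypothetical pairwise-pooled matrix: r12 and r13 are high, but r23 is
# strongly negative, a pattern no single sample could produce.
R_bad = np.array([[1.0,  0.9,  0.9],
                  [0.9,  1.0, -0.8],
                  [0.9, -0.8,  1.0]])
```

Here `is_positive_definite(R_bad)` is false, so `R_bad` could not be used to fit a structural equation model.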

The third problem is ignoring the sampling uncertainty across studies. After obtaining an average correlation matrix, the average correlation coefficients’ precision is not incorporated in fitting the structural equation models. Let us consider a simple example with a $3×3$ average correlation matrix with three correlation coefficients. For ease of illustration, suppose that the average correlation coefficients of these three correlation coefficients are all $0.5$ but with different precision (sampling variances = $0.1$, $0.2$, and $0.3$). When this average correlation matrix is used in fitting the structural equation model, the SEM packages, for instance, Mplus or Amos, do not know that some correlation coefficients are more precise than others. The lack of information on the precision will lead to biased test statistics and standard errors in SEM.

The last issue is analyzing a correlation matrix as if it was a covariance matrix. It is well-known in the SEM literature that it is advisable not to analyze the correlation matrix as if it was a covariance matrix. It is because the diagonals of a correlation matrix are always one, whereas the diagonals of a covariance matrix can be any non-negative values. The diagonals of a covariance matrix carry information, whereas the diagonals of a correlation matrix do not. Specifically, the chi-square statistics, goodness-of-fit indices, and the standard errors of parameter estimates may be incorrect if a correlation matrix is analyzed as if it was a covariance matrix (Cudeck, 1989; Jöreskog & Sörbom, 1996). The impact of analyzing a correlation matrix is more noticeable when there are constraints imposed on the models. It should be noted that correlation matrices can be correctly analyzed in SEM (Bentler & Savalei, 2010). The problem here is that these techniques cannot be implemented in the univariate-r approach. Simulation studies (Cheung & Chan, 2005b; Jak & Cheung, 2020) have shown that the test statistics of the univariate-r approach are overestimated while the standard errors are underestimated.

Generalized Least Squares

Becker (1992, 1995) proposed a GLS approach to conduct MASEM. Essentially, it is also a two-step approach. In the first step of the analysis, the correlation matrices are meta-analyzed via a multivariate approach by taking the sampling variance–covariance matrices of the correlations into account. Suppose that $Ri$ is a sample correlation matrix in the ith study. It can be transformed into a vector of correlation coefficients via the $vechs()$ operator, which takes the elements of the lower triangle without the diagonals in column-major order, that is, $ri=vechs(Ri)$. For example, if $Ri$ is a $5×5$ correlation matrix, $ri$ will be a column vector with $10$ correlation coefficients. The model for a multivariate meta-analysis is

$r_i = \rho + u_i + e_i,$ (1)

where $ρ$ is the vector of average correlations, $ui$ is the vector of random effects, and $ei$ is the vector of sampling errors. The heterogeneity matrix $Tρ2=Var(ui)$ indicates how heterogeneous the population correlation coefficients are, whereas the variance–covariance matrix of the sampling errors $Vi=Var(ei)$ is assumed known and calculated before the analysis (Olkin & Siotani, 1976). The sampling covariance matrices $Vi$ shrink as the sample sizes increase. In contrast, the heterogeneity matrix $Tρ2$ represents the true variability of the study-specific correlation coefficients at the population level.
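The $vechs()$ operator itself is simple to sketch; this is an illustrative Python version (the article's analyses use R, and this is not the metaSEM implementation):

```python
import numpy as np

def vechs(R):
    """Stack the strictly lower-triangular elements of R, column by
    column (column-major order), excluding the diagonal."""
    R = np.asarray(R)
    rows, cols = np.tril_indices(R.shape[0], k=-1)
    # tril_indices is row-major; re-sort by column for column-major order
    order = np.lexsort((rows, cols))
    return R[rows[order], cols[order]]
```

For a $3×3$ correlation matrix this returns the vector $(r_{21}, r_{31}, r_{32})$, and for a $5×5$ matrix a vector of length $10$, matching the example in the text.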

After the estimation, there are an average correlation vector $ρ^$, its asymptotic sampling covariance matrix $V^ρ$, which indicates the precision of the estimates, and a heterogeneity matrix $T^ρ2$. Becker used matrix calculations to obtain the parameter estimates of regression models and some path models. She also showed how the standard errors of the regression coefficients could be obtained via the multivariate delta method. The GLS approach solves all the problems mentioned for the univariate approach. However, its main limitation is that it is not easy to apply because matrix calculations are required. Moreover, it is limited to fitting regression models and some path models without any latent variables.
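The core GLS pooling step can be sketched in a few lines. For brevity this sketch is fixed-effects only (no $Tρ2$), with hypothetical data and diagonal sampling covariance matrices; Becker's full approach adds the random effects and non-diagonal $Vi$:

```python
import numpy as np

def gls_pool(r_list, V_list):
    """Fixed-effects GLS pooled correlation vector and its covariance:
    rho_hat = (sum V_i^-1)^-1 (sum V_i^-1 r_i)  (illustrative sketch)."""
    W = sum(np.linalg.inv(V) for V in V_list)                  # total precision
    b = sum(np.linalg.inv(V) @ r for r, V in zip(r_list, V_list))
    V_pooled = np.linalg.inv(W)        # sampling covariance of the estimate
    return V_pooled @ b, V_pooled

# Two hypothetical studies, each reporting three correlations
r1 = np.array([0.3, 0.4, 0.5])
r2 = np.array([0.5, 0.4, 0.3])
V = 0.01 * np.eye(3)                   # known sampling covariance (assumed)
rho_hat, V_hat = gls_pool([r1, r2], [V, V])
```

With equal precision in both studies, the pooled vector is simply the elementwise average, and the pooled covariance is halved, reflecting the doubled precision.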

Two-Stage Structural Equation Modeling

Building on Becker’s idea, Cheung (2014a; Cheung & Chan, 2005b, 2009) proposed a TSSEM approach to conduct MASEM. The statistical model is nearly the same as in Becker (1992). The critical difference is that the TSSEM approach uses the SEM framework in both stages of analysis. Therefore, advanced features, including handling missing data with full-information maximum likelihood (FIML) estimation, likelihood-based confidence intervals, multiple-group analysis, and robust standard errors, can be applied in MASEM.

In the first stage of analysis, the model is the same as that in equation (1). Instead of using the GLS as in Becker’s approach, the TSSEM approach uses FIML estimation. FIML is unbiased and efficient in handling missing data (correlation coefficients in MASEM) when the missingness is either missing completely at random (MCAR) or missing at random (MAR; Enders, 2010). Theoretical results and computer simulations show that it works better than the listwise or pairwise deletion used in the univariate approach. When the missing data are missing not at random (MNAR), for example, in the presence of publication bias, no approach is unbiased (Furlow & Beretvas, 2005). However, the bias of the FIML is still less than that of the pairwise deletion (Jamshidian & Bentler, 1999). Thus, FIML is usually the preferred choice in handling missing data in data analysis (Enders, 2010).

In stage-two analysis, researchers propose a structural equation model, which can be a regression model, path model, confirmatory factor analytic model, or a full structural equation model, on the correlation matrix. The notation for the proposed correlation structure is $ρ=ρ(θ)$, meaning that the population correlation vector $ρ$ is a function of some unknown parameters $θ$. Because the estimates from the stage-one analysis are used as inputs in the stage-two analysis, the “hats” in $ρ^$ and $V^ρ$ are dropped to simplify the notation: $r=ρ^$ and $Vr=V^ρ$ are used as the data inputs for the stage-two analysis. The structural equation model is then fitted by minimizing the fit function $F(θ)$ with the weighted least squares (WLS) estimation method (Browne, 1984),

$F(\theta) = (r - \rho(\theta))^\top V_r^{-1} (r - \rho(\theta)).$ (2)

In words, the aim is to find the parameter estimates that minimize the discrepancy between the data $r$ and the proposed model $ρ(θ)$, taking the precision matrix $Vr$ into account. Although the WLS estimation was developed in the SEM literature, it follows the same principle as estimation in meta-analysis (Cheung, 2010)—the average effect sizes are estimated by weighting the effect sizes by their precision. Standard test statistics and various goodness-of-fit indices may be used to test the exact and approximate fit of the proposed model. Each parameter estimate divided by its standard error approximately follows a standard normal distribution.
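A minimal stage-two sketch: fitting a one-factor model to a pooled correlation vector by minimizing the WLS fit function of equation (2). The correlations, precisions, and starting values are hypothetical, and a diagonal $Vr$ is assumed for simplicity; real analyses would use the full stage-one covariance matrix via metaSEM.

```python
import numpy as np
from scipy.optimize import minimize

# Pooled correlations (vechs order: r21, r31, r32) and their precision
# (hypothetical values; Vr taken as diagonal for simplicity)
r = np.array([0.35, 0.28, 0.20])
Vr = np.diag([0.0010, 0.0015, 0.0020])
Vr_inv = np.linalg.inv(Vr)

def implied(theta):
    """One-factor model: rho_jk = lambda_j * lambda_k."""
    l1, l2, l3 = theta
    return np.array([l1 * l2, l1 * l3, l2 * l3])

def F(theta):
    """WLS fit function of equation (2)."""
    d = r - implied(theta)
    return d @ Vr_inv @ d

fit = minimize(F, x0=np.array([0.5, 0.5, 0.5]))
loadings = fit.x
```

Because the fit function weights each residual by its precision, the better-estimated correlations pull the solution harder, mirroring the weighting logic of meta-analysis.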

In comparison to the univariate-$r$ approach, the TSSEM approach has several advantages. Recall the four technical issues in the univariate-r approach, namely, (a) the ad-hoc choice of a sample size in SEM; (b) potential nonpositive definiteness of the average correlation matrix; (c) the ignoring of the sampling variances in SEM; and (d) the analysis of the correlation matrix as if it was a covariance matrix in SEM. The second issue is resolved by the use of FIML estimation to handle the missing data; pairwise aggregation is more likely to create nonpositive definite matrices than approaches such as FIML (Wothke, 1993). The use of the WLS method addresses the remaining issues. As can be seen in equation (2), the fit function does not involve any sample size. Therefore, the parameter estimates, standard errors, and the model fit statistics are identical regardless of the sample size used in the analysis (Cheung & Chan, 2005b). The sample size used in SEM, however, still slightly influences some goodness-of-fit indices that explicitly include the sample size in their calculations. In both the TSSEM and OSMASEM approaches, the total sample size is used for convenience.

When the degree of heterogeneity is enormous, an average effect may not be very useful. Researchers may want to identify moderators that may help to explain the heterogeneity. This step is similar to identifying a moderating effect in multiple regression. If the moderators are categorical, for instance, types of samples and regions, a subgroup analysis may be used (Cheung & Chan, 2005a; Jak & Cheung, 2018). In the stage-one analysis, the correlation matrices are meta-analyzed separately according to their groups. In the stage-two analysis, the same structural equation model is fitted on the averaged correlation matrices via a multiple-group SEM. Some of the parameters can be constrained to be equal across groups to test research hypotheses related to the moderators. If the subgroup analysis fits much better than a single-group analysis, it suggests that the categorical moderator may explain the heterogeneity.

One-Stage MASEM

The TSSEM approach works well when there is no moderator or when the moderators are categorical. When the moderators are continuous, researchers would have to categorize the continuous moderator into several categories. However, categorizing continuous moderators may lead to several issues, such as loss of information about individual differences, loss of power, the occurrence of spurious significant main effects or interactions, and risks of overlooking nonlinear effects (MacCallum, Zhang, Preacher, & Rucker, 2002). Jak and Cheung (2020) proposed a novel approach to address these issues. The model of OSMASEM is very similar to that of the GLS and TSSEM approaches, with $ρ$ replaced by $ρ(θi)$, that is,

$r_i = \rho(\theta_i) + u_i + e_i.$ (3)

In plain language, the model means that the sample correlation matrix $ri$ is a function of the proposed correlation structure $ρ(θi)$ after taking the heterogeneity variance $Tρ2=Var(ui)$ and known sampling error $Vi=Var(ei)$ into account. Although the changes look minor, there are two important differences. First, the proposed correlation structure $ρ(θi)$ is included in equation (3). Instead of two stages of analysis, as in the TSSEM approach, there is only one stage of estimation with the FIML estimation method. In other words, the proposed structural equation models are fitted directly on the correlation matrices without estimating an average correlation matrix. Second, there is a subscript $i$ in $ρ(θi)$, meaning that the correlation structure may vary according to the moderators in the ith study. Categorical and continuous moderators can be added to the proposed structural equation models by the use of definition variables. Definition variables allow researchers to set the structural parameters (for example, path coefficients) as functions of observed data (for example, moderators). In other words, moderators can be used to predict the structural parameters in the models.
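The definition-variable idea can be shown in miniature: model a single $x→y$ path in study $i$ as $b_0 + b_1 m_i$, where $m_i$ is a study-level moderator, and estimate $b_0$ and $b_1$ by precision-weighted least squares. All data are hypothetical, and this one-correlation sketch stands in for the full OSMASEM, which does this simultaneously for a whole correlation structure with FIML.

```python
import numpy as np

# The x -> y path in study i is modeled as b0 + b1 * m_i,
# where m_i is a study-level moderator (definition-variable idea).
r_obs = np.array([0.24, 0.31, 0.36])   # observed x-y correlations (hypothetical)
v = np.array([0.004, 0.003, 0.005])    # known sampling variances
m = np.array([-1.0, 0.0, 1.0])         # centered moderator values

X = np.column_stack([np.ones_like(m), m])   # design matrix [1, m_i]
W = np.diag(1.0 / v)                        # precision weights
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ r_obs)
b0, b1 = beta                               # intercept and moderator effect
```

A positive `b1` would indicate that the path coefficient increases with the moderator, which is exactly the kind of hypothesis OSMASEM tests without categorizing the moderator.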

After fitting the OSMASEM, researchers may test the model fit using exact and approximate fit indices. When there are moderators, researchers may test whether models with the moderators fit statistically better than models without them. Moreover, the explained variance in the correlation coefficients may be calculated, which helps to assess how well the moderators explain the heterogeneity of the correlation coefficients. Both the TSSEM and OSMASEM approaches are implemented in an R package called metaSEM (Cheung, 2015b), which is freely available in the open-source R environment (R Development Core Team, 2020).

Other Approaches

There are also a few other alternatives for conducting MASEM. Cheung and Cheung (2016) differentiated two conceptualizations of MASEM. The first is correlation-based MASEM, in which the correlation coefficients are used as the effect sizes in the meta-analyses; the unit of heterogeneity is the correlation coefficient. The univariate-$r$, GLS, TSSEM, and OSMASEM approaches fall into this category. The second type is parameter-based MASEM. Researchers first fit a structural equation model in each study to get the parameter estimates. The parameter estimates or functions of the parameters, such as regression coefficients, factor loadings, indirect effects, $R2$, and reliability estimates, are treated as effect sizes in the subsequent meta-analyses.
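The per-study step of parameter-based MASEM can be sketched for a standardized regression model: given a study's complete correlation matrix, the standardized coefficients of $y$ on the predictors are $\beta = R_{xx}^{-1} r_{xy}$. The function name, indices, and data are hypothetical.

```python
import numpy as np

def std_betas(R, xs, y):
    """Standardized regression coefficients of variable y on variables xs,
    computed from a study's correlation matrix: beta = Rxx^-1 rxy."""
    R = np.asarray(R, float)
    Rxx = R[np.ix_(xs, xs)]   # predictor intercorrelations
    rxy = R[xs, y]            # predictor-criterion correlations
    return np.linalg.solve(Rxx, rxy)

# Parameter-based MASEM: compute betas in each study, then meta-analyze
# them as effect sizes (this requires complete correlation matrices).
R1 = np.array([[1.0, 0.3, 0.5],
               [0.3, 1.0, 0.4],
               [0.5, 0.4, 1.0]])
betas_study1 = std_betas(R1, xs=[0, 1], y=2)
```

If any entry of `Rxx` or `rxy` is missing in a study, the betas cannot be computed for that study, which is the missing-data limitation discussed below.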

There are pros and cons to the correlation-based and parameter-based MASEM. The primary strength of the correlation-based MASEM is its ability to handle incomplete correlation coefficients in the first stage of analysis. For example, the TSSEM and OSMASEM use FIML to handle incomplete data effectively. The other strength of the correlation-based MASEM is that competing structural models can be tested and compared easily in the stage-two analysis. This step is crucial in MASEM as it allows researchers to test and compare several theoretically competing models. The major limitation of the correlation-based MASEM is that its heterogeneity is defined in the correlation coefficients rather than the structural parameters, such as regression coefficients and factor loadings. However, some researchers may have research questions targeting the heterogeneity of the structural parameters rather than the correlation coefficients.

On the other hand, parameter-based MASEM has some strengths. First and most important, research questions may involve functions of the structural parameters. For example, researchers conducting a meta-analysis on a mediation model may be more interested in how the direct and indirect effects vary across studies. Second, parameter-based MASEM quantifies the heterogeneity of the parameter estimates across studies, and moderators may be used to explain the variability of these effect sizes. Despite these advantages, several key limitations make parameter-based MASEM less useful. First and foremost, parameter-based MASEM does not work well when there are missing effect sizes, because a model, such as a multiple regression, has to be fitted in each primary study; when some correlation coefficients are missing, the multiple regression cannot be fitted. Another limitation is that parameter-based MASEM may not be appropriate for over-identified SEM models. If the proposed model does not fit the sample correlation matrix well, the validity of the parameter estimates (regression coefficients or factor loadings) and of the subsequent analyses is questionable. Moreover, it is difficult to compare competing models with parameter-based MASEM because the models provide no overall measure of model fit. Weighing these pros and cons, correlation-based MASEM is the more prevalent approach.
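The two-step logic of parameter-based MASEM can be sketched with a toy example: fit a model (here, a simple regression) in each primary study, then treat the resulting estimates as effect sizes to be pooled. The data, function names, and the unweighted pooling are all illustrative; a real analysis would weight each estimate by its sampling variance.

```python
import statistics

def ols_slope(x, y):
    """OLS slope of y on x, the structural parameter for one study."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

# Step 1: fit the model in each primary study (invented raw data)
studies = [
    ([1, 2, 3, 4], [1.1, 1.9, 3.2, 3.8]),
    ([1, 2, 3, 4], [0.8, 2.1, 2.9, 4.2]),
]
slopes = [ols_slope(x, y) for x, y in studies]

# Step 2: treat the estimates as effect sizes and pool them
# (unweighted mean for illustration only)
pooled_slope = statistics.fmean(slopes)
```

Note how this sketch exposes the limitation discussed above: step 1 requires each study to supply enough data (or correlations) to fit the model, so a study with missing variables simply drops out.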

Yu, Downes, Carter, and O’Boyle (2016) suggested an alternative approach, called full-information MASEM (FIMASEM), to address heterogeneity issues in MASEM. In the first stage of analysis, either the univariate-$r$ or the TSSEM approach is used to meta-analyze the correlation matrices. In the second stage of analysis, the average correlation matrix and its heterogeneity are treated as population values from which correlation matrices are generated with a parametric bootstrap. The simulated correlation matrices are then used to fit the proposed SEM, and the parameter estimates and various goodness-of-fit indices across the bootstrapped correlation matrices are obtained. Cheung (2018) identified a couple of human errors and statistical issues in Yu et al. (2016). Running a new simulation study under the same conditions as Yu et al. (2016), Cheung (2018) found that the heterogeneity of the parameter estimates (e.g., path coefficients) in FIMASEM is relatively unbiased, but that its goodness-of-fit indices are seriously misleading (see Yu, Downes, Carter, & O’Boyle, 2018, for their reply).
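The parametric-bootstrap step can be sketched as follows. For simplicity, this illustration draws a single correlation cell rather than whole correlation matrices (the full method draws matrices and must also handle draws that are not positive definite); the function name and values are invented.

```python
import random

def bootstrap_correlations(r_bar, tau2, n_draws=1000, seed=1):
    """Parametric bootstrap in the spirit of FIMASEM: treat the pooled
    correlation (r_bar) and its heterogeneity variance (tau2) as
    population values and simulate study-level correlations."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n_draws):
        r = rng.gauss(r_bar, tau2 ** 0.5)
        draws.append(max(-1.0, min(1.0, r)))  # clamp draws to [-1, 1]
    return draws

draws = bootstrap_correlations(r_bar=0.25, tau2=0.05)
# Each simulated correlation (matrix) would then be used to fit the
# proposed SEM, yielding distributions of estimates and fit indices.
```

Cheung's (2018) critique concerns what is done with these simulated matrices: the distribution of parameter estimates behaves reasonably, but the distribution of fit indices does not support the fit-assessment use that FIMASEM proposes.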

Ke, Zhang, and Tong (2019) proposed a Bayesian approach to conducting MASEM. The key idea is to treat the parameters $\theta_i$ as random. The model is

$\hat{\theta}_i = \theta_i + e_i, \quad \theta_i \sim \mathcal{N}(\bar{\theta}, T_\theta^2), \quad e_i \sim \mathcal{N}(0, V_i),$ (4)

where $\bar{\theta}$ is the average of the parameters, $T_\theta^2 = \mathrm{Var}(\theta_i)$ is the variance–covariance matrix (heterogeneity) of the parameters, and $V_i = \mathrm{Var}(e_i)$ is the known variance–covariance matrix of the sampling errors. One practical limitation, however, is that the model is not easy to implement, as it involves substantial programming. Because the approach is quite new, more studies are required to investigate its pros and cons.
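The hierarchical structure behind this model — study-specific parameters varying around an average, observed with known sampling-error variance — can be simulated in a few lines. The sketch below is univariate (scalars rather than the variance–covariance matrices above), and the function name and values are illustrative:

```python
import random

def simulate_study_estimates(theta_bar, tau2, sampling_vars, seed=0):
    """Simulate observed parameter estimates under a random-effects model:
    each study's true parameter theta_i is drawn around the average
    theta_bar with heterogeneity variance tau2, and the observed estimate
    adds sampling error with known variance v_i."""
    rng = random.Random(seed)
    estimates = []
    for v in sampling_vars:
        theta_i = rng.gauss(theta_bar, tau2 ** 0.5)          # between-study part
        estimates.append(theta_i + rng.gauss(0.0, v ** 0.5))  # sampling error
    return estimates

# Illustrative: 500 studies, average parameter 0.30, tau2 = 0.04, v_i = 0.01
ests = simulate_study_estimates(0.30, 0.04, [0.01] * 500)
```

In the Bayesian approach, inference runs in the opposite direction: given observed estimates like these, priors and MCMC sampling are used to recover $\bar{\theta}$ and $T_\theta^2$.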

An Illustration

In this section, a data set from Mathieu, Kukenberger, D’Innocenzo, and Reilly (2015) is used to demonstrate how to conduct the TSSEM and OSMASEM approaches to answer empirical research questions in management science. It should be noted that this article mainly focuses on illustrating the techniques. The analyses differ slightly from those reported in Mathieu et al. (2015), which used the univariate-$r$ approach, and readers should refer to Mathieu et al. (2015) for substantive interpretations. These authors were interested in the reciprocity of the cohesion–performance ($C−P$) relationship. Four-by-four correlation matrices between cohesion and performance at two time points were extracted from 15 studies, and the authors conducted two additional longitudinal studies on the same topic. As there are no missing data, there is a total of 17 four-by-four correlation matrices (see Mathieu et al., 2015, Table 1) with a total sample size of 737. Their Table 1 also provides information on the type of sample: each sample can be classified as either student or non-student.

Figure 2 shows the proposed path model (please ignore the estimates for the time being). The primary research questions are as follows: (a) What are the average correlation matrix and its heterogeneity variances? (b) What are the estimated parameters when fitting the model in Figure 2? (c) Are the effects between cohesion and performance symmetric? That is, is the effect $C1→C2$ the same as $P1→P2$, and is $C1→P2$ the same as $P1→C2$? (d) Does sample type (students versus non-students) moderate the path coefficients? The TSSEM approach is used to answer the first three research questions, whereas the OSMASEM approach is used to address the last research question. All the analyses were conducted with the metaSEM package (Cheung, 2015b) in the R statistical environment (R Development Core Team, 2020). The complete R code is available at mikewlcheung/code-in-articles.

Table 1 shows the average correlation matrix (lower triangle) and the estimated heterogeneity variances (upper triangle). The average correlations between cohesion and performance range from $0.25$ to $0.50$, which are small to moderate. The estimated heterogeneity variances range from $0.00$ to $0.05$, which is typical in applied psychology (Bosco, Aguinis, Singh, Field, & Pierce, 2015). In the stage-two analysis, the proposed model is fitted on the average correlation matrix. Because it is a just-identified model, there are no fit indices. The parameter estimates are shown in Figure 2; all of them are statistically significant at $α=0.05$. For the third research question, the hypothesis of equal regression coefficients ($C1→C2$ equal to $P1→P2$, and $C1→P2$ equal to $P1→C2$) is tested. The likelihood ratio statistic is not statistically significant, $χ2(df=2)=0.34, p=0.84$, suggesting that the effects between cohesion and performance are symmetric. The estimated coefficient for $C1→C2$ and $P1→P2$ is $0.50$, whereas the estimated coefficient for $C1→P2$ and $P1→C2$ is $0.13$.
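The reported $p$ value of the symmetry test is easy to verify by hand, because the chi-square survival function has a closed form when the degrees of freedom equal 2. The following quick check is a generic computation, not metaSEM output:

```python
import math

def chi2_sf_df2(x):
    """Survival function of a chi-square variable with df = 2,
    which has the closed form P(X > x) = exp(-x / 2)."""
    return math.exp(-x / 2)

# Likelihood ratio statistic for the symmetry hypothesis: chi2(df = 2) = 0.34
p = chi2_sf_df2(0.34)  # approximately 0.84, matching the reported p value
```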

Table 1. Parameter Estimates of Stage-One Analysis With the TSSEM Approach

      C1     P1     C2     P2
C1  1.00   0.05   0.05   0.00
P1  0.25   1.00   0.00   0.05
C2  0.55   0.26   1.00   0.03
P2  0.25   0.50   0.37   1.00

Note. C1 and P1 are Cohesion and Performance measured at time 1. C2 and P2 are Cohesion and Performance measured at time 2. Elements in the lower triangle are the average correlation coefficients. Elements in the upper triangle are the estimated heterogeneity variances.

The OSMASEM approach is used to answer the fourth research question, namely whether sample type (students versus non-students) moderates the path coefficients. Comparing the models with and without the sample type, the likelihood ratio test is statistically significant, $χ2(df=4)=12.18, p=0.02$, suggesting that some of the path coefficients differ by sample type. Further analyses using the $z$ statistic show that the path coefficient of $P1→C2$ differs between the two groups ($0.35$ for non-student samples and $0.08$ for student samples), with $p2c_1=−0.28, z=−2.60, p=0.0094$ in the R output, whereas the other path coefficients are not statistically different between the two samples.

Further Applications

MASEM is primarily used to summarize correlation matrices to test theoretical models. The techniques, however, are not limited to analyzing summary statistics; researchers may apply MASEM even when raw data are available. There are at least two such applications. First, raw data may involve private or confidential information, which makes them challenging to share with other researchers. Second, the datasets may be too big for typical workstations and may take a long time to process. Because MASEM relies only on summary statistics (correlation matrices), researchers in the management sciences can still test models in these situations.

Based on the popular split-apply-combine approach (Wickham, 2011) in R, Cheung and Jak (2016) proposed a split/analyze/meta-analyze (SAM) approach to analyze large datasets or data with only summary statistics such as correlation matrices. There are three steps in a SAM analysis. In the first step, the data are split into many independent “studies.” There are two methods of splitting the data, depending on the research questions and the nature of the data. If the data are already structured hierarchically, for instance by geographic location or year, the data can be split along these characteristics, which is termed a “stratified split” in their paper. If there is no nested structure, an arbitrary (random) split can be applied, which is termed a “random split” in their paper. How the data are split has implications for how the results are combined in the final step.

In the second step, each “study” is analyzed independently with the appropriate statistical model, such as a multiple regression, path model, structural equation model, or reliability analysis. As the large dataset has been broken into smaller pieces, this analysis is more manageable. In the final step, the parameter estimates from each “study” are treated as effect sizes and combined by a multivariate meta-analysis. A random-effects model is used when the data are split by a stratified split; otherwise, a fixed-effects model is used. Sample characteristics may be used as moderators in the multivariate meta-analysis. Cheung and Jak (2016) showed how the SAM approach could be used to analyze data in the social and behavioral sciences. It remains of interest whether this approach can be applied to answer research questions in the management sciences.

Another line of research integrating meta-analysis and SEM is known as SEM-based meta-analysis (Cheung, 2008), which utilizes the SEM framework to conceptualize and conduct meta-analyses. In other words, standard meta-analytic models, such as those outlined in Borenstein, Hedges, Higgins, and Rothstein (2009) and Hedges and Olkin (1985), can be formulated as structural equation models and analyzed with SEM packages. SEM-based meta-analysis has been extended to multivariate meta-analysis (Cheung, 2013), three-level meta-analysis (Cheung, 2014b), and network meta-analysis (Tu & Wu, 2017). There are several advantages to integrating meta-analysis into the SEM framework. First, many existing SEM functions for handling missing data and non-normal data with robust statistics can be applied directly to meta-analytic data. Second, researchers may build structural equation models on the effect sizes with study characteristics as covariates (e.g., Shadish, 1992; Shadish & Sweeney, 1991). For example, Cheung (2015a) demonstrated how effect sizes could be used as mediators and moderators in predicting other effect sizes under SEM-based meta-analysis.

Conclusion

MASEM is a useful technique to synthesize correlation matrices to fit and compare structural equation models. It helps to accumulate research findings to test theoretical models. MASEM has also been extended to analyze big datasets. Researchers are still actively developing new methods to improve the methodology in MASEM. To summarize, the combination of meta-analysis with SEM opens up many new research avenues.