Article
Sampling Strategies for Quantitative and Qualitative Business Research
Vivien Lee and Richard N. Landers
Sampling refers to the process used to identify and select cases for analysis (i.e., a sample) with the goal of drawing meaningful research conclusions. Sampling is integral to the overall research process, as it has substantial implications for the quality of research findings. Inappropriate sampling techniques can lead to problems of interpretation, such as drawing invalid conclusions about a population. Whereas sampling in quantitative research focuses on maximizing the statistical representativeness of a population by a chosen sample, sampling in qualitative research generally focuses on the complete representation of a phenomenon of interest. Because of this core difference in purpose, many sampling considerations differ between qualitative and quantitative approaches despite a shared general purpose: careful selection of cases to maximize the validity of conclusions.
Achieving generalizability, the extent to which observed effects from one study can be used to predict the same and similar effects in different contexts, drives most quantitative research. Obtaining a representative sample, one whose characteristics reflect the targeted population, is critical to making accurate statistical inferences, which are core to such research. Such samples are best acquired through probability sampling, a procedure in which all members of the target population have a known, nonzero chance of being selected. However, probability sampling techniques are uncommon in modern quantitative research because of practical constraints; non-probability sampling, such as convenience sampling, is now normative. When sampling this way, special attention should be given to the statistical implications of issues such as range restriction and omitted variable bias. In either case, careful planning is required to estimate an appropriate sample size before data collection begins.
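To make that planning step concrete, here is a minimal sketch of an a priori sample size estimate for a two-group comparison in Python; the effect size, alpha, and power values are illustrative assumptions rather than recommendations.

```python
# A minimal a priori power analysis sketch using statsmodels.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
# Per-group n needed to detect a medium effect (d = 0.5) with alpha = .05
# and 80% power in a two-tailed independent-samples t-test.
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                   alternative="two-sided")
print(f"Required sample size per group: {n_per_group:.0f}")  # about 64
```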
In contrast to generalizability, transferability, the degree to which study findings can be applied to other contexts, is the goal of most qualitative research. This approach is more concerned with providing readers enough information to judge applicability than with making broad, generalizable claims. As in quantitative research, choosing a population and sample carefully is critical for qualitative research, to help readers determine the likelihood of transfer, yet representativeness is not as crucial. Sample size determination in qualitative research differs drastically from that of quantitative research: it should occur during data collection, as an ongoing process in search of saturation, which focuses on achieving theoretical completeness rather than maximizing the quality of statistical inference.
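Purely as an illustration of saturation-driven stopping, the sketch below tracks whether recent interviews have contributed any new codes; the stopping window and the interview codes are hypothetical assumptions, not a substitute for researcher judgment.

```python
# Illustrative sketch: stop sampling once several consecutive interviews
# yield no new codes. Data and stopping rule are hypothetical.
def reached_saturation(interviews, window=3):
    """Return True when the last `window` interviews added no new codes."""
    seen = set()
    new_counts = []
    for codes in interviews:           # each interview is a set of codes
        fresh = codes - seen
        new_counts.append(len(fresh))
        seen |= fresh
    return len(new_counts) >= window and all(n == 0 for n in new_counts[-window:])

interviews = [{"trust", "pay"}, {"pay", "autonomy"}, {"trust"}, {"pay"}, {"autonomy"}]
print(reached_saturation(interviews))  # True: the last three added nothing new
```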
Theoretically speaking, although quantitative and qualitative research have distinct statistical underpinnings that should drive different sampling requirements, in practice both rely heavily on non-probability samples, and the implications of non-probability sampling are often not well understood. Although non-probability samples do not automatically generate poor-quality data, incomplete consideration of case selection strategy can harm the validity of research conclusions. The nature and number of cases collected must be determined cautiously to respect research goals and the underlying scientific paradigm employed. Understanding the commonalities and differences in sampling between quantitative and qualitative research can help researchers better identify high-quality research designs across paradigms.
Article
Social Network Analysis in Organizations
Jessica R. Methot, Nazifa Zaman, and Hanbo Shim
A social network is a set of actors—that is, any discrete entity in a network, such as a person, team, organization, place, or collective social unit—and the ties connecting them—that is, some type of relationship, exchange, or interaction between actors that serves as a conduit through which resources such as information, trust, goodwill, advice, and support flow. Social network analysis (SNA) is the use of graph-theoretic and matrix algebraic techniques to study the social structure, interactions, and strategic positions of actors in social networks. As a methodological tool, SNA allows scholars to visualize and analyze webs of ties to pinpoint the composition, content, and structure of organizational networks, as well as to identify their origins and dynamics, and then link these features to actors’ attitudes and behaviors. SNA is a valuable and unique lens for management research, and there has been a marked shift toward its use to understand a host of organizational phenomena. To this end, organizational network analysis (ONA) is centered on how employees, groups, and organizations are connected and how these connections provide a quantifiable return on human capital investments. Although criticisms have traditionally been leveled against SNA, the foundations of network science have a rich history, and ONA has evolved into a well-established paradigm and a modern-day trend in management research and practice.
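As a brief illustration of the graph-theoretic techniques mentioned above, the following Python sketch computes two common centrality measures with the networkx library; the five-actor advice network is a hypothetical example.

```python
# Minimal SNA sketch with networkx: actors are nodes, advice ties are edges.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("Ana", "Ben"), ("Ana", "Chloe"), ("Ben", "Chloe"),
    ("Chloe", "Dev"), ("Dev", "Ella"),   # Chloe and Dev bridge two clusters
])

# Degree centrality: how many direct ties an actor has (normalized).
print(nx.degree_centrality(G))
# Betweenness centrality: how often an actor lies on shortest paths,
# a common proxy for brokerage of information flow.
print(nx.betweenness_centrality(G))
```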
Article
Sources of Knowledge in Firms
Hugo Pinto and Manuel Fernández-Esquinas
This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Business and Management.
To obtain competitive advantages, firms must use knowledge as the main element of their capacity for innovation and management. Innovation is a complex and collective process, shaped by different contexts, socioeconomic aspects, and the specificities of firms, all of which create nuanced management and policy implications. Sources of knowledge are varied, as each firm interacts with multiple types of actors in pursuing its mission: partners and strategic allies, suppliers, customers, competitors, specialized organizations such as knowledge-intensive business services, universities, technology centers, public research organizations, innovation intermediaries, and public administration bodies.
Different kinds of knowledge, both tacit and codified, are relevant for firms, and knowledge needs to be translated into a capacity to act. Knowledge generation and absorption can be understood as two sides of the same coin, so it is necessary to take into account the factors that shape both facets and the relationship between the production, transfer, and valorization of knowledge. Influential factors concerning knowledge characteristics relate to tacitness and to the existing knowledge base. Contextual factors, such as the economic sector, technological intensity, the local buzz, and insertion into global value chains, are essential environmental enablers for generating and absorbing knowledge. Finally, the internal characteristics of the firm are of crucial relevance, namely its innovation culture, leadership, size, and internal R&D capacities. These factors reinforce the dynamic capacities of the firm and shape its decision to engage in open innovation strategies or to favor strategies that protect and codify knowledge, such as industrial property rights.
Article
Structural Equation Modelling
Wayne Crawford and Esther Lamarre Jean
Structural equation modelling (SEM) is a family of models in which multivariate techniques are used to examine complex relationships among variables simultaneously. The goal of SEM is to evaluate the extent to which proposed relationships reflect the actual pattern of relationships present in the data. SEM users employ specialized software to develop a model, which then generates a model-implied covariance matrix. The model-implied covariance matrix is based on the user-defined theoretical model and represents the user’s beliefs about relationships among the variables. Guided by the user’s predefined constraints, SEM software employs a combination of factor analysis and regression to estimate a set of parameters (often through maximum likelihood [ML] estimation) that produce the model-implied covariance matrix. Structural equation modelling capitalizes on the benefits of both factor analysis and path analytic techniques to address complex research questions. It consists of six basic steps: model specification; identification; estimation; evaluation of model fit; model modification; and reporting of results.
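As a rough illustration of this workflow, the sketch below specifies and estimates a small model with the Python library semopy, which accepts lavaan-style syntax; the latent factors, indicator names, and data file are hypothetical assumptions (in practice, lavaan in R is a more common choice).

```python
# Minimal SEM sketch with semopy; model and data are hypothetical.
import pandas as pd
from semopy import Model

# Measurement model (=~) plus one structural path (~), lavaan-style syntax.
desc = """
satisfaction =~ s1 + s2 + s3
commitment =~ c1 + c2 + c3
commitment ~ satisfaction
"""

data = pd.read_csv("survey_responses.csv")  # hypothetical indicator data
model = Model(desc)       # steps 1-2: specification and identification
model.fit(data)           # step 3: estimation (ML-type objective by default)
print(model.inspect())    # parameter estimates for steps 4-6
```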
Conducting SEM analyses requires certain data considerations, as data-related problems are often the reason for software failures. These considerations include sample size, screening for multivariate normality, examining outliers and multicollinearity, and assessing missing data. Furthermore, three notable issues SEM users might encounter are common method variance, subjectivity and transparency, and alternative model testing. First, analyzing common method variance requires recognizing three types of variance: common variance (variance shared with the factor); specific variance (reliable variance not explained by common factors); and error variance (unreliable and inexplicable variation in the variable). Second, SEM still lacks clear guidelines for the modelling process, which threatens replicability. Decisions are often subjective and based on the researcher’s preferences and knowledge of what is most appropriate for achieving the best overall model. Finally, reporting alternatives to the hypothesized model is another issue SEM users should consider. When testing a hypothesized model, SEM users should consider alternative (nested) models derived from constraining or eliminating one or more paths in the hypothesized model. Alternative models offer several benefits; however, they should be driven and supported by existing theory. It is important for the researcher to clearly report and provide findings on the alternative model(s) tested.
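Two of these data checks, missing data and multicollinearity, can be illustrated briefly in Python; the data file and variable names below are hypothetical assumptions.

```python
# Minimal data-screening sketch with pandas and statsmodels.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("survey_responses.csv")   # hypothetical data file

# Missing data: proportion of missing values per variable.
print(df.isna().mean())

# Multicollinearity: variance inflation factors for a set of indicators
# (a constant is added so VIFs are computed against an intercept).
X = sm.add_constant(df[["s1", "s2", "s3"]].dropna())
for i, col in enumerate(X.columns):
    if col != "const":
        print(col, variance_inflation_factor(X.values, i))
```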
Users of SEM often experience common model-specific issues. Heywood cases, nonidentification, and nonpositive definite matrices are among the most common. Heywood cases arise when negative variances or squared multiple correlations greater than 1.0 appear in the results; the researcher can resolve this by constraining the offending residual to a small, plausible value. Nonpositive definite matrices result from linear dependencies and/or correlations greater than 1.0. To address this, researchers can attempt to ensure all indicator variables are independent, inspect output manually for negative residual variances, evaluate whether the sample size is appropriate, or re-specify the proposed model. When used properly, structural equation modelling is a powerful tool that allows for the simultaneous testing of complex models.
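A nonpositive definite input matrix can be detected before estimation, since Cholesky decomposition succeeds only for positive definite matrices; the sketch below illustrates this with numpy, using a deliberately degenerate matrix.

```python
# Minimal positive-definiteness check via Cholesky decomposition.
import numpy as np

def is_positive_definite(cov):
    try:
        np.linalg.cholesky(cov)
        return True
    except np.linalg.LinAlgError:
        return False

# Hypothetical case: the second matrix contains a correlation of 1.0
# (a linear dependency), making it nonpositive definite.
ok = np.array([[1.0, 0.3], [0.3, 1.0]])
bad = np.array([[1.0, 1.0], [1.0, 1.0]])
print(is_positive_definite(ok), is_positive_definite(bad))  # True False
```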
Article
Survey Design
Don H. Kluemper
The use of surveys is prevalent in academic research in general, and particularly in business and management. As an example, self-report surveys alone are the most common data source in the social sciences. Survey design, however, involves a wide range of methodological decisions, each with its own strengths, limitations, and trade-offs. There is a broad set of issues associated with survey design, ranging from strategic concerns to nuanced methodological and design alternatives. Further, decision points associated with survey design involve a series of trade-offs, as the strengths of a particular approach may come with inherent weaknesses. Surveys are couched within a broader scientific research process. First and foremost, the problem being studied should have sufficient impact and be driven by a strong theoretical rationale; the study should employ rigorous research methods and a design appropriate to test the theory, and should use appropriate analyses and best practices, such that there is confidence in the scientific rigor of the study and thus in its results. Best practice requires balancing a range of methodological concerns and trade-offs that relate to the development of robust survey designs, including making causal inferences; internal, external, and ecological validity; common method variance; choice of data sources; multilevel issues; measure selection, modification, and development; appropriate use of control variables; conducting power analysis; and methods of administration. There are salient concerns regarding the administration of surveys, including increasing response rates as well as minimizing responses that are careless and/or reflect social desirability. Finally, decision points arise after surveys are administered, including missing data, organization of research materials, questionable research practices, and statistical considerations. A comprehensive understanding of this array of interrelated survey design issues associated with theory, study design, implementation, and analysis enhances scientific rigor.
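One of the careless-responding screens mentioned above, flagging "straight-lining" (identical answers across a block of items), can be illustrated in a few lines of Python; the data file and item names are hypothetical assumptions.

```python
# Minimal careless-responding screen: flag zero-variance answer patterns
# across a block of Likert-type items.
import pandas as pd

df = pd.read_csv("survey_responses.csv")   # hypothetical data file
items = [f"q{i}" for i in range(1, 11)]    # a hypothetical 10-item block

df["straight_lined"] = df[items].std(axis=1) == 0
print(df["straight_lined"].mean())         # share of flagged respondents
```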
Article
Testing and Interpreting Interaction Effects
Jeremy F. Dawson
Researchers often want to test whether the association between two or more variables depends on the value of another variable. To do this, they usually test interactions, often in the form of moderated multiple regression (MMR) or its extensions. If there is an interaction effect, it means the relationship being tested differs as the other variable (the moderator) changes. While methods for determining whether an interaction exists are well established, less consensus exists about how to understand, or probe, these interactions. Many of the common methods (e.g., simple slope testing, regions of significance, use of Gardner et al.’s typology) rely to some extent on post hoc significance testing, which is often unhelpful and potentially misleading, sometimes resulting in contradictory findings. A recommended procedure for probing interaction effects involves a systematic description of the nature and size of the interaction, considering the main effects (estimated after centering variables) as well as the size and direction of the interaction effect itself. Interaction effects can also be plotted more usefully by including a greater range of moderator values and by showing confidence bands.
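As an illustration of centering and simple slope probing, the sketch below fits an MMR model with statsmodels and evaluates the slope of the focal predictor at low and high moderator values; the variable names and data file are hypothetical assumptions.

```python
# Minimal MMR sketch: mean-center predictors, fit y ~ x*m, probe simple slopes.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("study_data.csv")               # hypothetical data file
for v in ["x", "m"]:                             # mean-center predictors
    df[v + "_c"] = df[v] - df[v].mean()

fit = smf.ols("y ~ x_c * m_c", data=df).fit()    # x_c:m_c is the interaction
print(fit.summary())

# Simple slopes of x at low (-1 SD) and high (+1 SD) moderator values.
b = fit.params
for sd in (-1, 1):
    m_val = sd * df["m_c"].std()
    print(f"slope of x at m {sd:+d} SD:", b["x_c"] + b["x_c:m_c"] * m_val)
```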
Although two-way linear interactions are the most common in the literature, three-way interactions and nonlinear interactions are also often found. Again, methods for testing these interactions are well known, but procedures for understanding these more complex effects have received less attention, in part because of the greater complexity of what such interpretation involves. For three-way linear interactions, the slope difference test has become a standard means of interpreting findings and linking them with theory; however, it is also prone to some of the shortcomings described for post hoc probing of two-way effects. Descriptions of three-way interactions can be improved by applying some of the same principles used for two-way interactions, as well as by appropriate use of the slope difference test. For nonlinear effects, the complexity is greater still, and a different approach is needed to explain these effects more helpfully, focusing on describing the changing shape of the effect across values of the moderator(s). Some of these principles can also be carried forward into more complex models, such as multilevel modeling, structural equation modeling, and models that involve both mediation and moderation.
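The slope difference test can be expressed as a test of a linear combination of regression coefficients: in a model with all two- and three-way product terms, the slope of x at moderator values (m, w) is b_x + b_xm*m + b_xw*w + b_xmw*m*w, so the difference between two such slopes is itself a linear combination. The sketch below illustrates this with statsmodels; the variable names and data file are hypothetical assumptions.

```python
# Minimal slope difference test sketch for a three-way interaction.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("study_data.csv")               # hypothetical data file
fit = smf.ols("y ~ x * m * w", data=df).fit()    # all two- and three-way terms

m_hi = df["m"].mean() + df["m"].std()
w_hi = df["w"].mean() + df["w"].std()
w_lo = df["w"].mean() - df["w"].std()

# Contrast for: slope of x at (m_hi, w_hi) minus slope of x at (m_hi, w_lo).
# The b_x and b_xm terms cancel, leaving the x:w and x:m:w coefficients.
contrast = np.zeros(len(fit.params))
names = list(fit.params.index)
contrast[names.index("x:w")] = w_hi - w_lo
contrast[names.index("x:m:w")] = m_hi * (w_hi - w_lo)
print(fit.t_test(contrast))                      # estimate, SE, t, p
```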