

Innovation Indicators  

Fred Gault and Luc Soete

Innovation indicators support research on innovation and the development of innovation policy. Once a policy has been implemented, innovation indicators can be used to monitor and evaluate the result, leading to policy learning. Producing innovation indicators requires an understanding of what innovation is. There are many definitions in the literature, but innovation indicators are based on statistical measurement guided by international standard definitions of innovation and of innovation activities. Policymakers are not just interested in the occurrence of innovation but in the outcome. Does it result in more jobs and economic growth? Is it expected to reduce carbon emissions, to advance renewable energy production and energy storage? How does innovation support the Sustainable Development Goals? From the innovation indicator perspective, innovation can be identified in surveys, but that only shows that there is, or there is not, innovation. To meet specific policy needs, a restriction can be imposed on the measurement of innovation. The population of innovators can be divided into those meeting the restriction, such as environmental improvements, and those that do not. In the case of innovation indicators that show a change over time, such as “inclusive innovation,” there may have to be a baseline measurement followed by a later measurement to see if inclusiveness is present, or growing, or not. This may involve social as well as institutional surveys. Once the innovation indicators are produced, they can be made available to potential users through databases, indexes, and scoreboards. Not all of these are based on the statistical measurement of innovation. Some use proxies, such as the allocation of financial and human resources to research and development, or the use of patents and academic publications. 
The importance of the databases, indexes, and scoreboards is that the findings may be used for the ranking of “innovation” in participating countries, influencing their behavior. While innovation indicators have always been influential, they have the potential to become more so. For decades, innovation indicators have focused on innovation in the business sector, while there have been experiments on measuring innovation in the public (general government sector and public institutions) and the household sectors. Historically, there has been no standard definition of innovation applicable in all sectors of the economy (business, public, household, and non-profit organizations serving households sectors). This changed with the Oslo Manual in 2018, which published a general definition of innovation applicable in all economic sectors. Applying a general definition of innovation has implications for innovation indicators and for the decisions that they influence. If the general definition is applied to the business sector, it includes product innovations that are made available to potential users rather than being introduced on the market. The product innovation can be made available at zero price, which has influence on innovation indicators that are used to describe the digital transformation of the economy. The general definition of innovation, the digital transformation of the economy, and the growing importance of zero price products influence innovation indicators.


Institutional Logics  

Heather A. Haveman and Gillian Gualtieri

Research on institutional logics examines the systems of cultural elements (values, beliefs, and normative expectations) by which people, groups, and organizations make sense of and evaluate their everyday activities, and organize those activities in time and space. Although there were scattered mentions of this concept before 1990, this literature really began with the 1991 publication of a theory piece by Roger Friedland and Robert Alford. Since that time, it has become a large and diverse area of organizational research. Several books and thousands of papers and book chapters have been published on this topic, addressing institutional logics in sites as different as climate change proceedings of the United Nations, local banks in the United States, and business groups in Taiwan. Several intellectual precursors to institutional logics provide a detailed explanation of the concept and the theory surrounding it. These literatures developed over time within the broader framework of theory and empirical work in sociology, political science, and anthropology. Papers published in ten major sociology and management journals in the United States and Europe (between 1990 and 2015) provide analysis and help to identify trends in theoretical development and empirical findings. 
Evaluating these trends suggests three gentle corrections and potentially useful extensions to the literature that can help guide future research: (1) limiting the definition of institutional logic to cultural-cognitive phenomena, rather than including material phenomena; (2) recognizing both “cold” (purely rational) cognition and “hot” (emotion-laden) cognition; and (3) developing and testing a theory, or multiple related theories, in the strict sense: a logically interconnected set of propositions concerning a delimited set of social phenomena, derived from assumptions about essential facts (axioms), that details causal mechanisms and yields empirically testable (falsifiable) hypotheses. This third step requires being more consistent about how concepts are used in theoretical statements, assessing the reliability and validity of empirical measures, and conducting meta-analyses of the many published inductive studies in order to develop deductive theories.


Interrater Agreement and Interrater Reliability: Implications for Multilevel Research  

Jenell L. S. Wittmer and James M. LeBreton

Statistics used to index interrater similarity are prevalent in many areas of the social sciences, with multilevel research being one of the most common domains for estimating interrater similarity. Multilevel research spans multiple hierarchical levels, such as individuals, teams, departments, and the organization. There are three main research questions that multilevel researchers answer using indices of interrater agreement and interrater reliability: (a) Does the nesting of lower-level units (e.g., employees) within higher-level units (e.g., work teams) result in non-independence of residuals, violating an assumption of the general linear model?; (b) Is there sufficient agreement between scores on measures collected from lower-level units (e.g., employees’ perceptions of customer service climate) to justify aggregating data to the higher level (e.g., team-level climate)?; and (c) Following data aggregation, how effective are the higher-level unit means at distinguishing between those higher-level units (e.g., how reliably do team climate scores distinguish between the teams)? Interrater agreement and interrater reliability refer to the extent to which lower-level data nested or clustered within a higher-level unit are similar to one another. While closely related, interrater agreement and reliability differ from one another in how similarity is defined. Interrater reliability is the relative consistency in lower-level data. For example, to what degree do the scores assigned by raters tend to correlate with one another? Alternatively, interrater agreement is the consensus of the lower-level data points. For example, estimates of interrater agreement are used to determine the extent to which ratings made by judges/observers could be considered interchangeable or equivalent in terms of their values. 
Thus, while interrater agreement and reliability both estimate the similarity of ratings by judges/observers, they define interrater similarity in slightly different ways, and these statistics are suited to address different types of research questions. The first research question that these statistics address, the issue of non-independence, is typically examined using an intraclass correlation statistic that is a function of both interrater reliability and agreement. However, in the context of non-independence, the intraclass correlation is most often interpreted as an effect size. The second multilevel research question, concerning adequate agreement to aggregate lower-level data to a higher level, requires a measure of interrater agreement, because the researcher is looking for consensus among raters. Finally, the third multilevel research question, concerning the reliability of higher-level means, not only requires a different variation of the intraclass correlation, but is also a function of both interrater reliability and agreement. Multilevel research requires researchers to appropriately apply interrater agreement and/or reliability statistics to their data, as well as follow best practices for calculating and interpreting these statistics.
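As a minimal sketch of the two families of indices described above, the following example computes ICC(1), an intraclass correlation used to gauge non-independence, and an r_wg within-group agreement index, on hypothetical 5-point climate ratings nested in teams. The data, the balanced team size, and the uniform "no agreement" null are all assumptions for illustration.

```python
from statistics import mean, variance

# Hypothetical 5-point climate ratings from employees nested in three teams
teams = {
    "A": [4, 5, 4, 5],
    "B": [2, 2, 3, 2],
    "C": [3, 4, 3, 4],
}
k = 4  # raters per team (balanced, for simplicity)

grand = mean(v for vals in teams.values() for v in vals)

# One-way ANOVA mean squares feeding the intraclass correlation ICC(1)
ms_between = k * sum((mean(v) - grand) ** 2 for v in teams.values()) / (len(teams) - 1)
ms_within = (sum(sum((x - mean(v)) ** 2 for x in v) for v in teams.values())
             / (len(teams) * (k - 1)))
icc1 = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# r_wg agreement for one team: 1 - observed variance / "no agreement" variance
sigma_eu = (5 ** 2 - 1) / 12  # uniform-null variance for a 5-point scale
rwg_team_a = 1 - variance(teams["A"]) / sigma_eu
```

Here a large ICC(1) signals that team membership explains much of the rating variance (non-independence), while a high r_wg within a team supports aggregating its members' ratings to a team-level score.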


Intersectionality Theory and Practice  

Doyin Atewologun

Intersectionality is a critical framework that provides us with the mindset and language for examining interconnections and interdependencies between social categories and systems. Intersectionality is relevant for researchers and for practitioners because it enhances analytical sophistication and offers theoretical explanations of the ways in which heterogeneous members of specific groups (such as women) might experience the workplace differently depending on their ethnicity, sexual orientation, and/or class and other social locations. Sensitivity to such differences enhances insight into issues of social justice and inequality in organizations and other institutions, thus maximizing the chance of social change. The concept of intersectional locations emerged from the racialized experiences of minority ethnic women in the United States. Intersectional thinking has gained increased prominence in business and management studies, particularly in critical organization studies. A predominant focus in this field is on individual subjectivities at intersectional locations (such as examining the occupational identities of minority ethnic women). This emphasis on individuals’ experiences and within-group differences has been described variously as “content specialization” or an “intracategorical approach.” An alternate focus in business and management studies is on highlighting systematic dynamics of power. This encompasses a focus on “systemic intersectionality” and an “intercategorical approach.” Here, scholars examine multiple between-group differences, charting shifting configurations of inequality along various dimensions. As a critical theory, intersectionality conceptualizes knowledge as situated, contextual, relational, and reflective of political and economic power. 
Intersectionality tends to be associated with qualitative research methods due to the central role of giving voice, elicited through focus groups, narrative interviews, action research, and observations. Intersectionality is also utilized as a methodological tool for conducting qualitative research, such as by researchers adopting an intersectional reflexivity mindset. Intersectionality is also increasingly associated with quantitative and statistical methods, which contribute to intersectionality by helping us understand and interpret the individual, combined (additive or multiplicative) effects of various categories (privileged and disadvantaged) in a given context. Future considerations for intersectionality theory and practice include managing its broad applicability while attending to its sociopolitical and emancipatory aims, and theoretically advancing understanding of the simultaneous forces of privilege and penalty in the workplace.


Limited Dependent Variables in Management Research  

Harry Bowen

A limited dependent variable (LDV) is an outcome or response variable whose value is either restricted to a small number of (usually discrete) values or limited in its range of values. The first type of LDV is commonly called a categorical variable; its value indicates the group or category to which an observation belongs (e.g., male or female). Such categories often represent different choice outcomes, where interest centers on modeling the probability each outcome is selected. An LDV of the second type arises when observations are drawn about a variable whose distribution is truncated, or when some values of a variable are censored, implying that some values are wholly or partially unobserved. Methods such as linear regression are inadequate for obtaining statistically valid inferences in models that involve an LDV. Instead, different methods are needed that can account for the unique statistical characteristics of a given LDV.
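To make concrete why a categorical LDV calls for something other than linear regression, the sketch below fits a logistic regression to a hypothetical binary outcome by plain gradient ascent on the Bernoulli log-likelihood. The data and the hand-rolled estimator are illustrative assumptions; real analyses would use a statistical package's maximum-likelihood routines.

```python
import math

# Hypothetical data: a continuous predictor and a binary (categorical) outcome
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]   # e.g., firm size (arbitrary units)
y = [0, 0, 1, 0, 1, 1]               # e.g., adopted a practice (1) or not (0)

b0, b1 = 0.0, 0.0
lr = 0.05
for _ in range(10000):               # gradient ascent on the log-likelihood
    g0 = g1 = 0.0
    for xi, yi in zip(x, y):
        p = 1 / (1 + math.exp(-(b0 + b1 * xi)))
        g0 += yi - p                 # score for the intercept
        g1 += (yi - p) * xi          # score for the slope
    b0 += lr * g0
    b1 += lr * g1

def predict(xi):
    """Modeled probability that the outcome falls in category 1."""
    return 1 / (1 + math.exp(-(b0 + b1 * xi)))
```

Unlike OLS, the fitted values are probabilities bounded in (0, 1), which is exactly the property an LDV of the categorical type requires.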


Longitudinal Designs for Organizational Research  

James M. Diefendorff, Faith Lee, and Daniel Hynes

Longitudinal research involves collecting data from the same entities on two or more occasions. Almost all organizational theories outline a longitudinal process in which one or more variables cause a subsequent change in other variables. However, the majority of empirical studies rely on research designs that do not allow for the proper assessment of change over time or the isolation of causal effects. Longitudinal research begins with longitudinal theorizing. With this in mind, a variety of time-based theoretical concepts are helpful for conceptualizing how a variable is expected to change. This includes when variables are expected to change, the form or shape of the change, and how big the change is expected to be. To aid in the development of causal hypotheses, researchers should consider the history of the independent and dependent variables (i.e., how they may have been changing before the causal effect is examined), the causal lag between the variables (i.e., how long it takes for the dependent variable to start changing as a result of the independent variable), as well as the permanence, magnitude, and rate of the hypothesized change in the dependent variable. After hypotheses have been formulated, researchers can choose among various research designs, including experimental, concurrent or lagged correlational, or time series. Experimental designs are best suited for inferring causality, while time series designs are best suited for capturing the specific timing and form of change. Lagged correlation designs are useful for examining the direction and magnitude of change in a variable between measurements. Concurrent correlational designs are the weakest for inferring change or causality. Theory should dictate the choice of design, and designs can be modified and/or combined as needed to address the research question(s) at hand. 
Next, researchers should pay attention to their sample selection, the operationalization of constructs, and the frequency and timing of measures. The selected sample must be expected to experience the theorized change, and measures should be gathered as often as is necessary to represent the theorized change process (i.e., when the change occurs, how long it takes to unfold, and how long it lasts). Experimental manipulations should be strong enough to produce theorized effects and measured variables should be sensitive enough to capture meaningful differences between individuals and also within individuals over time. Finally, the analytic approach should be chosen based on the research design and hypotheses. Analyses can range from t-tests and analysis of variance for experimental designs, to correlation and regression for lagged and concurrent designs, to a variety of advanced analyses for time series designs, including latent growth curve modeling, coupled latent growth curve modeling, cross-lagged modeling, and latent change score modeling. A point worth noting is that researchers sometimes label research designs by the statistical analysis commonly paired with the design. However, data generated from a particular design can often be analyzed using a variety of statistical procedures, so it is important to clearly distinguish the research design from the analytic approach.
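As a sketch of one design-analysis pairing named above, the following simulated example estimates a lagged effect: the Time-2 outcome is regressed on the Time-1 predictor while controlling for the Time-1 outcome, so the cross-lagged coefficient reflects change net of where the outcome already stood. All data and effect sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)                                   # predictor at Time 1
y1 = rng.normal(size=n)                                   # outcome at Time 1
y2 = 0.5 * y1 + 0.3 * x1 + rng.normal(scale=0.8, size=n)  # outcome at Time 2

# Lagged regression: y2 on y1 (stability) and x1 (cross-lagged effect)
X = np.column_stack([np.ones(n), y1, x1])
coefs, *_ = np.linalg.lstsq(X, y2, rcond=None)
intercept, stability, cross_lag = coefs
```

The same two-wave data could instead be fed to cross-lagged panel or latent change score models; the design and the analysis are chosen separately, as the abstract emphasizes.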


Mediation: Causal Mechanisms in Business and Management  

Patrick J. Rosopa, Phoebe Xoxakos, and Coleton King

Mediation refers to causation. Tests for mediation are common in business, management, and related fields. In the simplest mediation model, a researcher asserts that a treatment causes a mediator and that the mediator causes an outcome. For example, a practitioner might examine whether diversity training increases awareness of stereotypes, which, in turn, improves inclusive climate perceptions. Because mediation inferences are causal inferences, it is important to demonstrate that the cause actually precedes the effect, the cause and effect covary, and rival explanations for the causal effect can be ruled out. Although various experimental designs for testing mediation hypotheses are available, single randomized experiments and two randomized experiments provide the strongest evidence for inferring mediation compared with nonexperimental designs, where selection bias and a multitude of confounding variables can make causal interpretations difficult. In addition to experimental designs, traditional statistical approaches for testing mediation include causal steps, difference in coefficients, and product of coefficients. Of the traditional approaches, the causal steps method tends to have low statistical power; the product of coefficients method tends to provide adequate power. Bootstrapping can improve the performance of these tests for mediation. The general causal mediation framework offers a modern approach to testing for causal mechanisms. The general causal mediation framework is flexible. The treatment, mediator, and outcome can be categorical or continuous. The general framework not only incorporates experimental designs (e.g., single randomized experiments, two randomized experiments) but also allows for a variety of statistical models and complex functional forms.
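The product-of-coefficients test with bootstrapping, mentioned above as the better-powered traditional approach, can be sketched as follows on simulated data. The effect sizes, sample size, and number of resamples are assumptions; note the deliberate simplification flagged in the comments.

```python
import random

random.seed(1)
n = 100
x = [random.gauss(0, 1) for _ in range(n)]           # treatment
m = [0.5 * xi + random.gauss(0, 1) for xi in x]      # mediator (true a = 0.5)
y = [0.6 * mi + random.gauss(0, 1) for mi in m]      # outcome (true b = 0.6)

def slope(u, v):
    # OLS simple-regression slope of v on u
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    return (sum((a - mu) * (b - mv) for a, b in zip(u, v))
            / sum((a - mu) ** 2 for a in u))

# Percentile bootstrap of the indirect effect a*b. Simplification: because
# this toy outcome has no direct x effect, the simple slope of y on m
# recovers b; a real analysis would regress y on both m and x.
boot = []
for _ in range(2000):
    idx = [random.randrange(n) for _ in range(n)]
    xs = [x[i] for i in idx]
    ms = [m[i] for i in idx]
    ys = [y[i] for i in idx]
    boot.append(slope(xs, ms) * slope(ms, ys))

boot.sort()
ci_low, ci_high = boot[50], boot[1949]   # 95% percentile interval
```

If the bootstrap interval excludes zero, the indirect effect is deemed significant; the interval's asymmetry is exactly why bootstrapping outperforms normal-theory tests of a*b.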


Meta-Analysis as a Business Research Method  

Alexander D. Stajkovic and Kayla S. Stajkovic

Mounting complexity in the world, coupled with new discoveries and more journal space to publish the findings, has spurred research on a host of topics in just about every discipline of social science. Research forays have also generated unprecedented disagreements. For many topics, empirical findings exist but results are mixed: some show positive relationships, some show negative relationships, and some show no statistically significant relationship. How, then, do researchers go about discovering systematic variation across studies to understand and predict forces that impinge on human functioning? Historically, qualitative literature reviews were performed in conjunction with the counting of statistically significant effects. This approach fails to consider effect magnitudes and sample sizes, and thus its conclusions can be misleading. A more precise way to reach conclusions from research literature is via meta-analysis, defined as a set of statistical procedures that enable researchers to derive quantitative estimates of average and moderator effects across available studies. Since its introduction in 1976, meta-analysis has developed into an authoritative source of information for ascertaining the generalizability of research findings. Thus, it is perhaps not surprising that meta-analyses in the field of management garner, on average, three times as many citations as single studies. A framework for conducting meta-analysis explains why it should be used, outlines what it has yielded to society, and introduces the reader to a fundamental conception and a misconception. More specifics follow about data collection and study selection criteria and implications of publication bias. How to convert estimates from individual studies to a common scale to be able to average them, what to consider in choosing a meta-analytic method, how to compare the procedures, and what information to include when reporting results are presented next. 
The article concludes with a discussion of nuances and limitations, and suggestions for future research and practice. Science builds knowledge cumulatively from numerous studies, which, more often than not, differ in their characteristics (e.g., research design, participants, setting, sample size). Some findings are in concert and some are not. Through its quantitative foundations, conjoint with theory-guiding hypotheses, meta-analysis offers statistical means of analyzing disparate research designs and conflicting results and discovering consistencies in a seemingly inconsistent literature. Research conclusions reached by a theory-driven, well-conducted meta-analysis are almost certainly more accurate and reliable than those from any single study.
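The core computation alluded to above, converting estimates to a common scale and averaging them, can be sketched for correlations as follows. The five studies are hypothetical, and a simple fixed-effect (inverse-variance) weighting is assumed; random-effects models would add a between-study variance component.

```python
import math

# Hypothetical (correlation, sample size) pairs from five primary studies
studies = [(0.30, 50), (0.10, 200), (0.45, 80), (-0.05, 120), (0.25, 150)]

# Fisher z-transform each r, weight by inverse variance (n - 3), average,
# then back-transform the weighted mean to the correlation metric
num = sum((n - 3) * 0.5 * math.log((1 + r) / (1 - r)) for r, n in studies)
den = sum(n - 3 for _, n in studies)
z_bar = num / den
r_bar = math.tanh(z_bar)   # inverse of the Fisher transform
```

The large-n studies dominate the average, which is precisely how meta-analysis avoids the equal-vote fallacy of counting significant versus nonsignificant results.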


Meta-Analytic Structural Equation Modeling  

Mike W.-L. Cheung

Meta-analysis and structural equation modeling (SEM) are two popular statistical models in the social, behavioral, and management sciences. Meta-analysis summarizes research findings to provide an estimate of the average effect and its heterogeneity. When there is moderate to high heterogeneity, moderators such as study characteristics may be used to explain the heterogeneity in the data. On the other hand, SEM includes several special cases, including the general linear model, path model, and confirmatory factor analytic model. SEM allows researchers to test hypothetical models with empirical data. Meta-analytic structural equation modeling (MASEM) is a statistical approach combining the advantages of both meta-analysis and SEM for fitting structural equation models on a pool of correlation matrices. There are usually two stages in the analyses. In the first stage of analysis, a pool of correlation matrices is combined to form an average correlation matrix. In the second stage of analysis, proposed structural equation models are tested against the average correlation matrix. MASEM enables researchers to synthesize research findings from primary studies that used SEM as the research tool. There are several popular approaches to conducting MASEM, including the univariate-r, generalized least squares, two-stage SEM (TSSEM), and one-stage MASEM (OSMASEM). MASEM helps to answer the following key research questions: (a) Are the correlation matrices homogeneous? (b) Do the proposed models fit the data? (c) Are there moderators that can be used to explain the heterogeneity of the correlation matrices? The MASEM framework has also been expanded to analyze large datasets or big data with or without the raw data.
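The first MASEM stage can be sketched as pooling correlation matrices into a weighted average; the version below uses a simple univariate-r, sample-size-weighted pooling (TSSEM would instead use multivariate weighting and test homogeneity). The three studies and their matrices are hypothetical.

```python
# Hypothetical stage-1 pooling: (sample size, 3x3 correlation matrix
# among variables A, B, C) from three primary studies
studies = [
    (100, [[1.0, 0.30, 0.20], [0.30, 1.0, 0.40], [0.20, 0.40, 1.0]]),
    (250, [[1.0, 0.25, 0.15], [0.25, 1.0, 0.35], [0.15, 0.35, 1.0]]),
    (150, [[1.0, 0.40, 0.25], [0.40, 1.0, 0.50], [0.25, 0.50, 1.0]]),
]

total_n = sum(n for n, _ in studies)
pooled = [[sum(n * m[i][j] for n, m in studies) / total_n for j in range(3)]
          for i in range(3)]
# Stage 2 (not shown) would fit the proposed structural model, e.g., a path
# model A -> B -> C, to this pooled matrix with an SEM package.
```

The pooled matrix, not any single study, then serves as the data for the stage-2 structural model, which is what lets MASEM test models no primary study measured completely.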


Missing Data in Research  

Hettie A. Richardson and Marcia J. Simmering

Nonresponse and the missing data that it produces are ubiquitous in survey research, but they are also present in archival and other forms of research. Nonresponse and missing data can be especially problematic in organizational contexts where the risks of providing personal or organizational data might be perceived as (or actually) greater than in public opinion contexts. Moreover, nonresponse and missing data are presenting new challenges with the advent of online and mobile survey technology. When observational units (e.g., individuals, teams, organizations) do not provide some or all of the information sought by a researcher and the reasons for nonresponse are systematically related to the survey topic, nonresponse bias can result and the research community may draw faulty conclusions. Due to concerns about nonresponse bias, scholars have spent several decades seeking to understand why participants choose not to respond to certain items and entire surveys, and how best to avoid nonresponse through actions such as improved study design, the use of incentives, and follow-up initiatives. At the same time, researchers recognize that it is virtually impossible to avoid nonresponse and missing data altogether, and as such, in any given study there will likely be a need to diagnose patterns of missingness and their potential for bias. There will likewise be a need to statistically deal with missing data by employing post hoc mechanisms that maximize the sample available for hypothesis testing and minimize the extent to which missing data obscures the underlying true characteristics of the dataset. In this connection, a large body of programmatic research supports maximum likelihood (ML) and multiple imputation (MI) as useful data replacement procedures; although in some situations, it might be reasonable to use simpler procedures instead. Despite strong support for these statistical techniques, organizational scholars have yet to embrace them. 
Instead, they tend to rely on approaches such as listwise deletion that do not preserve underlying data characteristics, reduce the sample available for statistical analysis, and in some cases, actually exacerbate the potential problems associated with missing data. Although there are certainly remaining questions that can be addressed about missing data techniques, these techniques are also well understood and validated. There remains, however, a strong need for exploration into the nature, causes, and extent of nonresponse in various organizational contexts, such as when using online and mobile surveys. Such research could play a useful role in helping researchers avoid nonresponse in organizational settings, as well as extend insight about how best and when to apply validated missing data techniques.
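To make the contrast concrete, the sketch below applies listwise deletion and then a single regression imputation, the building block of multiple imputation, to a hypothetical dataset with nonresponse on y. The data and the single-draw simplification are assumptions for illustration.

```python
# Hypothetical (x, y) observations; None marks nonresponse on y
data = [
    (1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, None), (5.0, 9.8), (6.0, None),
]

complete = [(x, y) for x, y in data if y is not None]
n_listwise = len(complete)            # listwise deletion shrinks the sample

# Regress y on x in the complete cases, then fill each hole with a prediction
mx = sum(x for x, _ in complete) / n_listwise
my = sum(y for _, y in complete) / n_listwise
b = (sum((x - mx) * (y - my) for x, y in complete)
     / sum((x - mx) ** 2 for x, _ in complete))
a = my - b * mx
imputed = [(x, y if y is not None else a + b * x) for x, y in data]
# Multiple imputation would repeat this several times with a random draw
# added to each prediction, then pool estimates across the filled datasets.
```

Listwise deletion discards a third of this toy sample, whereas imputation retains all six cases; MI's advantage over the single fill shown here is that it also propagates the uncertainty of the imputed values.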


(Multi)Collinearity in Behavioral Sciences Research  

Dev K. Dalal

A statistical challenge many researchers face is collinearity (also known as multicollinearity). Collinearity refers to a situation in which predictors (independent variables, covariates, etc.) are linearly related to each other, and typically are related strongly enough to negatively impact one’s statistical analyses, results, and/or substantive interpretations. Collinearity can impact the results of general linear models (e.g., ordinary least squares regression, structural equation modeling) or generalized linear models (e.g., binary logistic regression, Poisson regression). Collinearity can cause (a) estimation/convergence challenges (particularly with iterative estimation methods), (b) inflated standard errors, as well as (c) biased, unstable, and/or uninterpretable parameter estimates. Due to the issues in the results, substantive interpretation of models with collinearity can be inaccurate, sometimes in significant ways (e.g., nonsignificant predictors that are in fact significantly related to the outcome). In standard linear models, researchers can make use of variance inflation factor (VIF) or tolerance (Tol) indices to detect potential collinearity. Although zero-order correlations may be useful for detecting collinearity in rare instances, most researchers will want to use VIF or Tol to capture the potential for collinearity resulting from linear combinations of predictors. For statistical models that use iterative estimation (e.g., generalized linear models), researchers can turn to condition indices. Researchers can address collinearity issues in a myriad of ways. This includes basing models on well-developed a priori theoretical propositions to avoid including empirically or conceptually redundant variables in one’s model—this includes the careful and theoretically appropriate consideration of control variables. 
In addition, researchers can use data reduction techniques to aggregate correlated covariates (e.g., principal components analysis or exploratory factor analysis), and/or use well-constructed and well-validated measures so as to ensure that measurements of key variables are not related merely because of construct overlap.
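VIF detection can be sketched directly from a predictor correlation matrix: each VIF_j = 1/(1 - R_j^2) equals the j-th diagonal element of the inverse of the predictors' correlation matrix. The matrix below is a hypothetical example in which two predictors correlate at .80.

```python
import numpy as np

R = np.array([               # hypothetical correlations among 3 predictors
    [1.00, 0.80, 0.30],
    [0.80, 1.00, 0.40],
    [0.30, 0.40, 1.00],
])

# Diagonal of the inverse correlation matrix gives each predictor's VIF
vif = np.diag(np.linalg.inv(R))
# A common rule of thumb flags VIF > 10 (sometimes > 5) as problematic;
# here the 0.80 correlation drives up the VIFs of the first two predictors.
```

Tolerance is simply 1/VIF, so the same computation screens for both indices at once.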


Multilevel Theory, Methods, and Analyses in Management  

Michael T. Braun, Steve W. J. Kozlowski, and Goran Kuljanin

Multilevel theory (MLT) details how organizational constructs and processes operate and interact within and across levels. MLT focuses on two different inter-level relationships: bottom-up emergence and top-down effects. Emergence is when individuals’ thoughts, feelings, and/or behaviors are shaped by interactions and come to manifest themselves as collective, higher-level phenomena. The resulting higher-level phenomena can be either common, shared states across all individuals (i.e., compositional emergence) or stable, unique, patterned individual-level states (i.e., compilational emergence). Top-down effects are those representing influences from higher levels on the thoughts, feelings, and/or behaviors of individuals or other lower-level units. To date, most theoretical and empirical research has studied the top-down effects of either contextual variables or compositional emerged states. Using predominantly self-report survey methodologies collected at a single time point, this research commonly aggregates lower-level responses to form higher-level representations of variables. Then, a regression-based technique (e.g., random coefficient modeling, structural equation modeling) is used to statistically evaluate the direction and magnitude of the hypothesized effects. The current state of the literature as well as the traditional statistical and methodological approaches used to study MLT create three important knowledge gaps: a lack of understanding of the process of emergence; how top-down and bottom-up relationships change over time; and how inter-individual relationships within collectives form, dissolve, and change. These gaps make designing interventions to fix or improve the functioning of organizational systems incredibly difficult. As such, it is necessary to broaden the theoretical, methodological, and statistical approaches used to study multilevel phenomena in organizations. 
For example, computational modeling can be used to generate precise, dynamic theory to better understand the short- and long-term implications of multilevel relationships. Behavioral trace data, wearable sensor data, and other novel data collection techniques can be leveraged to capture constructs and processes over time without the drawbacks of survey fatigue or researcher interference. These data can then be analyzed using cutting-edge social network and longitudinal analyses to capture phenomena not readily apparent in hierarchically nested cross-sectional research.


Natural Experiments in Business Research Methods  

Michael C. Withers and Chi Hon Li

Causal identification is an important consideration for organizational researchers as they attempt to develop a theoretical understanding of the causes and effects of organizational phenomena. Without valid causal identification, insights regarding organizational phenomena are difficult to establish, given the inherent complexity of those phenomena, and organizational research is limited in its scientific progression. Randomized controlled experiments are often suggested as the ideal study design for addressing potential confounding effects and isolating true causal relationships. Nevertheless, only a few research questions lend themselves to this study design. In particular, the full randomization of subjects into treatment and control groups may not be possible due to empirical constraints. Within the strategic management area, for example, scholars often use secondary data to examine research questions related to competitive advantage and firm performance. Natural experiments are increasingly recognized as a viable approach to identifying causal relationships without true random assignment. Natural experiments leverage external sources of variation to isolate causal effects and avoid potentially confounding influences that often arise in observational data. Natural experiments require two key assumptions: the as-if random assignment assumption and the stable unit treatment value assumption. When these assumptions are met, natural experiments can be an important methodological approach for advancing causal understanding of organizational phenomena.
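A common estimator built on a natural experiment's as-if random shock is the difference-in-differences contrast, sketched below with hypothetical group means; the causal reading rests on the parallel-trends assumption implied by as-if random assignment.

```python
# Hypothetical mean outcomes before/after an external policy shock,
# for firms exposed to the shock (treated) versus unexposed firms (control)
treated_pre, treated_post = 10.0, 14.0
control_pre, control_post = 9.0, 11.0

# Difference-in-differences: treated change minus the shared time trend
did = (treated_post - treated_pre) - (control_post - control_pre)
# Here did = 2.0: the treated firms improved by 2 units beyond the trend
# both groups shared, interpretable causally under parallel trends.
```

In practice the same contrast is estimated in a regression with group, period, and interaction terms, which also yields standard errors and allows covariates.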


Necessary Condition Analysis (NCA) and Its Diffusion  

Jan Dul

Necessary condition analysis (NCA) understands cause–effect relations in terms of “necessary but not sufficient.” This means that without the right level of the cause, a certain effect cannot occur. This holds independently of other causes; the necessary condition can thus act as a single bottleneck, critical factor, constraint, or disqualifier that blocks the outcome when it is absent. NCA can be used as a stand-alone method or in multimethod research to complement regression-based methods such as multiple linear regression (MLR) and structural equation modeling (SEM), as well as methods like fuzzy set qualitative comparative analysis (fsQCA). The NCA method consists of four stages: formulation of necessary condition hypotheses, collection of data, analysis of data, and reporting of results. Based on existing methodological publications about NCA, guidelines for good NCA practice are summarized. These guidelines show how to conduct NCA with the NCA software and how to report the results. The guidelines help (potential) users, readers, and reviewers of NCA become more familiar with the method and understand how NCA should be applied and how results should be reported. NCA’s rapid diffusion and broad applicability in the social, technical, and medical sciences are illustrated by the growth in the number of article publications using NCA, the diversity of disciplines where NCA is applied, and the geographical spread of researchers who apply NCA.
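The bottleneck logic of necessity can be made concrete with NCA's ceiling idea: plotting condition X against outcome Y, a necessary condition shows up as an empty zone in the upper-left corner, bounded by a ceiling line. The sketch below implements one simple ceiling variant, a non-decreasing step function (in the spirit of ceiling envelopment, CE-FDH), on hypothetical data; the NCA software implements this and other ceiling techniques far more completely.

```python
# Minimal sketch of an NCA-style ceiling: for each level of the
# condition x, the ceiling is the highest outcome y observed at or
# below that x. Cases above the ceiling are absent, forming the
# "empty zone" that signals necessity. Data are hypothetical.

def step_ceiling(points):
    """Return the non-decreasing step-function ceiling over (x, y) cases."""
    ceiling, best = [], float("-inf")
    for x, y in sorted(points):
        best = max(best, y)      # highest outcome seen so far
        ceiling.append((x, best))
    return ceiling

# Hypothetical cases: (condition level, outcome level)
points = [(1, 2), (2, 5), (3, 4), (4, 7), (5, 6)]
ceiling = step_ceiling(points)
# Low condition levels cap the achievable outcome: at x=1, y never exceeds 2
```

Reading the ceiling as a bottleneck table answers questions of the form "what level of X is required for a given level of Y?", which is the stand-alone output NCA adds alongside regression-based average effects.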


Organizational Neuroscience  

Sebastiano Massaro and Dorotea Baljević

Organizational neuroscience—a novel scholarly domain using neuroscience to inform management and organizational research, and vice versa—is flourishing. Still missing, however, is a comprehensive coverage of organizational neuroscience as a self-standing scientific field. What is currently lacking is a foundational account of the potential that neuroscience holds to advance management and organizational research. This gap can be addressed with a review of the main methods that systematizes the existing scholarly literature in the field, including entrepreneurship, strategic management, and organizational behavior, among other areas.


Qualitative Comparative Analysis in Business and Management Research  

Johannes Meuer and Peer C. Fiss

During the last decade, qualitative comparative analysis (QCA) has become an increasingly popular research approach in the management and business literature. As an approach, QCA consists of both a set of analytical techniques and a conceptual perspective, and the origins of QCA as an analytical technique lie outside the management and business literature. In the 1980s, Charles Ragin, a sociologist and political scientist, developed a systematic, comparative methodology as an alternative to qualitative, case-oriented approaches and to quantitative, variable-oriented approaches. Whereas the analytical technique of QCA was developed outside the management literature, the conceptual perspective underlying QCA has a long history within it, in particular in the form of contingency and configurational theories, which have played an important role in management thinking since the late 1960s. Until the 2000s, management researchers only sporadically used QCA as an analytical technique. Between 2007 and 2008, a series of seminal articles in leading management journals laid the conceptual, methodological, and empirical foundations for QCA as a promising research approach in business and management. These articles led to a “first” wave of QCA research in management. During the first wave—occurring between approximately 2008 and 2014—researchers successfully published QCA-based studies in leading management journals and triggered important methodological debates, ultimately leading to a revival of the configurational perspective in the management literature. Following the first wave, a “second” wave—between 2014 and 2018—saw a rapid increase in QCA publications across several subfields in management research, the development of methodological applications of QCA, and an expansion of scholarly debates around the nature, opportunities, and future of QCA as a research approach.
The second wave of QCA research in business and management concluded with researchers taking stock of the plethora of empirical studies using QCA, identifying best-practice guidelines, and advocating for the rise of a “neo-configurational” perspective drawing on set-theoretic logic, causal complexity, and counterfactual analysis. Nowadays, QCA is an established approach in some research areas (e.g., organization theory, strategic management) and is diffusing into several adjacent areas (e.g., entrepreneurship, marketing, and accounting), a situation that promises new opportunities for advancing the analytical technique of QCA as well as configurational thinking and theorizing in the business and management literature. To advance the analytical foundations of QCA, researchers may, for example, develop robustness tests for QCA or focus on issues of endogeneity and omitted variables. To advance the conceptual foundations of QCA, researchers may, for example, clarify the links between configurational theory and related theoretical perspectives, such as systems theory or complexity theory, or develop theories on the temporal dynamics of configurations and configurational change. Ultimately, after a decade of growing use and interest in QCA and given the unique strengths of this approach for addressing questions relevant to management research, QCA will continue to influence research in business and management.
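QCA's set-theoretic logic can be illustrated with one of its basic measures: the consistency of a sufficiency claim, i.e., the share of cases exhibiting a configuration that also exhibit the outcome. The sketch below uses a crisp-set (binary) version with hypothetical firm attributes; fuzzy-set QCA generalizes this with set-membership scores, and dedicated QCA software adds truth-table minimization.

```python
# Crisp-set sufficiency consistency: of the cases showing condition X,
# what share also show outcome Y? Cases and attributes are hypothetical.

def consistency(cases, condition, outcome):
    """Share of cases with the condition that also exhibit the outcome."""
    with_x = [c for c in cases if c[condition]]
    if not with_x:
        return None  # condition never observed; consistency undefined
    return sum(1 for c in with_x if c[outcome]) / len(with_x)

cases = [
    {"large": True,  "diversified": True,  "high_perf": True},
    {"large": True,  "diversified": False, "high_perf": True},
    {"large": True,  "diversified": True,  "high_perf": False},
    {"large": False, "diversified": False, "high_perf": False},
]
cons = consistency(cases, "large", "high_perf")
# Two of the three large firms are high performers: consistency 2/3
```

A researcher would compute such scores for every configuration in the truth table and retain only those above a consistency threshold before logical minimization.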


Qualitative Designs and Methodologies for Business, Management, and Organizational Research  

Robert P. Gephart and Rohny Saylors

Qualitative research designs provide future-oriented plans for undertaking research. Designs should describe how to effectively address and answer a specific research question using qualitative data and qualitative analysis techniques. Designs connect research objectives to observations, data, methods, interpretations, and research outcomes. Qualitative research designs focus initially on collecting data to provide a naturalistic view of social phenomena and understand the meaning the social world holds from the point of view of social actors in real settings. The outcomes of qualitative research designs are situated narratives of peoples’ activities in real settings, reasoned explanations of behavior, discoveries of new phenomena, and the creation and testing of theories. A three-level framework can be used to describe the layers of qualitative research design and conceptualize its multifaceted nature. Note, however, that qualitative research is a flexible rather than fixed process, unlike conventional positivist research designs, which remain unchanged after data collection commences. Flexibility provides qualitative research with the capacity to alter foci during the research process and make new and emerging discoveries. The first or methods layer of the research design process uses social science methods to rigorously describe organizational phenomena and provide evidence that is useful for explaining phenomena and developing theory. Description is done using empirical research methods for data collection including case studies, interviews, participant observation, ethnography, and collection of texts, records, and documents.
The second or methodological layer of research design offers three formal logical strategies to analyze data and address research questions: (a) induction to answer descriptive “what” questions; (b) deduction and hypothesis testing to address theory-oriented “why” questions; and (c) abduction to understand questions about what, how, and why phenomena occur. The third or social science paradigm layer of research design is formed by broad social science traditions and approaches that reflect distinct theoretical epistemologies—theories of knowledge—and diverse empirical research practices. These perspectives include positivism, interpretive induction, and interpretive abduction (interpretive science). There are also scholarly research perspectives that reflect on and challenge or seek to change management thinking and practice, rather than producing rigorous empirical research or evidence-based findings. These perspectives include critical research, postmodern research, and organization development. Three additional issues are important to future qualitative research designs. First, there is renewed interest in the value of covert research undertaken without the informed consent of participants. Second, there is an ongoing discussion of the best style to use for reporting qualitative research. Third, there are new ways to integrate qualitative and quantitative data. Such integration is needed to better address the interplay of qualitative and quantitative phenomena found together in everyday discourse, an interplay that has often been overlooked.


Qualitative Research: Foundations, Approaches, and Practices  

Thomas Greckhamer and Sebnem Cilesiz

Qualitative research is an umbrella term that is typically used in contrast to quantitative research and captures research approaches that predominantly rely on collecting and analyzing qualitative data (i.e., data in the form of words, still or moving images, and artifacts). Qualitative research encompasses a wide range of research approaches with different philosophical and theoretical foundations and empirical procedures. Different assumptions about reality and knowledge underlying these diverse approaches guide researchers with respect to epistemological and methodological questions and inform their choices regarding research questions, data collection, data analysis, and the writing of research accounts. While at present a few dominant approaches are commonly used by researchers, a rich repertoire of qualitative approaches is available to management researchers that has the potential to facilitate deeper and broader insights into management phenomena.


Sampling Strategies for Quantitative and Qualitative Business Research  

Vivien Lee and Richard N. Landers

Sampling refers to the process used to identify and select cases for analysis (i.e., a sample) with the goal of drawing meaningful research conclusions. Sampling is integral to the overall research process, as it has substantial implications for the quality of research findings. Inappropriate sampling techniques can lead to problems of interpretation, such as drawing invalid conclusions about a population. Whereas sampling in quantitative research focuses on maximizing the statistical representativeness of a population by a chosen sample, sampling in qualitative research generally focuses on the complete representation of a phenomenon of interest. Because of this core difference in purpose, many sampling considerations differ between qualitative and quantitative approaches despite a shared general purpose: careful selection of cases to maximize the validity of conclusions. Achieving generalizability, the extent to which observed effects from one study can be used to predict the same and similar effects in different contexts, drives most quantitative research. Obtaining a representative sample with characteristics that reflect a targeted population is critical to making accurate statistical inferences, which is core to such research. Such samples can be best acquired through probability sampling, a procedure in which every member of the target population has a known, nonzero probability of being selected. However, probability sampling techniques are uncommon in modern quantitative research because of practical constraints; non-probability sampling, such as by convenience, is now normative. When sampling this way, special attention should be given to statistical implications of issues such as range restriction and omitted variable bias. In either case, careful planning is required to estimate an appropriate sample size before the start of data collection.
In contrast to generalizability, transferability, the degree to which study findings can be applied to other contexts, is the goal of most qualitative research. This approach is more concerned with providing information to readers and less concerned with making generalizable broad claims for readers. As in quantitative research, choosing a population and sample is critical for qualitative research, to help readers determine the likelihood of transfer, yet representativeness is not as crucial. Sample size determination in qualitative research is drastically different from that of quantitative research, because it should occur during data collection, in an ongoing process in search of saturation, which focuses on achieving theoretical completeness instead of maximizing the quality of statistical inference. Theoretically speaking, although quantitative and qualitative research have distinct statistical underpinnings that should drive different sampling requirements, in practice both heavily rely on non-probability samples, and the implications of non-probability sampling are often not well understood. Although non-probability samples do not automatically generate poor-quality data, incomplete consideration of case selection strategy can harm the validity of research conclusions. The nature and number of cases collected must be determined cautiously to respect research goals and the underlying scientific paradigm employed. Understanding the commonalities and differences in sampling between quantitative and qualitative research can help researchers better identify high-quality research designs across paradigms.
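The defining property of probability sampling described above—every population member has a known, nonzero probability of selection—is satisfied by its simplest form, simple random sampling, where each member has the equal probability n/N. A minimal sketch, using a hypothetical sampling frame of 100 cases:

```python
# Simple random sampling: each of the N frame members has the same
# known selection probability n/N. The frame here is hypothetical.
import random

def simple_random_sample(population, n, seed=None):
    """Draw n distinct cases uniformly at random, without replacement."""
    rng = random.Random(seed)  # seed only for reproducible illustration
    return rng.sample(population, n)

population = list(range(1, 101))          # hypothetical frame: cases 1..100
sample = simple_random_sample(population, 10, seed=42)
# Each case had a 10/100 = 0.10 probability of appearing in the sample
```

Convenience samples, by contrast, give some members an unknown (often zero) selection probability, which is exactly what makes range restriction and omitted-variable concerns hard to rule out.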


Social Network Analysis in Organizations  

Jessica R. Methot, Nazifa Zaman, and Hanbo Shim

A social network is a set of actors—that is, any discrete entity in a network, such as a person, team, organization, place, or collective social unit—and the ties connecting them—that is, some type of relationship, exchange, or interaction between actors that serves as a conduit through which resources such as information, trust, goodwill, advice, and support flow. Social network analysis (SNA) is the use of graph-theoretic and matrix algebraic techniques to study the social structure, interactions, and strategic positions of actors in social networks. As a methodological tool, SNA allows scholars to visualize and analyze webs of ties to pinpoint the composition, content, and structure of organizational networks, as well as to identify their origins and dynamics, and then link these features to actors’ attitudes and behaviors. Social network analysis is a valuable and unique lens for management research; there has been a marked shift toward the use of social network analysis to understand a host of organizational phenomena. To this end, organizational network analysis (ONA) is centered on how employees, groups, and organizations are connected and how these connections provide a quantifiable return on human capital investments. Although criticisms have traditionally been leveled against social network analysis, the foundations of network science have a rich history, and ONA has evolved into a well-established paradigm and a modern-day trend in management research and practice.
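The matrix-algebraic side of SNA can be shown with its most basic positional measure: degree centrality, an actor's number of ties normalized by the n−1 possible ties. The network below is a hypothetical four-person advice network with symmetric ties; specialized tools (e.g., igraph or NetworkX) provide this and far richer measures.

```python
# Degree centrality from an adjacency matrix: row sums divided by the
# n-1 possible ties. The advice network is hypothetical.

def degree_centrality(adj):
    """Normalized degree for each actor in a binary adjacency matrix."""
    n = len(adj)
    return [sum(row) / (n - 1) for row in adj]

# Hypothetical symmetric advice ties among four employees
adj = [
    [0, 1, 1, 1],
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
]
centrality = degree_centrality(adj)
# Actor 0 is tied to all others and is the most central
```

Linking such centrality scores to attitudes and behaviors—e.g., whether central employees receive more advice requests or show higher engagement—is the kind of ONA analysis the abstract describes.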