1-20 of 21 Results for: Research Methods

Article

James A. Muncy and Alice M. Muncy

Business research is conducted by both businesspeople, who have informational needs, and scholars, whose field of study is business. Though some of the specifics of how research is conducted differ between scholarly and applied research, the general process they follow is the same. Business research is conducted in five stages. The first stage is problem formation, where the objectives of the research are established. The second stage is research design. In this stage, the researcher identifies the variables of interest and possible relationships among those variables, decides on the appropriate data source and measurement approach, and plans the sampling methodology. It is also within the research design stage that the role that time will play in the study is determined. The third stage is data collection. Researchers must decide whether to outsource the data collection process or collect the data themselves. Data quality issues must also be addressed during collection. The fourth stage is data analysis. The data must be prepared and cleaned. Statistical packages or programs such as SAS, SPSS, STATA, and R are used to analyze quantitative data. In the case of qualitative data, coding, artificial intelligence, and/or interpretive analysis are employed. The fifth stage is the presentation of results. In applied business research, the results are typically limited in their distribution and must address the immediate problem at hand. In scholarly business research, the results are intended to be widely distributed through journals, books, and conferences. As a means of quality control, scholarly research usually goes through a double-blind review process before it is published.

Article

Eric Volmar and Kathleen M. Eisenhardt

Theory building from case studies is a research strategy that combines grounded theory building with case studies. Its purpose is to develop novel, accurate, parsimonious, and robust theory that emerges from and is grounded in data. Case research is well-suited to address “big picture” theoretical gaps and dilemmas, particularly when existing theory is inadequate. Further, this research strategy is particularly useful for answering questions of “how” through its deep and longitudinal immersion in a focal phenomenon. The process of conducting case study research includes a thorough literature review to identify an appropriate and compelling research question, a rigorous study design that involves artful theoretical sampling, rich and complete data collection from multiple sources, and a creative yet systematic grounded theory building process to analyze the cases and build emergent theory about significant phenomena. Rigorous theory building case research is fundamentally centered on strong emergent theory with precise theoretical logic and robust grounding in empirical data. Not surprisingly then, theory building case research is disproportionately represented among the most highly cited and award-winning research.

Article

Thomas Donaldson and Diana C. Robertson

Serious research into corporate ethics is nearly half a century old. Two approaches have dominated research; one is normative, the other empirical. The former, the normative approach, develops theories and norms that are prescriptive, that is, ones that are designed to guide corporate behavior. The latter, the empirical approach, investigates the character and causes of corporate behavior by examining corporate governance structures, policies, corporate relationships, and managerial behavior with the aim of explaining and predicting corporate behavior. Normative research has been led by scholars in the fields of moral philosophy, theology and legal theory. Empirical research has been led by scholars in the fields of sociology, psychology, economics, marketing, finance, and management. While utilizing distinct methods, the two approaches are symbiotic. Ethical and legal theory are irrelevant without factual context. Similarly, empirical theories are sterile unless translated into corporate guidance. The following description of the history of research in corporate ethics demonstrates that normative research methods are indispensable tools for empirical inquiry, even as empirical methods are indispensable tools for normative inquiry.

Article

Critical thinking is more than just fault-finding: it involves a range of thinking processes, including interpreting, analyzing, evaluating, inferencing, explaining, and self-regulating. The concept of critical thinking emerged from the field of education; however, it can, and should, be applied to other areas, particularly to research. Like most skills, critical thinking can be developed. However, critical thinking is also a mindset or a disposition that enables the consistent use and application of critical thought. Critical thinking is vital in business research because researchers are expected to demonstrate a systematic approach and cogency in the way they undertake and present their studies, especially if they are to be taken seriously and if prospective research users are to be persuaded by their findings. Critical thinking can be used in the key stages of many typical business research projects, specifically: the literature review; the use of inductive, deductive, and abductive reasoning and the relevant research design and methodology that follows; and the contribution to knowledge. Research is about understanding and explaining phenomena, which is usually the starting point to solve a problem or to take advantage of an opportunity. However, to gain new insights (or to claim to), one needs to know what is already known, which is why many research projects start with a literature review. A literature review is a systematic way of searching and categorizing literature that helps to build the researchers' confidence that they have identified and recognized prevailing (explicit) knowledge relevant to the development of their research questions. In a literature review, it is the job of the researcher to apply critical thinking in examining the ideas presented and scrutinizing the arguments of the authors. Critical thinking is also clearly crucial for effective reasoning. Reasoning is the way people rationalize and explain. However, in the context of research, the three generally accepted distinct forms of reasoning (inductive, deductive, and abductive) are better understood as specific approaches that shape how the literature, research questions, methods, and findings all come together. Inductive reasoning involves making an inference from the evidence in the researchers' possession and extrapolating what may happen, and why, based on that evidence. Deductive reasoning is a form of syllogism, an argument built on accepted premises, and involves choosing the most appropriate of the alternative hypotheses. Finally, abductive reasoning starts with an outcome and works backward to understand how and why, collecting data that can subsequently be decoded for significance (i.e., Is the identified factor directly related to the outcome?) and clarified for meaning (i.e., How did it contribute to the outcome?). Critical thinking is also crucial in the design of the research method, because it justifies the researchers' plan and actions in collecting data that are credible, valid, and reliable. Finally, critical thinking plays a role when researchers make arguments based on their research findings, ensuring that claims are grounded in the evidence and the procedures.

Article

To understand and communicate research findings, it is important for researchers to consider two types of information provided by research results: the magnitude of the effect and the degree of uncertainty in the outcome. Statistical significance tests have long served as the mainstream method for statistical inferences. However, the widespread misinterpretation and misuse of significance tests has led critics to question their usefulness in evaluating research findings and to raise concerns about the far-reaching effects of this practice on scientific progress. An alternative approach involves reporting and interpreting measures of effect size along with confidence intervals. An effect size is an indicator of magnitude and direction of a statistical observation. Effect size statistics have been developed to represent a wide range of research questions, including indicators of the mean difference between groups, the relative odds of an event, or the degree of correlation among variables. Effect sizes play a key role in evaluating practical significance, conducting power analysis, and conducting meta-analysis. While effect sizes summarize the magnitude of an effect, the confidence intervals represent the degree of uncertainty in the result. By presenting a range of plausible alternate values that might have occurred due to sampling error, confidence intervals provide an intuitive indicator of how strongly researchers should rely on the results from a single study.
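
As a rough illustration of these two pieces of information, the sketch below (Python, with simulated data and hypothetical variable names) computes Cohen's d for a two-group mean difference together with a 95% confidence interval for that difference.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group_a = rng.normal(loc=5.2, scale=1.0, size=80)   # hypothetical treatment ratings
group_b = rng.normal(loc=4.8, scale=1.0, size=80)   # hypothetical control ratings

# Effect size: Cohen's d (mean difference divided by the pooled standard deviation)
n_a, n_b = len(group_a), len(group_b)
pooled_sd = np.sqrt(((n_a - 1) * group_a.var(ddof=1) +
                     (n_b - 1) * group_b.var(ddof=1)) / (n_a + n_b - 2))
cohens_d = (group_a.mean() - group_b.mean()) / pooled_sd

# Uncertainty: 95% confidence interval for the raw mean difference
diff = group_a.mean() - group_b.mean()
se = pooled_sd * np.sqrt(1 / n_a + 1 / n_b)
df = n_a + n_b - 2
ci_low, ci_high = diff + np.array([-1, 1]) * stats.t.ppf(0.975, df) * se

print(f"Cohen's d = {cohens_d:.2f}, 95% CI for the mean difference: [{ci_low:.2f}, {ci_high:.2f}]")
```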

Article

Conducting credible and trustworthy research to inform managerial decisions is arguably the primary goal of business and management research. Research designs, particularly the various types of experimental designs available, are important building blocks for advancing toward this goal. Key criteria for evaluating research studies are internal validity (the ability to demonstrate causality), statistical conclusion validity (drawing correct conclusions from data), construct validity (the extent to which a study captures the phenomenon of interest), and external validity (the generalizability of results to other contexts). Perhaps most important, internal validity depends on the research design's ability to establish that the hypothesized cause and outcome are correlated, that variation in them occurs in the correct temporal order, and that alternative explanations of that relationship can be ruled out. Research designs vary greatly, especially in their internal validity. Generally, experiments offer the strongest causal inference, because the causal variables of interest are manipulated by the researchers and because random assignment makes subjects comparable, such that the sources of variation in the variables of interest can be well identified. Natural experiments can exhibit similar internal validity to the extent that researchers are able to exploit exogenous events that create (quasi-)randomized interventions. When randomization is not available, quasi-experiments aim at approximating experiments by making subjects as comparable as possible based on the best available information. Finally, non-experiments, which are often the only option in business and management research, can still offer useful insights, particularly when changes in the variables of interest can be modeled by adopting longitudinal designs.

Article

Hypothesis testing is an approach to statistical inference that is routinely taught and used. It is based on a simple idea: develop some relevant speculation about the population of individuals or things under study and determine whether data provide reasonably strong empirical evidence that the hypothesis is wrong. Consider, for example, two approaches to advertising a product. A study might be conducted to determine whether it is reasonable to assume that both approaches are equally effective. A Type I error is rejecting this speculation when in fact it is true. A Type II error is failing to reject when the speculation is false. A common practice is to test hypotheses with the Type I error probability set to 0.05 and to declare that there is a statistically significant result if the hypothesis is rejected. There are various concerns about, limitations to, and criticisms of this approach. One criticism is the use of the term significant. Consider the goal of comparing the means of two populations of individuals. Saying that a result is significant suggests that the difference between the means is large and important. But in the context of hypothesis testing it merely means that there is empirical evidence that the means are not equal. Situations can and do arise where a result is declared significant, but the difference between the means is trivial and unimportant. Indeed, the goal of testing the hypothesis that two means are equal has been criticized based on the argument that surely the means differ at some decimal place. A simple way of dealing with this issue is to reformulate the goal. Rather than testing for equality, determine whether it is reasonable to make a decision about which group has the larger mean. The components of hypothesis-testing techniques can be used to address this issue, with the understanding that the goal of testing some hypothesis has been replaced by the goal of determining whether a decision can be made about which group has the larger mean. Another aspect of hypothesis testing that has seen considerable criticism is the notion of a p-value. Suppose some hypothesis is rejected with the Type I error probability set to 0.05. This leaves open the issue of whether the hypothesis would be rejected with the Type I error probability set to 0.025 or 0.01. A p-value is the smallest Type I error probability for which the hypothesis is rejected. When comparing means, a p-value reflects the strength of the empirical evidence that a decision can be made about which has the larger mean. A concern about p-values is that they are often misinterpreted. For example, a small p-value does not necessarily mean that a large or important difference exists. Another common mistake is to conclude that if the p-value is close to zero, there is a high probability of rejecting the hypothesis again if the study is replicated. The probability of rejecting again is a function of the extent to which the hypothesis is not true, among other things. Because a p-value does not directly reflect the extent to which the hypothesis is false, it does not provide a good indication of whether a second study will provide evidence to reject it. Confidence intervals are closely related to hypothesis-testing methods. Basically, they are intervals that contain unknown quantities with some specified probability. For example, a goal might be to compute an interval that contains the difference between two population means with probability 0.95. Confidence intervals can be used to determine whether some hypothesis should be rejected. Clearly, confidence intervals provide useful information not provided by testing hypotheses and computing a p-value. But an argument for a p-value is that it provides a perspective on the strength of the empirical evidence that a decision can be made about the relative magnitude of the parameters of interest. For example, to what extent is it reasonable to decide whether the first of two groups has the larger mean? Even if a compelling argument can be made that p-values should be completely abandoned in favor of confidence intervals, there are situations where p-values provide a convenient way of developing reasonably accurate confidence intervals. Another argument against p-values is that because they are misinterpreted by some, they should not be used. But if this argument is accepted, it follows that confidence intervals should be abandoned because they are often misinterpreted as well. Classic hypothesis-testing methods for comparing means and studying associations assume sampling is from a normal distribution. A fundamental issue is whether nonnormality can be a source of practical concern. Based on hundreds of papers published during the last 50 years, the answer is an unequivocal yes. Granted, there are situations where nonnormality is not a practical concern, but nonnormality can have a substantial negative impact on both Type I and Type II errors. Fortunately, there is a vast literature describing how to deal with known concerns. Results based solely on some hypothesis-testing approach have clear implications for methods aimed at computing confidence intervals. Nonnormal distributions that tend to generate outliers are one source of concern. There are effective methods for dealing with outliers, but technically sound techniques are not obvious based on standard training. Skewed distributions are another concern. The combination of what are called bootstrap methods and robust estimators provides techniques that are particularly effective for dealing with nonnormality and outliers. Classic methods for comparing means and studying associations also assume homoscedasticity. When comparing means, this means that groups are assumed to have the same amount of variance even when the means of the groups differ. Violating this assumption can have serious negative consequences in terms of both Type I and Type II errors, particularly when the normality assumption is violated as well. There is a vast literature describing how to deal with this issue in a technically sound manner.
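
The contrast between classic and robust approaches can be illustrated with a short, hedged sketch in Python: a Welch t-test (which drops the equal-variance assumption) followed by a percentile bootstrap confidence interval for the difference in 20% trimmed means. The data and variable names are simulated and purely hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical responses to two advertising approaches (skewed, unequal variance)
ad_a = rng.lognormal(mean=1.0, sigma=0.6, size=60)
ad_b = rng.lognormal(mean=1.2, sigma=0.9, size=60)

# Classic approach: Welch's t-test (does not assume equal variances)
t_stat, p_value = stats.ttest_ind(ad_a, ad_b, equal_var=False)
print(f"Welch t = {t_stat:.2f}, p = {p_value:.3f}")

# Robust alternative: percentile bootstrap CI for the difference in 20% trimmed means
boot_diffs = []
for _ in range(5000):
    resample_a = rng.choice(ad_a, size=ad_a.size, replace=True)
    resample_b = rng.choice(ad_b, size=ad_b.size, replace=True)
    boot_diffs.append(stats.trim_mean(resample_a, 0.2) - stats.trim_mean(resample_b, 0.2))
ci = np.percentile(boot_diffs, [2.5, 97.5])
print(f"95% bootstrap CI for trimmed-mean difference: [{ci[0]:.2f}, {ci[1]:.2f}]")
```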

Article

Rand R. Wilcox

Inferential statistical methods stem from the distinction between a sample and a population. A sample refers to the data at hand. For example, 100 adults may be asked which of two olive oils they prefer. Imagine that 60 say brand A. But of interest is the proportion of all adults who would prefer brand A if they could be asked. To what extent does 60% reflect the true proportion of adults who prefer brand A? There are several components to inferential methods. They include assumptions about how to model the probabilities of all possible outcomes. Another is how to model outcomes of interest. Imagine, for example, that there is interest in understanding the overall satisfaction with a particular automobile given an individual's age. One strategy is to assume that the typical response, Y, given an individual's age, X, is given by Y = β₀ + β₁X, where the slope, β₁, and intercept, β₀, are unknown constants, in which case a sample would be used to make inferences about their values. Assumptions are also made about how the data were obtained. Was this done in a manner for which random sampling can be assumed? There is even an issue related to the very notion of what is meant by probability. Let μ denote the population mean of Y. The frequentist approach views probabilities in terms of relative frequencies, and μ is viewed as a fixed, unknown constant. In contrast, the Bayesian approach views μ as having some distribution that is specified by the investigator. For example, it may be assumed that μ has a normal distribution. The point is that the probabilities associated with μ are not based on the notion of relative frequencies, and they are not based on the data at hand. Rather, the probabilities associated with μ stem from judgments made by the investigator. Inferential methods can be classified into three types: distribution free, parametric, and non-parametric. The meaning of the term “non-parametric” depends on the situation, as will be explained. The choice between parametric and non-parametric methods can be crucial for reasons that will be outlined. To complicate matters, the number of inferential methods has grown tremendously during the last 50 years. Even for goals that may seem relatively simple, such as comparing two independent groups of individuals, there are numerous methods that may be used. Expert guidance can be crucial in terms of understanding what inferences are reasonable in a given situation.
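
To make the regression example concrete, here is a minimal frequentist sketch in Python (simulated data; the statsmodels package and the variable names are assumptions): the sample is used to estimate the fixed, unknown constants β₀ and β₁ and to form confidence intervals for them.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
# Hypothetical data: age (X) and overall satisfaction rating (Y)
age = rng.uniform(20, 70, size=200)
satisfaction = 2.0 + 0.05 * age + rng.normal(scale=1.0, size=200)

# Frequentist inference about the fixed, unknown constants beta0 and beta1
X = sm.add_constant(age)                 # adds the intercept column
model = sm.OLS(satisfaction, X).fit()
print(model.params)                      # point estimates of beta0 and beta1
print(model.conf_int(alpha=0.05))        # 95% confidence intervals
```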

Article

Heather A. Haveman and Gillian Gualtieri

Research on institutional logics surveys systems of cultural elements (values, beliefs, and normative expectations) by which people, groups, and organizations make sense of and evaluate their everyday activities, and organize those activities in time and space. Although there were scattered mentions of this concept before 1990, this literature really began with the 1991 publication of a theory piece by Roger Friedland and Robert Alford. Since that time, it has become a large and diverse area of organizational research. Several books and thousands of papers and book chapters have been published on this topic, addressing institutional logics in sites as different as climate change proceedings of the United Nations, local banks in the United States, and business groups in Taiwan. Several intellectual precursors to institutional logics provide a detailed explanation of the concept and the theory surrounding it. These literatures developed over time within the broader framework of theory and empirical work in sociology, political science, and anthropology. Papers published in ten major sociology and management journals in the United States and Europe (between 1990 and 2015) provide analysis and help to identify trends in theoretical development and empirical findings. Evaluating these trends suggests three gentle corrections and potentially useful extensions to the literature that can guide future research: (1) limiting the definition of institutional logic to cultural-cognitive phenomena, rather than including material phenomena; (2) recognizing both “cold” (purely rational) cognition and “hot” (emotion-laden) cognition; and (3) developing and testing a theory (or multiple related theories), meaning a logically interconnected set of propositions concerning a delimited set of social phenomena, derived from assumptions about essential facts (axioms), that details causal mechanisms and yields empirically testable (falsifiable) hypotheses, by being more consistent about how we use concepts in theoretical statements; assessing the reliability and validity of our empirical measures; and conducting meta-analyses of the many inductive studies that have been published, to develop deductive theories.

Article

Statistics used to index interrater similarity are prevalent in many areas of the social sciences, with multilevel research being one of the most common domains for estimating interrater similarity. Multilevel research spans multiple hierarchical levels, such as individuals, teams, departments, and the organization. There are three main research questions that multilevel researchers answer using indices of interrater agreement and interrater reliability: (a) Does the nesting of lower-level units (e.g., employees) within higher-level units (e.g., work teams) result in the non-independence of residuals, which is an assumption of the general linear model? (b) Is there sufficient agreement between scores on measures collected from lower-level units (e.g., employees' perceptions of customer service climate) to justify aggregating the data to the higher level (e.g., team-level climate)? (c) Following data aggregation, how effective are the higher-level unit means at distinguishing between those higher-level units (e.g., how reliably do team climate scores distinguish between the teams)? Interrater agreement and interrater reliability refer to the extent to which lower-level data nested or clustered within a higher-level unit are similar to one another. While closely related, interrater agreement and reliability differ from one another in how similarity is defined. Interrater reliability is the relative consistency in lower-level data: for example, to what degree do the scores assigned by raters tend to correlate with one another? Alternatively, interrater agreement is the consensus of the lower-level data points: for example, estimates of interrater agreement are used to determine the extent to which ratings made by judges/observers could be considered interchangeable or equivalent in terms of their values. Thus, while interrater agreement and reliability both estimate the similarity of ratings by judges/observers, they define interrater similarity in slightly different ways, and these statistics are suited to address different types of research questions. The first research question that these statistics address, the issue of non-independence, is typically examined using an intraclass correlation statistic that is a function of both interrater reliability and agreement. However, in the context of non-independence, the intraclass correlation is most often interpreted as an effect size. The second multilevel research question, concerning adequate agreement to aggregate lower-level data to a higher level, requires a measure of interrater agreement, as the researcher is looking for consensus among raters. Finally, the third multilevel research question, concerning the reliability of higher-level means, not only requires a different variation of the intraclass correlation but is also a function of both interrater reliability and agreement. Multilevel research requires researchers to appropriately apply interrater agreement and/or reliability statistics to their data, as well as follow best practices for calculating and interpreting these statistics.
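
As a hedged sketch of how two of these indices are computed, the Python snippet below derives ICC(1) (often used to gauge non-independence) and ICC(2) (the reliability of group means) from one-way ANOVA mean squares, assuming equal group sizes; the data and variable names are hypothetical, and agreement indices such as rwg would be computed separately.

```python
import pandas as pd

# Hypothetical data: climate ratings from employees nested in teams (equal team size k)
df = pd.DataFrame({
    "team":   ["A"] * 5 + ["B"] * 5 + ["C"] * 5 + ["D"] * 5,
    "rating": [4, 5, 4, 5, 4,  2, 3, 2, 2, 3,  5, 5, 4, 5, 5,  3, 3, 4, 3, 3],
})

k = df.groupby("team").size().iloc[0]          # raters per team (assumed equal)
grand_mean = df["rating"].mean()
team_means = df.groupby("team")["rating"].mean()

# One-way ANOVA mean squares
ss_between = k * ((team_means - grand_mean) ** 2).sum()
ss_within = ((df["rating"] - df["team"].map(team_means)) ** 2).sum()
ms_between = ss_between / (team_means.size - 1)
ms_within = ss_within / (len(df) - team_means.size)

icc1 = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)  # non-independence / effect size
icc2 = (ms_between - ms_within) / ms_between                          # reliability of team means
print(f"ICC(1) = {icc1:.2f}, ICC(2) = {icc2:.2f}")
```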

Article

Intersectionality is a critical framework that provides us with the mindset and language for examining interconnections and interdependencies between social categories and systems. Intersectionality is relevant for researchers and for practitioners because it enhances analytical sophistication and offers theoretical explanations of the ways in which heterogeneous members of specific groups (such as women) might experience the workplace differently depending on their ethnicity, sexual orientation, and/or class and other social locations. Sensitivity to such differences enhances insight into issues of social justice and inequality in organizations and other institutions, thus maximizing the chance of social change. The concept of intersectional locations emerged from the racialized experiences of minority ethnic women in the United States. Intersectional thinking has gained increased prominence in business and management studies, particularly in critical organization studies. A predominant focus in this field is on individual subjectivities at intersectional locations (such as examining the occupational identities of minority ethnic women). This emphasis on individuals’ experiences and within-group differences has been described variously as “content specialization” or an “intracategorical approach.” An alternate focus in business and management studies is on highlighting systematic dynamics of power. This encompasses a focus on “systemic intersectionality” and an “intercategorical approach.” Here, scholars examine multiple between-group differences, charting shifting configurations of inequality along various dimensions. As a critical theory, intersectionality conceptualizes knowledge as situated, contextual, relational, and reflective of political and economic power. Intersectionality tends to be associated with qualitative research methods due to the central role of giving voice, elicited through focus groups, narrative interviews, action research, and observations. Intersectionality is also utilized as a methodological tool for conducting qualitative research, such as by researchers adopting an intersectional reflexivity mindset. Intersectionality is also increasingly associated with quantitative and statistical methods, which contribute to intersectionality by helping us understand and interpret the individual, combined (additive or multiplicative) effects of various categories (privileged and disadvantaged) in a given context. Future considerations for intersectionality theory and practice include managing its broad applicability while attending to its sociopolitical and emancipatory aims, and theoretically advancing understanding of the simultaneous forces of privilege and penalty in the workplace.

Article

A limited dependent variable (LDV) is an outcome or response variable whose value is either restricted to a small number of (usually discrete) values or limited in its range of values. The first type of LDV is commonly called a categorical variable; its value indicates the group or category to which an observation belongs (e.g., male or female). Such categories often represent different choice outcomes, where interest centers on modeling the probability each outcome is selected. An LDV of the second type arises when observations are drawn about a variable whose distribution is truncated, or when some values of a variable are censored, implying that some values are wholly or partially unobserved. Methods such as linear regression are inadequate for obtaining statistically valid inferences in models that involve an LDV. Instead, different methods are needed that can account for the unique statistical characteristics of a given LDV.
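
A minimal sketch of the first type of LDV, assuming Python with statsmodels and entirely hypothetical data: a binary purchase outcome is modeled with a logit specification rather than with linear regression.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
# Hypothetical binary choice: did the customer purchase (1) or not (0), given the price
price = rng.uniform(5, 50, size=500)
purchase = (rng.uniform(size=500) < 1 / (1 + np.exp(-(3.0 - 0.1 * price)))).astype(int)

X = sm.add_constant(price)
logit_fit = sm.Logit(purchase, X).fit(disp=False)   # models Pr(purchase = 1 | price)
print(logit_fit.params)                             # coefficients on the log-odds scale

# A linear regression on a 0/1 outcome can predict probabilities outside [0, 1],
# which is one reason LDV-specific estimators are preferred here.
```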

Article

Meta-analysis and structural equation modeling (SEM) are two popular statistical models in the social, behavioral, and management sciences. Meta-analysis summarizes research findings to provide an estimate of the average effect and its heterogeneity. When there is moderate to high heterogeneity, moderators such as study characteristics may be used to explain the heterogeneity in the data. On the other hand, SEM includes several special cases, including the general linear model, path model, and confirmatory factor analytic model. SEM allows researchers to test hypothetical models with empirical data. Meta-analytic structural equation modeling (MASEM) is a statistical approach combining the advantages of both meta-analysis and SEM for fitting structural equation models on a pool of correlation matrices. There are usually two stages in the analyses. In the first stage of analysis, a pool of correlation matrices is combined to form an average correlation matrix. In the second stage of analysis, proposed structural equation models are tested against the average correlation matrix. MASEM enables researchers to synthesize research findings using SEM as the research tool in primary studies. There are several popular approaches to conducting MASEM, including the univariate-r, generalized least squares, two-stage SEM (TSSEM), and one-stage MASEM (OSMASEM) approaches. MASEM helps to answer the following key research questions: (a) Are the correlation matrices homogeneous? (b) Do the proposed models fit the data? (c) Are there moderators that can be used to explain the heterogeneity of the correlation matrices? The MASEM framework has also been expanded to analyze large datasets or big data with or without the raw data.
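
As a rough, simplified sketch of the first (pooling) stage only, the Python snippet below averages hypothetical correlation matrices weighted by sample size, a crude fixed-effect pooling; the actual TSSEM/OSMASEM estimators (e.g., in the metaSEM R package) use more sophisticated weighting and inference.

```python
import numpy as np

# Hypothetical input: correlation matrices among the same 3 variables from 3 primary studies
R1 = np.array([[1.0, 0.30, 0.20], [0.30, 1.0, 0.40], [0.20, 0.40, 1.0]])
R2 = np.array([[1.0, 0.25, 0.15], [0.25, 1.0, 0.35], [0.15, 0.35, 1.0]])
R3 = np.array([[1.0, 0.40, 0.30], [0.40, 1.0, 0.45], [0.30, 0.45, 1.0]])
Rs = [R1, R2, R3]
ns = np.array([120, 200, 80])          # study sample sizes

# Stage 1 (simplified): pool the matrices into an average correlation matrix,
# weighting each study by its sample size.
pooled_R = sum(n * R for n, R in zip(ns, Rs)) / ns.sum()
print(pooled_R.round(3))

# Stage 2 would fit the proposed structural equation model to pooled_R
# (e.g., with a dedicated SEM/MASEM package), using the total N for inference.
```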

Article

Hettie A. Richardson and Marcia J. Simmering

Nonresponse and the missing data that it produces are ubiquitous in survey research, but they are also present in archival and other forms of research. Nonresponse and missing data can be especially problematic in organizational contexts, where the risks of providing personal or organizational data might be perceived as (or actually be) greater than in public opinion contexts. Moreover, nonresponse and missing data are presenting new challenges with the advent of online and mobile survey technology. When observational units (e.g., individuals, teams, organizations) do not provide some or all of the information sought by a researcher and the reasons for nonresponse are systematically related to the survey topic, nonresponse bias can result and the research community may draw faulty conclusions. Due to concerns about nonresponse bias, scholars have spent several decades seeking to understand why participants choose not to respond to certain items and entire surveys, and how best to avoid nonresponse through actions such as improved study design, the use of incentives, and follow-up initiatives. At the same time, researchers recognize that it is virtually impossible to avoid nonresponse and missing data altogether, and as such, in any given study there will likely be a need to diagnose patterns of missingness and their potential for bias. There will likewise be a need to deal with missing data statistically by employing post hoc mechanisms that maximize the sample available for hypothesis testing and minimize the extent to which missing data obscure the underlying true characteristics of the dataset. In this connection, a large body of programmatic research supports maximum likelihood (ML) and multiple imputation (MI) as useful data replacement procedures, although in some situations it might be reasonable to use simpler procedures instead. Despite strong support for these statistical techniques, organizational scholars have yet to embrace them. Instead, they tend to rely on approaches such as listwise deletion that do not preserve underlying data characteristics, reduce the sample available for statistical analysis, and, in some cases, actually exacerbate the potential problems associated with missing data. Although there are certainly remaining questions that can be addressed about missing data techniques, these techniques are also well understood and validated. There remains, however, a strong need for exploration into the nature, causes, and extent of nonresponse in various organizational contexts, such as when using online and mobile surveys. Such research could play a useful role in helping researchers avoid nonresponse in organizational settings, as well as extend insight about how best and when to apply validated missing data techniques.
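
As an illustrative sketch only (Python; scikit-learn's IterativeImputer stands in for the model-based logic behind ML/MI, and the data are hypothetical), the snippet below contrasts listwise deletion with model-based imputation. A full multiple-imputation workflow would repeat the imputation with different random seeds and pool the results.

```python
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (enables the estimator)
from sklearn.impute import IterativeImputer

# Hypothetical survey data with item nonresponse (NaN = missing)
df = pd.DataFrame({
    "job_sat":    [4.0, np.nan, 3.0, 5.0, 2.0, np.nan, 4.0, 3.0],
    "engagement": [3.5, 4.0, np.nan, 4.5, 2.0, 3.0, 4.0, 2.5],
    "tenure_yrs": [2, 10, 4, 7, 1, 12, 6, np.nan],
})

# Listwise deletion: discards any row with a missing value, shrinking the sample
listwise = df.dropna()
print(f"Listwise deletion keeps {len(listwise)} of {len(df)} cases")

# Model-based imputation: each variable is predicted from the others iteratively
imputer = IterativeImputer(random_state=0, max_iter=10)
imputed = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)
print(imputed.round(2))
```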

Article

Michael T. Braun, Steve W. J. Kozlowski, and Goran Kuljanin

Multilevel theory (MLT) details how organizational constructs and processes operate and interact within and across levels. MLT focuses on two different inter-level relationships: bottom-up emergence and top-down effects. Emergence is when individuals’ thoughts, feelings, and/or behaviors are shaped by interactions and come to manifest themselves as collective, higher-level phenomena. The resulting higher-level phenomena can be either common, shared states across all individuals (i.e., compositional emergence) or stable, unique, patterned individual-level states (i.e., compilational emergence). Top-down effects are those representing influences from higher levels on the thoughts, feelings, and/or behaviors of individuals or other lower-level units. To date, most theoretical and empirical research has studied the top-down effects of either contextual variables or compositional emerged states. Using predominantly self-report survey methodologies collected at a single time point, this research commonly aggregates lower-level responses to form higher-level representations of variables. Then, a regression-based technique (e.g., random coefficient modeling, structural equation modeling) is used to statistically evaluate the direction and magnitude of the hypothesized effects. The current state of the literature as well as the traditional statistical and methodological approaches used to study MLT create three important knowledge gaps: a lack of understanding of the process of emergence; how top-down and bottom-up relationships change over time; and how inter-individual relationships within collectives form, dissolve, and change. These gaps make designing interventions to fix or improve the functioning of organizational systems incredibly difficult. As such, it is necessary to broaden the theoretical, methodological, and statistical approaches used to study multilevel phenomena in organizations. For example, computational modeling can be used to generate precise, dynamic theory to better understand the short- and long-term implications of multilevel relationships. Behavioral trace data, wearable sensor data, and other novel data collection techniques can be leveraged to capture constructs and processes over time without the drawbacks of survey fatigue or researcher interference. These data can then be analyzed using cutting-edge social network and longitudinal analyses to capture phenomena not readily apparent in hierarchically nested cross-sectional research.

Article

This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Business and Management. Necessary Condition Analysis (NCA) understands cause-effect relations as “necessary but not sufficient”: without the right level of the cause, a certain effect cannot occur. This is independent of other causes; thus the necessary condition can be a single bottleneck, critical factor, constraint, disqualifier, or the like that blocks the outcome. This logic differs from conventional additive logic, where factors on average contribute to an outcome and can compensate for each other. NCA complements conventional methods such as multiple regression and structural equation modeling. Applying NCA can provide new theoretical and practical insights by identifying the level of a factor that must be put and kept in place for the outcome to occur. A necessary condition that is not in place guarantees failure of the outcome and makes changes in other contributing factors ineffective. NCA's data analysis allows for a (multiple) bivariate analysis. NCA puts a ceiling line on the data in an XY-scatter plot. This line separates the space with cases from the space without cases. An empty space in the upper left corner of the scatter plot indicates that the presence of X is necessary for the presence of Y. The larger the empty space relative to the total space, the more X constrains Y, and hence the larger the necessity effect size. A point on the ceiling line represents the level Xc of X that is necessary, but not sufficient, for level Yc of Y. NCA is applicable to any discipline. It has already been applied in various business and management fields, including strategy, organizational behavior, human resource management, operations, finance, innovation, and entrepreneurship. More information about the method and its free R software package can be found on the NCA website.
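
A rough sketch of the empty-space logic in Python (not the official NCA R package, and with simulated, hypothetical data): a step-function ceiling is taken as the running maximum of Y as X increases, and the necessity effect size is approximated as the share of the observed scope left empty above that ceiling.

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical data: X = capability score, Y = performance, generated so that
# high Y rarely occurs without a fairly high X (a necessity-like pattern)
x = rng.uniform(0, 10, size=200)
y = np.minimum(rng.uniform(0, 10, size=200), x + rng.uniform(0, 2, size=200))

order = np.argsort(x)
xs, ys = x[order], y[order]
ceiling = np.maximum.accumulate(ys)        # step-function ceiling: running max of Y as X grows

# Area under the ceiling within the observed scope, via step-wise rectangles
x_min, x_max, y_min, y_max = xs.min(), xs.max(), ys.min(), ys.max()
widths = np.diff(xs)
area_under_ceiling = np.sum(widths * (ceiling[:-1] - y_min))
scope_area = (x_max - x_min) * (y_max - y_min)

# Necessity effect size: share of the scope that is empty in the upper-left corner
effect_size = (scope_area - area_under_ceiling) / scope_area
print(f"Approximate necessity effect size d = {effect_size:.2f}")
```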

Article

During the last decade, qualitative comparative analysis (QCA) has become an increasingly popular research approach in the management and business literature. As an approach, QCA consists of both a set of analytical techniques and a conceptual perspective, and the origins of QCA as an analytical technique lie outside the management and business literature. In the 1980s, Charles Ragin, a sociologist and political scientist, developed a systematic, comparative methodology as an alternative to qualitative, case-oriented approaches and to quantitative, variable-oriented approaches. Whereas the analytical technique of QCA was developed outside the management literature, the conceptual perspective underlying QCA has a long history in the management literature, in particular in the form of contingency and configurational theory that have played an important role in management theories since the late 1960s. Until the 2000s, management researchers only sporadically used QCA as an analytical technique. Between 2007 and 2008, a series of seminal articles in leading management journals laid the conceptual, methodological, and empirical foundations for QCA as a promising research approach in business and management. These articles led to a “first” wave of QCA research in management. During the first wave—occurring between approximately 2008 and 2014—researchers successfully published QCA-based studies in leading management journals and triggered important methodological debates, ultimately leading to a revival of the configurational perspective in the management literature. Following the first wave, a “second” wave—between 2014 and 2018—saw a rapid increase in QCA publications across several subfields in management research, the development of methodological applications of QCA, and an expansion of scholarly debates around the nature, opportunities, and future of QCA as a research approach. The second wave of QCA research in business and management concluded with researchers’ taking stock of the plethora of empirical studies using QCA for identifying best practice guidelines and advocating for the rise of a “neo-configurational” perspective, a perspective drawing on set-theoretic logic, causal complexity, and counterfactual analysis. Nowadays, QCA is an established approach in some research areas (e.g., organization theory, strategic management) and is diffusing into several adjacent areas (e.g., entrepreneurship, marketing, and accounting), a situation that promises new opportunities for advancing the analytical technique of QCA as well as configurational thinking and theorizing in the business and management literature. To advance the analytical foundations of QCA, researchers may, for example, advance robustness tests for QCA or focus on issues of endogeneity and omitted variables in QCA. To advance the conceptual foundations of QCA, researchers may, for example, clarify the links between configurational theory and related theoretical perspectives, such as systems theory or complexity theory, or develop theories on the temporal dynamics of configurations and configurational change. Ultimately, after a decade of growing use and interest in QCA and given the unique strengths of this approach for addressing questions relevant to management research, QCA will continue to influence research in business and management.

Article

Qualitative research designs provide future-oriented plans for undertaking research. Designs should describe how to effectively address and answer a specific research question using qualitative data and qualitative analysis techniques. Designs connect research objectives to observations, data, methods, interpretations, and research outcomes. Qualitative research designs focus initially on collecting data to provide a naturalistic view of social phenomena and understand the meaning the social world holds from the point of view of social actors in real settings. The outcomes of qualitative research designs are situated narratives of people's activities in real settings, reasoned explanations of behavior, discoveries of new phenomena, and the creation and testing of theories. A three-level framework can be used to describe the layers of qualitative research design and conceptualize its multifaceted nature. Note, however, that qualitative research is a flexible, not fixed, process, unlike conventional positivist research designs, which are unchanged after data collection commences. Flexibility provides qualitative research with the capacity to alter foci during the research process and make new and emerging discoveries. The first, or methods, layer of the research design process uses social science methods to rigorously describe organizational phenomena and provide evidence that is useful for explaining phenomena and developing theory. Description is done using empirical research methods for data collection, including case studies, interviews, participant observation, ethnography, and collection of texts, records, and documents. The second, or methodological, layer of research design offers three formal logical strategies to analyze data and address research questions: (a) induction to answer descriptive “what” questions; (b) deduction and hypothesis testing to address theory-oriented “why” questions; and (c) abduction to understand questions about what, how, and why phenomena occur. The third, or social science paradigm, layer of research design is formed by broad social science traditions and approaches that reflect distinct theoretical epistemologies (theories of knowledge) and diverse empirical research practices. These perspectives include positivism, interpretive induction, and interpretive abduction (interpretive science). There are also scholarly research perspectives that reflect on and challenge or seek to change management thinking and practice, rather than producing rigorous empirical research or evidence-based findings. These perspectives include critical research, postmodern research, and organization development. Three additional issues are important to future qualitative research designs. First, there is renewed interest in the value of covert research undertaken without the informed consent of participants. Second, there is an ongoing discussion of the best style to use for reporting qualitative research. Third, there are new ways to integrate qualitative and quantitative data. These are needed to better address the interplay of qualitative and quantitative phenomena that are both found in everyday discourse, an interplay that has been overlooked.

Article

Hugo Pinto and Manuel Fernández-Esquinas

This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Business and Management. In order to obtain competitive advantages, firms have to make use of knowledge as the main element of their capacities for innovation and management. Innovation is a complex and collective process, resulting from different contexts, socioeconomic aspects, and specificities of firms, which create nuanced management and policy implications. Sources of knowledge are varied, as each firm interacts with multiple types of actors to pursue its mission: partners and strategic allies, suppliers, customers, competitors, specialized organizations such as knowledge-intensive business services, universities, technology centers, public research organizations, innovation intermediaries, and public administration bodies. Different kinds of knowledge, both tacit and codified, are relevant for firms. Knowledge needs to be translated into a capacity to act. Knowledge generation and absorption can be understood as two sides of the same coin. It is necessary to take into account the factors that shape both facets and the relationship between the production, transfer, and valorization of knowledge. Influential factors concerning knowledge characteristics relate to tacitness and to the existing knowledge base. Contextual factors, such as the economic sector, technological intensity, the local buzz, and insertion in global value chains, are essential environmental enablers for generating and absorbing knowledge. Finally, the internal characteristics of the firm are of crucial relevance, namely the existing innovation culture and leadership, as well as size and internal R&D capacities. These factors reinforce the dynamic capacities of the firm and the decision to engage in open innovation strategies or to give more importance to strategies that protect and codify knowledge, such as industrial property rights.

Article

Don H. Kluemper

The use of surveys is prevalent in academic research in general, and particularly in business and management. As an example, self-report surveys alone are the most common data source in the social sciences. Survey design, however, involves a wide range of methodological decisions, each with its own strengths, limitations, and trade-offs. There is a broad set of issues associated with survey design, ranging from strategic concerns to nuanced choices among methodological and design alternatives. Further, decision points associated with survey design involve a series of trade-offs, as the strengths of a particular approach might come with inherent weaknesses. Surveys are couched within a broader scientific research process. First and foremost, the problem being studied should have sufficient impact, should be driven by a strong theoretical rationale, should employ rigorous research methods and a design appropriate to test the theory, and should use appropriate analyses and best practices, such that there is confidence in the scientific rigor of any given study and thus confidence in the results. Best practice requires balancing a range of methodological concerns and trade-offs that relate to the development of robust survey designs, including making causal inferences; internal, external, and ecological validity; common method variance; choice of data sources; multilevel issues; measure selection, modification, and development; appropriate use of control variables; conducting power analysis; and methods of administration. There are salient concerns regarding the administration of surveys, including increasing response rates as well as minimizing responses that are careless and/or reflect social desirability. Finally, decision points arise after surveys are administered, including missing data, organization of research materials, questionable research practices, and statistical considerations. A comprehensive understanding of this array of interrelated survey design issues associated with theory, study design, implementation, and analysis enhances scientific rigor.
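
For instance, a minimal a priori power analysis sketch (Python with statsmodels; the effect size, alpha, and power values are assumed for illustration) returns the per-group sample size needed for a two-group comparison.

```python
import math
from statsmodels.stats.power import TTestIndPower

# A priori power analysis for a two-group comparison (assumed inputs:
# anticipated effect size d = 0.4, alpha = 0.05, desired power = 0.80)
n_per_group = TTestIndPower().solve_power(effect_size=0.4, alpha=0.05,
                                          power=0.80, alternative="two-sided")
print(f"Plan for roughly {math.ceil(n_per_group)} respondents per group")
```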