1-9 of 9 Results for:

  • Research Methods
  • Organization Theory

Article

Citizen Science and Crowd Science  

Marion K. Poetz and Henry Sauermann

Citizen science and crowd science (CS) projects involve members of the public who participate in response to an open call and who can perform a broad range of research tasks. Scholars using the citizen science lens focus on the fact that many participants do not have formal scientific training, while scholars using the crowd science lens emphasize that participants are often recruited through an open call. CS projects have resulted in large-scale data sets, novel discoveries, and top-tier publications (i.e., scientific impact), but they can also have large societal and practical impacts by increasing the relevance of research or accomplishing other objectives such as science education and building awareness. The diverse landscape of CS projects reflects five underlying paradigms that capture different rationales for involving crowds and that require different organizational setups: crowd volume, broadcast search, user crowds, community production, and crowd wisdom. Within each CS project, the breadth of crowd involvement can be mapped along stages of the research process (e.g., formulating research questions, designing methods, collecting data). Within each stage, the depth of crowd involvement can be mapped with respect to four general types of contributions: activities, knowledge, resources, and decisions. Common challenges of CS projects relate to recruiting and engaging participants, organizational design, resource requirements, and ensuring the quality of contributions. Opportunities for future research include research on the costs and boundary conditions of CS as well as systematic assessments of different aspects of performance and how they relate to project characteristics. Future research should also investigate the role of artificial intelligence both as a worker that can take over tasks from crowd members and as a manager that can help organize CS activities.

Article

Constructs and Measures in Stakeholder Management Research  

James Mattingly and Nicholas Bailey

Stakeholder strategies, or firms’ approaches to stakeholder management, may have a significant impact on firms’ long-term prosperity and, thereby, on their life chances, as established in the stakeholder view of the firm. A systematic literature review surveyed the contemporary body of quantitative empirical research that has examined firm-level activities relevant to stakeholder management, corporate social responsibility, and corporate social performance, because these three constructs are often conflated in the literature. A search uncovered 99 articles published in 22 journals during the 10-year period from 2010 to 2019. Most studies employed databases reporting environmental, social, and governance (ESG) ratings, originally created for use in socially responsible investing and corporate risk assessment, but others employed content analysis of texts and primary surveys. Examination revealed a key difference in the scoring of data, in that some studies aggregated numerous indicators into a single composite index to indicate levels of stakeholder management, and other studies scored more articulated constructs. Articulated constructs provided richer observations, including governance and structural arrangements most likely to provide both stakeholder benefits and protections. Also observed were constraining influences of managerial and market myopia, sustaining influences from resilience and complexity frameworks, and recognition that contextual variables act as contingencies that shape the efficacy of stakeholder management strategies.

Article

Content and Text Analysis Methods for Organizational Research  

Rhonda K. Reger and Paula A. Kincaid

Content analysis is to words (and other unstructured data) as statistics is to numbers (also called structured data)—an umbrella term encompassing a range of analytic techniques. Content analyses range from purely qualitative analyses, often used in grounded theorizing and case-based research to reduce interview data into theoretically meaningful categories, to highly quantitative analyses that use concept dictionaries to convert words and phrases into numerical tables for further quantitative analysis. Common specialized types of qualitative content analysis include methods associated with grounded theorizing, narrative analysis, discourse analysis, rhetorical analysis, semiotic analysis, interpretative phenomenological analysis, and conversation analysis. Major quantitative content analyses include dictionary-based approaches, topic modeling, and natural language processing. Though specific steps for specific types of content analysis vary, a prototypical content analysis requires eight steps beginning with defining coding units and ending with assessing the trustworthiness, reliability, and validity of the overall coding. Furthermore, while most content analysis evaluates textual data, some studies also analyze visual data such as gestures, videos and pictures, and verbal data such as tone. Content analysis has several advantages over other data collection and analysis methods. Content analysis provides a flexible set of tools that are suitable for many research questions where quantitative data are unavailable. Many forms of content analysis provide a replicable methodology to access individual and collective structures and processes. Moreover, content analysis of documents and videos that organizational actors produce in the normal course of their work provides unobtrusive ways to study sociocognitive concepts and processes in context, and thus avoids some of the most serious concerns associated with other commonly used methods. 
Content analysis requires significant researcher judgment such that inadvertent biasing of results is a common concern. On balance, content analysis is a promising activity for the rigorous exploration of many important but difficult-to-study issues that are not easily studied via other methods. For these reasons, content analysis is burgeoning in business and management research as researchers seek to study complex and subtle phenomena.
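The dictionary-based approach described above can be illustrated with a minimal sketch. The two-category concept dictionary below is entirely hypothetical; published studies rely on validated concept dictionaries rather than hand-picked word lists.

```python
# Toy dictionary-based content analysis: convert unstructured text into
# numerical counts of theoretically meaningful categories.
# The dictionary below is hypothetical, for illustration only.
import re
from collections import Counter

CONCEPT_DICTIONARY = {
    "uncertainty": {"may", "might", "possibly", "unclear", "risk"},
    "confidence": {"will", "certain", "definitely", "assure", "clear"},
}

def code_text(text):
    """Score one document: count of words falling into each category."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for category, terms in CONCEPT_DICTIONARY.items():
        counts[category] = sum(1 for w in words if w in terms)
    return dict(counts)

doc = "The outcome is unclear and may pose a risk, but we will definitely adapt."
print(code_text(doc))  # {'uncertainty': 3, 'confidence': 2}
```

Scores produced this way would then feed into the further quantitative analysis the abstract mentions; the researcher-judgment concern applies here too, since the choice of dictionary terms biases the resulting counts.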

Article

Corporate Ethics  

Thomas Donaldson and Diana C. Robertson

Serious research into corporate ethics is nearly half a century old. Two approaches have dominated research; one is normative, the other empirical. The normative approach develops theories and norms that are prescriptive, that is, ones that are designed to guide corporate behavior. The empirical approach investigates the character and causes of corporate behavior by examining corporate governance structures, policies, corporate relationships, and managerial behavior with the aim of explaining and predicting corporate behavior. Normative research has been led by scholars in the fields of moral philosophy, theology, and legal theory. Empirical research has been led by scholars in the fields of sociology, psychology, economics, marketing, finance, and management. While utilizing distinct methods, the two approaches are symbiotic. Ethical and legal theory are irrelevant without factual context. Similarly, empirical theories are sterile unless translated into corporate guidance. The following description of the history of research in corporate ethics demonstrates that normative research methods are indispensable tools for empirical inquiry, even as empirical methods are indispensable tools for normative inquiry.

Article

Experiments in Organization and Management Research  

Alex Bitektine, Jeff Lucas, Oliver Schilke, and Brad Aeon

Experiments randomly assign actors (e.g., people, groups, and organizations) to different conditions and assess the effects on a dependent variable. Random assignment allows for the control of extraneous factors and the isolation of causal effects, making experiments especially valuable for testing theorized processes. Although experiments have long remained underused in organizational theory and management research, the popularity of experimental methods has seen rapid growth in the 21st century. Gatekeepers sometimes criticize experiments for lacking generalizability, citing their artificial settings or non-representative samples. To address this criticism, a distinction is drawn between an applied research logic and a fundamental research logic. In an applied research logic, experimentalists design a study with the goal of generalizing findings to specific settings or populations. In a fundamental research logic, by contrast, experimentalists seek to design studies relevant to a theory or a fundamental mechanism rather than to specific contexts. Accordingly, the issue of generalizability boils down not so much to whether an experiment is generalizable as to whether the research design matches the research logic of the study. If the goal is to test theory (i.e., a fundamental research logic), then asking the question of whether the experiment generalizes to certain settings and populations is largely irrelevant.
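The logic of random assignment can be shown in a small simulation under stated assumptions: an unobserved extraneous factor influences the outcome, but because randomization balances that factor across conditions in expectation, the difference in group means recovers the causal effect. The variable names and the true effect size of 1.0 are hypothetical.

```python
# Simulation of random assignment: an unmeasured extraneous factor affects
# the outcome, yet randomization balances it across conditions in expectation,
# so the treatment-minus-control difference in means estimates the causal effect.
import random
import statistics

random.seed(42)
TRUE_EFFECT = 1.0  # hypothetical causal effect of the manipulation

def run_experiment(n=2000):
    treated, control = [], []
    for _ in range(n):
        extraneous = random.gauss(0, 1)                # unmeasured individual difference
        in_treatment = random.random() < 0.5           # random assignment
        outcome = extraneous + (TRUE_EFFECT if in_treatment else 0.0)
        (treated if in_treatment else control).append(outcome)
    return statistics.mean(treated) - statistics.mean(control)

estimate = run_experiment()
print(round(estimate, 2))  # close to the true effect of 1.0
```

Rerunning the simulation with different seeds shows the estimate varying around the true effect, which is exactly the sense in which random assignment isolates causality without measuring the extraneous factor.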

Article

Institutional Logics  

Heather A. Haveman and Gillian Gualtieri

Research on institutional logics examines systems of cultural elements (values, beliefs, and normative expectations) by which people, groups, and organizations make sense of and evaluate their everyday activities, and organize those activities in time and space. Although there were scattered mentions of this concept before 1990, this literature really began with the 1991 publication of a theory piece by Roger Friedland and Robert Alford. Since that time, it has become a large and diverse area of organizational research. Several books and thousands of papers and book chapters have been published on this topic, addressing institutional logics in sites as different as climate change proceedings of the United Nations, local banks in the United States, and business groups in Taiwan. Several intellectual precursors to institutional logics provide a detailed explanation of the concept and the theory surrounding it. These literatures developed over time within the broader framework of theory and empirical work in sociology, political science, and anthropology. Papers published in ten major sociology and management journals in the United States and Europe (between 1990 and 2015) provide analysis and help to identify trends in theoretical development and empirical findings.
Evaluating these trends suggests three gentle corrections and potentially useful extensions to the literature that can help guide future research: (1) limiting the definition of institutional logic to cultural-cognitive phenomena, rather than including material phenomena; (2) recognizing both “cold” (purely rational) cognition and “hot” (emotion-laden) cognition; and (3) developing and testing a theory (or multiple related theories), meaning a logically interconnected set of propositions concerning a delimited set of social phenomena, derived from assumptions about essential facts (axioms), that details causal mechanisms and yields empirically testable (falsifiable) hypotheses. This third correction requires being more consistent about how concepts are used in theoretical statements, assessing the reliability and validity of empirical measures, and conducting meta-analyses of the many published inductive studies in order to develop deductive theories.

Article

Longitudinal Designs for Organizational Research  

James M. Diefendorff, Faith Lee, and Daniel Hynes

Longitudinal research involves collecting data from the same entities on two or more occasions. Almost all organizational theories outline a longitudinal process in which one or more variables cause a subsequent change in other variables. However, the majority of empirical studies rely on research designs that do not allow for the proper assessment of change over time or the isolation of causal effects. Longitudinal research begins with longitudinal theorizing. With this in mind, a variety of time-based theoretical concepts are helpful for conceptualizing how a variable is expected to change. This includes when variables are expected to change, the form or shape of the change, and how big the change is expected to be. To aid in the development of causal hypotheses, researchers should consider the history of the independent and dependent variables (i.e., how they may have been changing before the causal effect is examined), the causal lag between the variables (i.e., how long it takes for the dependent variable to start changing as a result of the independent variable), as well as the permanence, magnitude, and rate of the hypothesized change in the dependent variable. After hypotheses have been formulated, researchers can choose among various research designs, including experimental, concurrent or lagged correlational, or time series. Experimental designs are best suited for inferring causality, while time series designs are best suited for capturing the specific timing and form of change. Lagged correlation designs are useful for examining the direction and magnitude of change in a variable between measurements. Concurrent correlational designs are the weakest for inferring change or causality. Theory should dictate the choice of design, and designs can be modified and/or combined as needed to address the research question(s) at hand. 
Next, researchers should pay attention to their sample selection, the operationalization of constructs, and the frequency and timing of measures. The selected sample must be expected to experience the theorized change, and measures should be gathered as often as is necessary to represent the theorized change process (i.e., when the change occurs, how long it takes to unfold, and how long it lasts). Experimental manipulations should be strong enough to produce theorized effects, and measured variables should be sensitive enough to capture meaningful differences between individuals and also within individuals over time. Finally, the analytic approach should be chosen based on the research design and hypotheses. Analyses can range from t-tests and analyses of variance for experimental designs, to correlation and regression for lagged and concurrent designs, to a variety of advanced analyses for time series designs, including latent growth curve modeling, coupled latent growth curve modeling, cross-lagged modeling, and latent change score modeling. A point worth noting is that researchers sometimes label research designs by the statistical analysis commonly paired with the design. However, data generated from a particular design can often be analyzed using a variety of statistical procedures, so it is important to clearly distinguish the research design from the analytic approach.
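The simplest of the designs above, a two-wave lagged correlational design, can be sketched as follows. The panel data are hypothetical, and a real study would also control for the dependent variable at time 1 (the cross-lagged modeling mentioned above) rather than report a raw lagged correlation.

```python
# Sketch of a two-wave lagged correlational analysis on hypothetical data:
# correlate X measured at time 1 with Y measured at time 2 for the same entities.
import statistics

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length samples."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

# Hypothetical panel: each position is one person measured on two occasions.
x_time1 = [2.0, 3.5, 1.0, 4.0, 2.5, 3.0]  # independent variable at wave 1
y_time2 = [2.2, 3.8, 1.5, 4.5, 2.4, 3.1]  # dependent variable at wave 2

print(round(pearson_r(x_time1, y_time2), 2))  # 0.98
```

As the abstract notes, the same data could be analyzed with other procedures (e.g., regression with wave-1 controls), which is why the design and the analysis should be kept conceptually distinct.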

Article

Qualitative Comparative Analysis in Business and Management Research  

Johannes Meuer and Peer C. Fiss

During the last decade, qualitative comparative analysis (QCA) has become an increasingly popular research approach in the management and business literature. As an approach, QCA consists of both a set of analytical techniques and a conceptual perspective, and the origins of QCA as an analytical technique lie outside the management and business literature. In the 1980s, Charles Ragin, a sociologist and political scientist, developed a systematic, comparative methodology as an alternative to qualitative, case-oriented approaches and to quantitative, variable-oriented approaches. Whereas the analytical technique of QCA was developed outside the management literature, the conceptual perspective underlying QCA has a long history in the management literature, in particular in the form of contingency and configurational theory that have played an important role in management theories since the late 1960s. Until the 2000s, management researchers only sporadically used QCA as an analytical technique. Between 2007 and 2008, a series of seminal articles in leading management journals laid the conceptual, methodological, and empirical foundations for QCA as a promising research approach in business and management. These articles led to a “first” wave of QCA research in management. During the first wave—occurring between approximately 2008 and 2014—researchers successfully published QCA-based studies in leading management journals and triggered important methodological debates, ultimately leading to a revival of the configurational perspective in the management literature. Following the first wave, a “second” wave—between 2014 and 2018—saw a rapid increase in QCA publications across several subfields in management research, the development of methodological applications of QCA, and an expansion of scholarly debates around the nature, opportunities, and future of QCA as a research approach. 
The second wave of QCA research in business and management concluded with researchers’ taking stock of the plethora of empirical studies using QCA for identifying best practice guidelines and advocating for the rise of a “neo-configurational” perspective, a perspective drawing on set-theoretic logic, causal complexity, and counterfactual analysis. Nowadays, QCA is an established approach in some research areas (e.g., organization theory, strategic management) and is diffusing into several adjacent areas (e.g., entrepreneurship, marketing, and accounting), a situation that promises new opportunities for advancing the analytical technique of QCA as well as configurational thinking and theorizing in the business and management literature. To advance the analytical foundations of QCA, researchers may, for example, advance robustness tests for QCA or focus on issues of endogeneity and omitted variables in QCA. To advance the conceptual foundations of QCA, researchers may, for example, clarify the links between configurational theory and related theoretical perspectives, such as systems theory or complexity theory, or develop theories on the temporal dynamics of configurations and configurational change. Ultimately, after a decade of growing use and interest in QCA and given the unique strengths of this approach for addressing questions relevant to management research, QCA will continue to influence research in business and management.
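The set-theoretic core of QCA can be sketched as the first analytical step of a crisp-set analysis: grouping cases into truth-table rows (configurations of conditions) and computing each row's consistency with the outcome. The conditions, cases, and outcome below are hypothetical, and real applications add calibration, consistency thresholds, and Boolean minimization (typically via dedicated software such as the R QCA package).

```python
# Sketch of the truth-table step of crisp-set QCA on hypothetical cases:
# each case has 0/1 memberships in three conditions plus a 0/1 outcome.
from collections import defaultdict

CONDITIONS = ("large_firm", "diversified", "centralized")  # hypothetical conditions

cases = [
    ({"large_firm": 1, "diversified": 1, "centralized": 0}, 1),
    ({"large_firm": 1, "diversified": 1, "centralized": 0}, 1),
    ({"large_firm": 0, "diversified": 1, "centralized": 1}, 0),
    ({"large_firm": 1, "diversified": 0, "centralized": 1}, 1),
    ({"large_firm": 0, "diversified": 0, "centralized": 1}, 0),
]

def truth_table(cases):
    """Map each observed configuration to its consistency (share of positive outcomes)."""
    rows = defaultdict(list)
    for memberships, outcome in cases:
        config = tuple(memberships[c] for c in CONDITIONS)
        rows[config].append(outcome)
    return {config: sum(o) / len(o) for config, o in rows.items()}

for config, consistency in sorted(truth_table(cases).items()):
    print(config, consistency)
```

Configurations with high consistency would then be logically minimized into the parsimonious causal recipes that configurational theorizing is after; this sketch stops at the point where causal complexity becomes visible in the data.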

Article

Qualitative Designs and Methodologies for Business, Management, and Organizational Research  

Robert P. Gephart and Rohny Saylors

Qualitative research designs provide future-oriented plans for undertaking research. Designs should describe how to effectively address and answer a specific research question using qualitative data and qualitative analysis techniques. Designs connect research objectives to observations, data, methods, interpretations, and research outcomes. Qualitative research designs focus initially on collecting data to provide a naturalistic view of social phenomena and understand the meaning the social world holds from the point of view of social actors in real settings. The outcomes of qualitative research designs are situated narratives of people’s activities in real settings, reasoned explanations of behavior, discoveries of new phenomena, and the creation and testing of theories. A three-level framework can be used to describe the layers of qualitative research design and conceptualize its multifaceted nature. Note, however, that qualitative research is a flexible rather than fixed process, unlike conventional positivist research designs that are unchanged after data collection commences. Flexibility provides qualitative research with the capacity to alter foci during the research process and make new and emerging discoveries. The first or methods layer of the research design process uses social science methods to rigorously describe organizational phenomena and provide evidence that is useful for explaining phenomena and developing theory. Description is done using empirical research methods for data collection including case studies, interviews, participant observation, ethnography, and collection of texts, records, and documents.
The second or methodological layer of research design offers three formal logical strategies to analyze data and address research questions: (a) induction to answer descriptive “what” questions; (b) deduction and hypothesis testing to address theory-oriented “why” questions; and (c) abduction to understand questions about what, how, and why phenomena occur. The third or social science paradigm layer of research design is formed by broad social science traditions and approaches that reflect distinct theoretical epistemologies—theories of knowledge—and diverse empirical research practices. These perspectives include positivism, interpretive induction, and interpretive abduction (interpretive science). There are also scholarly research perspectives that reflect on and challenge or seek to change management thinking and practice, rather than producing rigorous empirical research or evidence-based findings. These perspectives include critical research, postmodern research, and organization development. Three additional issues are important to future qualitative research designs. First, there is renewed interest in the value of covert research undertaken without the informed consent of participants. Second, there is an ongoing discussion of the best style to use for reporting qualitative research. Third, there are new ways to integrate qualitative and quantitative data. Such integration is needed to better address the interplay of qualitative and quantitative phenomena found together in everyday discourse, an interplay that has been overlooked.