1-20 of 20 Results

  • Keywords: research

Article

Don H. Kluemper

The use of surveys is prevalent in academic research in general, and particularly in business and management. As an example, self-report surveys alone are the most common data source in the social sciences. Survey design, however, involves a wide range of methodological decisions, each with its own strengths, limitations, and trade-offs. There is a broad set of issues associated with survey design, ranging from overarching strategic concerns to nuanced methodological and design alternatives. Further, decision points associated with survey design involve a series of trade-offs, as the strengths of a particular approach might come with inherent weaknesses. Surveys are couched within a broader scientific research process. First and foremost, the problem being studied should have sufficient impact, should be driven by a strong theoretical rationale, should employ rigorous research methods and design appropriate to test the theory, and should use appropriate analyses and best practices such that there is confidence in the scientific rigor of any given study and thus in its results. Best practice requires balancing a range of methodological concerns and trade-offs that relate to the development of robust survey designs, including making causal inferences; internal, external, and ecological validity; common method variance; choice of data sources; multilevel issues; measure selection, modification, and development; appropriate use of control variables; conducting power analysis; and methods of administration. There are salient concerns regarding the administration of surveys, including increasing response rates as well as minimizing responses that are careless and/or reflect social desirability. Finally, decision points arise after surveys are administered, including missing data, organization of research materials, questionable research practices, and statistical considerations. A comprehensive understanding of this array of interrelated survey design issues associated with theory, study design, implementation, and analysis enhances scientific rigor.

Article

James A. Muncy and Alice M. Muncy

Business research is conducted by both businesspeople, who have informational needs, and scholars, whose field of study is business. Though some of the specifics as to how research is conducted differ between scholarly research and applied research, the general process they follow is the same. Business research is conducted in five stages. The first stage is problem formation, where the objectives of the research are established. The second stage is research design. In this stage, the researcher identifies the variables of interest and possible relationships among those variables, decides on the appropriate data source and measurement approach, and plans the sampling methodology. It is also within the research design stage that the role time will play in the study is determined. The third stage is data collection. Researchers must decide whether to outsource the data collection process or collect the data themselves. Also, data quality issues must be addressed during collection. The fourth stage is data analysis. The data must be prepared and cleaned. Statistical packages or programs such as SAS, SPSS, Stata, and R are used to analyze quantitative data. In the case of qualitative data, coding, artificial intelligence, and/or interpretive analysis is employed. The fifth stage is the presentation of results. In applied business research, the results are typically limited in their distribution and must address the immediate problem at hand. In scholarly business research, the results are intended to be widely distributed through journals, books, and conferences. As a means of quality control, scholarly research usually goes through a double-blind review process before it is published.
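
The preparation and cleaning work in stage four can be made concrete. A minimal sketch in Python with pandas, one of several suitable tools alongside the packages named above; the column names and values are hypothetical:

```python
# Sketch of the data preparation and cleaning step (stage four), assuming
# pandas; the column names and values are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "respondent_id": [101, 102, 102, None, 104],
    "age": ["34", "29", "29", "41", "abc"],
    "satisfaction": [4, 5, 5, 3, 2],
})

df = df.drop_duplicates()                              # remove duplicate submissions
df = df.dropna(subset=["respondent_id"])               # drop rows missing the key ID
df["age"] = pd.to_numeric(df["age"], errors="coerce")  # coerce malformed entries to NaN
df = df[df["age"].between(18, 99)]                     # keep plausible values only
print(df)
```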

Article

Thomas Greckhamer and Sebnem Cilesiz

Qualitative research is an umbrella term that is typically used in contrast to quantitative research and captures research approaches that predominantly rely on collecting and analyzing qualitative data (i.e., data in the form of words, still or moving images, and artifacts). Qualitative research encompasses a wide range of research approaches with different philosophical and theoretical foundations and empirical procedures. Different assumptions about reality and knowledge underlying these diverse approaches guide researchers with respect to epistemological and methodological questions and inform their choices regarding research questions, data collection, data analysis, and the writing of research accounts. While at present a few dominant approaches are commonly used by researchers, a rich repertoire of qualitative approaches is available to management researchers that has the potential to facilitate deeper and broader insights into management phenomena.

Article

Intersectionality is a critical framework that provides us with the mindset and language for examining interconnections and interdependencies between social categories and systems. Intersectionality is relevant for researchers and for practitioners because it enhances analytical sophistication and offers theoretical explanations of the ways in which heterogeneous members of specific groups (such as women) might experience the workplace differently depending on their ethnicity, sexual orientation, and/or class and other social locations. Sensitivity to such differences enhances insight into issues of social justice and inequality in organizations and other institutions, thus maximizing the chance of social change. The concept of intersectional locations emerged from the racialized experiences of minority ethnic women in the United States. Intersectional thinking has gained increased prominence in business and management studies, particularly in critical organization studies. A predominant focus in this field is on individual subjectivities at intersectional locations (such as examining the occupational identities of minority ethnic women). This emphasis on individuals’ experiences and within-group differences has been described variously as “content specialization” or an “intracategorical approach.” An alternate focus in business and management studies is on highlighting systematic dynamics of power. This encompasses a focus on “systemic intersectionality” and an “intercategorical approach.” Here, scholars examine multiple between-group differences, charting shifting configurations of inequality along various dimensions. As a critical theory, intersectionality conceptualizes knowledge as situated, contextual, relational, and reflective of political and economic power. Intersectionality tends to be associated with qualitative research methods due to the central role of giving voice, elicited through focus groups, narrative interviews, action research, and observations. Intersectionality is also utilized as a methodological tool for conducting qualitative research, such as by researchers adopting an intersectional reflexivity mindset. Intersectionality is also increasingly associated with quantitative and statistical methods, which contribute to intersectionality by helping us understand and interpret the individual and combined (additive or multiplicative) effects of various categories (privileged and disadvantaged) in a given context. Future considerations for intersectionality theory and practice include managing its broad applicability while attending to its sociopolitical and emancipatory aims, and theoretically advancing understanding of the simultaneous forces of privilege and penalty in the workplace.

Article

Sampling refers to the process used to identify and select cases for analysis (i.e., a sample) with the goal of drawing meaningful research conclusions. Sampling is integral to the overall research process, as it has substantial implications for the quality of research findings. Inappropriate sampling techniques can lead to problems of interpretation, such as drawing invalid conclusions about a population. Whereas sampling in quantitative research focuses on maximizing the statistical representativeness of a population by a chosen sample, sampling in qualitative research generally focuses on the complete representation of a phenomenon of interest. Because of this core difference in purpose, many sampling considerations differ between qualitative and quantitative approaches despite a shared general purpose: careful selection of cases to maximize the validity of conclusions. Achieving generalizability, the extent to which observed effects from one study can be used to predict the same and similar effects in different contexts, drives most quantitative research. Obtaining a representative sample with characteristics that reflect a targeted population is critical to making accurate statistical inferences, which is core to such research. Such samples are best acquired through probability sampling, a procedure in which all members of the target population have a known and random chance of being selected. However, probability sampling techniques are uncommon in modern quantitative research because of practical constraints; non-probability sampling, such as sampling by convenience, is now normative. When sampling this way, special attention should be given to the statistical implications of issues such as range restriction and omitted variable bias. In either case, careful planning is required to estimate an appropriate sample size before the start of data collection, as sketched below. In contrast to generalizability, transferability, the degree to which study findings can be applied to other contexts, is the goal of most qualitative research. This approach is more concerned with providing information to readers and less concerned with making generalizable broad claims for readers. As in quantitative research, choosing a population and sample is critical in qualitative research, in this case to help readers determine the likelihood of transfer, although representativeness is not as crucial. Sample size determination in qualitative research differs drastically from that of quantitative research, because it should occur during data collection, as an ongoing process in search of saturation, which focuses on achieving theoretical completeness instead of maximizing the quality of statistical inference. Although quantitative and qualitative research have distinct statistical underpinnings that should drive different sampling requirements, in practice both rely heavily on non-probability samples, and the implications of non-probability sampling are often not well understood. Although non-probability samples do not automatically generate poor-quality data, incomplete consideration of case selection strategy can harm the validity of research conclusions. The nature and number of cases collected must be determined cautiously to respect research goals and the underlying scientific paradigm employed. Understanding the commonalities and differences in sampling between quantitative and qualitative research can help researchers better identify high-quality research designs across paradigms.
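
For the quantitative case, estimating an appropriate sample size before data collection typically takes the form of a prospective power analysis. A minimal sketch in Python, assuming the statsmodels package; the inputs (a medium expected effect of d = 0.5, a .05 alpha, and .80 power) are illustrative only:

```python
# Prospective power analysis for a two-group comparison (illustrative inputs).
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,          # assumed standardized mean difference (Cohen's d)
    alpha=0.05,               # two-tailed Type I error rate
    power=0.80,               # desired probability of detecting the effect
    alternative="two-sided",
)
print(f"Required sample size per group: {n_per_group:.0f}")  # ~64
```

The same call can instead solve for power or for the detectable effect size given a fixed sample, which is useful when the sample is constrained by practical considerations.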

Article

Eric Volmar and Kathleen M. Eisenhardt

Theory building from case studies is a research strategy that combines grounded theory building with case studies. Its purpose is to develop novel, accurate, parsimonious, and robust theory that emerges from and is grounded in data. Case research is well-suited to address “big picture” theoretical gaps and dilemmas, particularly when existing theory is inadequate. Further, this research strategy is particularly useful for answering questions of “how” through its deep and longitudinal immersion in a focal phenomenon. The process of conducting case study research includes a thorough literature review to identify an appropriate and compelling research question, a rigorous study design that involves artful theoretical sampling, rich and complete data collection from multiple sources, and a creative yet systematic grounded theory building process to analyze the cases and build emergent theory about significant phenomena. Rigorous theory building case research is fundamentally centered on strong emergent theory with precise theoretical logic and robust grounding in empirical data. Not surprisingly then, theory building case research is disproportionately represented among the most highly cited and award-winning research.

Article

Guclu Atinc and Marcia J. Simmering

The use of control variables to improve inferences about statistical relationships in data is ubiquitous in management research. In both the micro- and macro-subfields of management, control variables are included to remove confounding variance and provide researchers with an enhanced ability to interpret findings. Scholars have explored the theoretical underpinnings and statistical effects of including control variables in a variety of statistical analyses. Further, a robust literature surrounding best practices for their use and reporting exists. Specifically, researchers have been directed to report more detailed information in manuscripts regarding the theoretical rationale for the use of control variables, their measurement, and their inclusion in statistical analysis. Moreover, recent research indicates the value of removing control variables in many cases. Although there is evidence that articles recommending best practices for control variable use are increasingly being cited, there is still a lag in researchers following those recommendations. Finally, there are avenues for valuable future research on control variables.
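
The logic of removing confounding variance can be illustrated with a toy regression. A minimal sketch with simulated data, assuming statsmodels; the variable names and effect sizes are hypothetical, not drawn from any study:

```python
# Illustration of a control variable removing confounding variance
# (simulated data; names and effects are hypothetical).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 500
tenure = rng.normal(size=n)                       # confounder
satisfaction = 0.6 * tenure + rng.normal(size=n)  # predictor driven by confounder
performance = 0.5 * tenure + rng.normal(size=n)   # outcome; no true satisfaction effect

# Naive model: satisfaction appears to predict performance.
naive = sm.OLS(performance, sm.add_constant(satisfaction)).fit()

# Controlled model: including tenure removes the spurious association.
X = sm.add_constant(np.column_stack([satisfaction, tenure]))
controlled = sm.OLS(performance, X).fit()

print(naive.params)       # inflated satisfaction coefficient
print(controlled.params)  # satisfaction coefficient near zero
```

The contrast between the two sets of coefficients is the interpretive gain described above; it also hints at the flip side, since an ill-chosen control can strip away substantive rather than confounding variance.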

Article

Alex Bitektine, Jeff Lucas, Oliver Schilke, and Brad Aeon

Experiments randomly assign actors (e.g., people, groups, and organizations) to different conditions and assess the effects on a dependent variable. Random assignment allows for the control of extraneous factors and the isolation of causal effects, making experiments especially valuable for testing theorized processes. Although experiments have long remained underused in organizational theory and management research, the popularity of experimental methods has seen rapid growth in the 21st century. Gatekeepers sometimes criticize experiments for lacking generalizability, citing their artificial settings or non-representative samples. To address this criticism, a distinction is drawn between an applied research logic and a fundamental research logic. In an applied research logic, experimentalists design a study with the goal of generalizing findings to specific settings or populations. In a fundamental research logic, by contrast, experimentalists seek to design studies relevant to a theory or a fundamental mechanism rather than to specific contexts. Accordingly, the issue of generalizability does not so much boil down to whether an experiment is generalizable, but rather whether the research design matches the research logic of the study. If the goal is to test theory (i.e., a fundamental research logic), then asking the question of whether the experiment generalizes to certain settings and populations is largely irrelevant.
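
How random assignment isolates a causal effect can be shown in simulation. A minimal sketch in Python; the scenario and numbers are invented for illustration:

```python
# Random assignment making a simple mean difference causally interpretable
# (simulated data; the scenario and effect size are hypothetical).
import numpy as np

rng = np.random.default_rng(0)
n = 1000
motivation = rng.normal(size=n)      # extraneous individual difference

# Assignment is random, hence independent of motivation by construction.
treated = rng.random(n) < 0.5
outcome = 0.4 * treated + 0.7 * motivation + rng.normal(size=n)

# The raw mean difference recovers the true effect (~0.4) because
# randomization balances motivation across conditions.
effect = outcome[treated].mean() - outcome[~treated].mean()
print(f"Estimated treatment effect: {effect:.2f}")
```

No statistical control for motivation is needed; randomization handles the extraneous factor by design, which is the core appeal of experiments for testing theorized processes.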

Article

Ann Peng, Rebecca Mitchell, and John M. Schaubroeck

In recent years, scholars of abusive supervision have expanded the scope of outcomes examined and have advanced new psychological and social processes to account for these and other outcomes. Besides commonly used relational theories such as justice theory and social exchange theory, recent studies have more frequently drawn from theories about emotion to describe how abusive supervision influences the behavior, attitudes, and well-being of both the victims and the perpetrators. In addition, an increasing number of studies have examined the antecedents of abusive supervision. The studied antecedents include personality, behavioral, and situational characteristics of the supervisors and/or the subordinates. Studies have reported how characteristics of the supervisor and those of the focal victim interact to determine abuse frequency. Formerly postulated outcomes of abusive supervision (e.g., subordinate performance) have also been identified as antecedents of abusive supervision. This points to a need to model dynamic and mutually reciprocal processes between leader abusive behavior and follower responses with longitudinal data. Moreover, extending prior research that has focused exclusively on the victim’s perspective, scholars have started to take the supervisor’s perspective and the lens of third parties, such as victims’ coworkers, to understand the broad impact of abusive supervision. Finally, a small number of studies have started to model abusive supervision as a multilevel phenomenon. These studies have used a group-aggregated measure of abusive supervision, examining its influence as an antecedent of individual-level outcomes and as a moderator of relationships between individuals’ experiences of abusive supervision and personal outcomes. More research could be devoted to establishing the causal effects of abusive supervision and to developing organizational interventions to reduce it.

Article

Cristina Chaminade and Bengt-Åke Lundvall

This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Business and Management. Scientific advance and innovation are major sources of economic growth and are crucial for making social and environmental development sustainable. A critical question is whether private enterprises invest sufficiently in research and development and, if not, to what degree and how governments should engage in the support of science and innovation. While neoclassical economists point to market failure as the main rationale for innovation policy, evolutionary economists point to the role of government in building stronger innovation systems and creating wider opportunities for innovation. Research shows that the transmission mechanisms between scientific advance and innovation are complex and indirect. There are other equally important sources of innovation, including experience-based learning. Innovation is increasingly seen as a systemic process in which feedback from users needs to be taken into account when designing public policy. Science and innovation policy may aim at accelerating knowledge production along well-established trajectories or at giving new direction to the production and use of knowledge. It may be focused exclusively on economic growth, or it may give attention to the impact on social inclusion and the natural environment. An emerging topic is the extent to which national perspectives remain relevant in a globalizing learning economy facing multiple complex global challenges, including global warming. Scholars point to a movement toward transformative innovation policy and global knowledge sharing as a response to current challenges.

Article

Critical thinking is more than just fault-finding: it involves a range of thinking processes, including interpreting, analyzing, evaluating, inferencing, explaining, and self-regulating. The concept of critical thinking emerged from the field of education; however, it can, and should, be applied to other areas, particularly to research. Like most skills, critical thinking can be developed. However, critical thinking is also a mindset or a disposition that enables the consistent use and application of critical thought. Critical thinking is vital in business research because researchers are expected to demonstrate a systematic approach and cogency in the way they undertake and present their studies, especially if they are to be taken seriously and if prospective research users are to be persuaded by their findings. Critical thinking can be used in the key stages of many typical business research projects, specifically: the literature review; the use of inductive, deductive, and abductive reasoning and the relevant research design and methodology that follows; and the contribution to knowledge. Research is about understanding and explaining phenomena, which is usually the starting point for solving a problem or taking advantage of an opportunity. However, to gain new insights (or to claim to), one needs to know what is already known, which is why many research projects start with a literature review. A literature review is a systematic way of searching and categorizing literature that helps to build researchers’ confidence that they have identified and recognized prevailing (explicit) knowledge relevant to the development of their research questions. In a literature review, it is the job of the researcher to examine ideas presented through critical thinking and to scrutinize the arguments of the authors. Critical thinking is also clearly crucial for effective reasoning. Reasoning is the way people rationalize and explain. However, in the context of research, the three generally accepted distinct forms of reasoning (inductive, deductive, and abductive) are better understood as specific approaches that shape how the literature, research questions, methods, and findings all come together. Inductive reasoning makes an inference based on the evidence researchers have in their possession, extrapolating what may happen based on that evidence, and why. Deductive reasoning takes the form of a syllogism, an argument based on accepted premises, and involves choosing among alternative hypotheses. Finally, abductive reasoning starts with an outcome and works backward to understand how and why, collecting data that can subsequently be decoded for significance (i.e., Is the identified factor directly related to the outcome?) and clarified for meaning (i.e., How did it contribute to the outcome?). Also, critical thinking is crucial in the design of the research method, because it justifies the researchers’ plan and actions in collecting data that are credible, valid, and reliable. Finally, critical thinking also plays a role when researchers make arguments based on their research findings, ensuring that claims are grounded in the evidence and the procedures.

Article

Tom Hockaday and Andrea Piccaluga

University technology transfer (UTT) has been growing in importance for many decades and is of increasing importance to university leadership, university researchers, research funding agencies, and government policy makers. It is of interest to academic researchers in the fields of business management, economics, innovation, geography, and public policy. UTT is a subset of the broader field of technology transfer, and it involves the transfer of university research results from the university to business so that the business can invest in the development of products and services that benefit society. The research results can arise from any academic discipline, are not limited to a particular definition of technology, and can be transferred to existing and new for-profit and not-for-profit organizations. The core activity involves licensing patent applications and other intellectual property to existing companies and establishing new companies that raise investment finance to develop early-stage research outputs into new products and services. In recent decades, research universities have set up technology transfer offices (TTOs) to manage their UTT activities. TTOs adopt a project management approach to supporting university researchers who wish to transfer the results of their research to business. Project stages include identifying, evaluating, protecting, marketing, deal making, and post-deal management. TTOs are also involved in other activities, beyond patenting, licensing, and entrepreneurship, that generate positive impact on society. Measuring and evaluating UTT is a topic of continuing debate, with an early focus on activity metrics developing into a more sophisticated assessment of the impact of university research outputs on society. Current issues in UTT involve understanding the position of UTT in the broader area of research impact, as well as funding and organization models for UTT within a university. The COVID-19 global crisis has highlighted the importance of university research and its transfer to organizations that develop and deliver products and services that benefit society. It has further emphasized the importance of UTT as an activity in which much more needs to be researched and understood in order to maximize the benefits for society of all the activities performed by universities.

Article

Jason L. Huang and Zhonghao Wang

Careless responding, also known as insufficient effort responding, refers to survey/test respondents providing random, inattentive, or inconsistent answers to question items due to lack of effort in conforming to instructions, interpreting items, and/or providing accurate responses. Researchers often use these two terms interchangeably to describe deviant behaviors in survey/test responding that threaten data quality. Careless responding threatens the validity of research findings by introducing both random and systematic errors. Specifically, careless responding can reduce measurement reliability, while under specific circumstances it can also inflate the substantive relations between variables. Numerous factors can explain why careless responding happens (or does not happen), such as individual difference characteristics (e.g., conscientiousness), survey characteristics (e.g., survey length), and transient psychological states (e.g., positive and negative affect). To identify potential careless responding, researchers can use procedural detection methods and post hoc statistical methods. For example, researchers can insert detection items (e.g., infrequency items, instructed response items) into the questionnaire, monitor participants’ response time, and compute statistical indices, such as psychometric antonym/synonym, Mahalanobis distance, individual reliability, individual response variability, and model fit statistics. Applying multiple detection methods captures careless responding more reliably, because convergent evidence across indices strengthens identification. Comparing results based on data with and without careless respondents can help evaluate the degree to which the data are influenced by careless responding. To handle data contaminated by careless responding, researchers may choose to filter out identified careless respondents, recode careless responses as missing data, or include careless responding as a control variable in the analysis. To prevent careless responding, researchers have tried various deterrence methods developed from motivational and social interaction theories. These methods include giving warning, reward, or educational messages, proctoring the process of responding, and designing user-friendly surveys. Interest in careless responding has been growing not only in business and management but also in other related disciplines. Future research and practice on careless responding in business and management can also benefit from findings in other related disciplines.
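
As an illustration of one of the post hoc indices mentioned above, a minimal sketch of Mahalanobis distance screening in Python, assuming numpy and scipy; the simulated responses and the cutoff are illustrative only:

```python
# Post hoc screen for careless responding via Mahalanobis distance
# (simulated 1-5 Likert data; the cutoff choice is illustrative).
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)
n_respondents, n_items = 300, 10
responses = rng.integers(1, 6, size=(n_respondents, n_items)).astype(float)

diff = responses - responses.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(responses, rowvar=False))
d_squared = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)  # squared distances

# Flag respondents beyond the chi-square 99.9% quantile (df = number of items).
cutoff = chi2.ppf(0.999, df=n_items)
flagged = np.where(d_squared > cutoff)[0]
print(f"Flagged {flagged.size} potential careless respondents")
```

In line with the convergence point above, flagged cases would normally be triangulated against other indices (response time, instructed response items) rather than removed on this evidence alone.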

Article

Conducting credible and trustworthy research to inform managerial decisions is arguably the primary goal of business and management research. Research designs, particularly the various types of experimental designs available, are important building blocks for advancing toward this goal. Key criteria for evaluating research studies are internal validity (the ability to demonstrate causality), statistical conclusion validity (drawing correct conclusions from data), construct validity (the extent to which a study captures the phenomenon of interest), and external validity (the generalizability of results to other contexts). Perhaps most important, internal validity depends on the research design’s ability to establish that the hypothesized cause and outcome are correlated, that variation in them occurs in the correct temporal order, and that alternative explanations of that relationship can be ruled out. Research designs vary greatly, especially in their internal validity. Generally, experiments offer the strongest causal inference, because the causal variables of interest are manipulated by the researchers and because random assignment makes subjects comparable, such that the sources of variation in the variables of interest can be well identified. Natural experiments can exhibit similar internal validity to the extent that researchers are able to exploit exogenous events creating (quasi-)randomized interventions. When randomization is not available, quasi-experiments aim at approximating experiments by making subjects as comparable as possible based on the best available information. Finally, non-experiments, which are often the only option in business and management research, can still offer useful insights, particularly when changes in the variables of interest can be modeled by adopting longitudinal designs.

Article

Wayne Crawford and Esther Lamarre Jean

Structural equation modelling (SEM) is a family of models in which multivariate techniques are used to simultaneously examine complex relationships among variables. The goal of SEM is to evaluate the extent to which proposed relationships reflect the actual pattern of relationships present in the data. SEM users employ specialized software to develop a model, which then generates a model-implied covariance matrix. The model-implied covariance matrix is based on the user-defined theoretical model and represents the user’s beliefs about relationships among the variables. Guided by the user’s predefined constraints, SEM software employs a combination of factor analysis and regression to generate a set of parameters (often through maximum likelihood [ML] estimation) that produce the model-implied covariance matrix. Structural equation modelling capitalizes on the benefits of both factor analysis and path analytic techniques to address complex research questions. Structural equation modelling consists of six basic steps: model specification; identification; estimation; evaluation of model fit; model modification; and reporting of results. Conducting SEM analyses requires certain data considerations, as data-related problems are often the reason for software failures. These considerations include sample size, data screening for multivariate normality, examining outliers and multicollinearity, and assessing missing data. Furthermore, three notable issues SEM users might encounter are common method variance, subjectivity and transparency, and alternative model testing. First, analyzing common method variance requires recognizing three types of variance: common variance (variance shared with the factor); specific variance (reliable variance not explained by common factors); and error variance (unreliable and inexplicable variation in the variable). Second, SEM still lacks clear guidelines for the modelling process, which threatens replicability. Decisions are often subjective and based on the researcher’s preferences and knowledge of what is most appropriate for achieving the best overall model. Finally, reporting alternatives to the hypothesized model is another issue that SEM users should consider. When testing a hypothesized model, SEM users should consider alternative (nested) models derived by constraining or eliminating one or more paths in the hypothesized model. Alternative models offer several benefits; however, they should be driven and supported by existing theory. It is important for the researcher to clearly report and provide findings on the alternative model(s) tested. Common model-specific issues are often experienced by users of SEM. Heywood cases, nonidentification, and nonpositive definite matrices are among the most common. Heywood cases arise when negative variances or squared multiple correlations greater than 1.0 appear in the results; the researcher can resolve this by constraining the residual to a small plausible value. Nonpositive definite matrices result from linear dependencies and/or correlations greater than 1.0. To address this, researchers can attempt to ensure all indicator variables are independent, inspect output manually for negative residual variances, evaluate whether the sample size is appropriate, or re-specify the proposed model. When used properly, structural equation modelling is a powerful tool that allows for the simultaneous testing of complex models.
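
The model-implied covariance matrix at the center of this process can be made concrete. For a standard confirmatory factor model it takes the form Σ = ΛΦΛ′ + Θ, where Λ holds the factor loadings, Φ the factor variances and covariances, and Θ the residual variances. A minimal sketch in Python with numpy; all parameter values are illustrative, not estimates:

```python
# Model-implied covariance for a two-factor CFA: Sigma = L @ Phi @ L.T + Theta.
# All loadings and (co)variances are illustrative placeholders.
import numpy as np

L = np.array([          # loadings: six indicators on two factors
    [0.8, 0.0],
    [0.7, 0.0],
    [0.6, 0.0],
    [0.0, 0.8],
    [0.0, 0.7],
    [0.0, 0.6],
])
Phi = np.array([[1.0, 0.3],      # factor variances and covariance
                [0.3, 1.0]])
Theta = np.diag([0.36, 0.51, 0.64, 0.36, 0.51, 0.64])  # residual variances

Sigma_implied = L @ Phi @ L.T + Theta
print(np.round(Sigma_implied, 2))
```

Estimation then searches for the parameter values that minimize the discrepancy between this implied matrix and the sample covariance matrix; fit evaluation summarizes how small that discrepancy is.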

Article

Steven A. Stewart and Allen C. Amason

Since the earliest days of strategic management research, scholars have sought to measure and model the effects of top managers on organizational performance. A watershed moment in this effort came with the 1984 introduction of Hambrick and Mason’s upper echelon view and their contention that firms are a reflection of their top management teams (TMT). An explosion of research followed, and hundreds, if not thousands, of manuscripts have since been published on the subject. While a number of excellent reviews of this extensive literature exist, relatively few have asked questions about the overall state and future of the field. We undertook this assessment in an effort to answer some key questions. Are we still making progress on the big questions that gave rise to the upper echelon view, or have we reached a point of diminishing returns with this stream of research? If we are at an inflection point, what are the issues that should drive future inquiry about top management teams?

Article

Qualitative research designs provide future-oriented plans for undertaking research. Designs should describe how to effectively address and answer a specific research question using qualitative data and qualitative analysis techniques. Designs connect research objectives to observations, data, methods, interpretations, and research outcomes. Qualitative research designs focus initially on collecting data to provide a naturalistic view of social phenomena and to understand the meaning the social world holds from the point of view of social actors in real settings. The outcomes of qualitative research designs are situated narratives of people’s activities in real settings, reasoned explanations of behavior, discoveries of new phenomena, and the creation and testing of theories. A three-level framework can be used to describe the layers of qualitative research design and conceptualize its multifaceted nature. Note, however, that qualitative research is a flexible, not fixed, process, unlike conventional positivist research designs, which remain unchanged after data collection commences. Flexibility gives qualitative research the capacity to alter foci during the research process and to make new and emerging discoveries. The first, or methods, layer of the research design process uses social science methods to rigorously describe organizational phenomena and provide evidence that is useful for explaining phenomena and developing theory. Description is done using empirical research methods for data collection, including case studies, interviews, participant observation, ethnography, and collection of texts, records, and documents. The second, or methodological, layer of research design offers three formal logical strategies to analyze data and address research questions: (a) induction to answer descriptive “what” questions; (b) deduction and hypothesis testing to address theory-oriented “why” questions; and (c) abduction to understand questions about what, how, and why phenomena occur. The third, or social science paradigm, layer of research design is formed by broad social science traditions and approaches that reflect distinct theoretical epistemologies (theories of knowledge) and diverse empirical research practices. These perspectives include positivism, interpretive induction, and interpretive abduction (interpretive science). There are also scholarly research perspectives that reflect on and challenge, or seek to change, management thinking and practice, rather than producing rigorous empirical research or evidence-based findings. These perspectives include critical research, postmodern research, and organization development. Three additional issues are important to future qualitative research designs. First, there is renewed interest in the value of covert research undertaken without the informed consent of participants. Second, there is an ongoing discussion of the best style to use for reporting qualitative research. Third, there are new ways to integrate qualitative and quantitative data; these are needed to better address the interplay of qualitative and quantitative phenomena found in everyday discourse, an interplay that has been overlooked.

Article

Paulina Junni and Satu Teerikangas

There are many types of mergers and acquisitions (M&A), be they a minority acquisition to explore a potentially high-growth emerging market, a takeover of a financially distressed firm with the aim of turning it around, or a private equity buyout seeking short- to medium-term returns. The terms “merger” and “acquisition” are often used interchangeably, even though they have distinct denotations: In an acquisition, the acquirer purchases the majority of the shares (over 50%) of another company (the “target”) or parts of it (e.g., a business unit or a division). In a merger, a new company is formed in which the merging parties share broadly equal ownership. The term “merger” is often used strategically by acquirers to alleviate fears and send out a message of friendly combination to employees. In terms of transaction numbers, the majority of M&A transactions are acquisitions, whereas mega-merger deals gain media attention owing to their transaction size. While M&A motives, acquirer types, and dynamics differ, most M&A share the aim of generating value from the transaction in some form. Yet a prevalent dilemma in M&A practice and literature is that M&A often fail to deliver the envisioned benefits. Negative acquirer performance often stems from overestimating potential synergies and paying high premiums for targets pre-deal. Another problem lies in securing post-deal value creation. Post-deal challenges relate to optimal integration speed, the degree of integration, change or integration management, communication, resource and knowledge sharing, employee motivation and turnover, and cultural integration. Researchers are calling for more research on how pre-deal processes such as target evaluation and negotiations influence M&A performance. A closer look at this literature, though, highlights several controversies. First, the literature often lacks precision when it comes to defining M&A. We call for future research to be explicit concerning the type of merger or acquisition transaction and the organizational contexts of the acquiring and target firms. Second, we still lack robust and unified frameworks that explain M&A occurrence and performance. One reason is that the literature on M&A has developed in different disciplines, focusing on either pre- or post-deal aspects. This has resulted in a “silo” effect, with a limited understanding of the combined effects of financial, strategic, organizational, and cultural factors in the pre- and post-deal phases on M&A performance. Third, M&A studies have failed to critically scrutinize the M&A phenomenon, including aspects such as power, politics, and managerial drivers. Fourth, scholars have tended to focus on single, isolated M&A. We call for future research on M&A programs and on M&A as part of broader corporate strategies. Finally, the study of M&A has suffered from a managerial bias, with insufficient attention paid to the rank and file, such as engineers or marketing or administrative employees. We therefore call for future research that takes a broader view of the actors involved in M&A, placing greater emphasis on individuals’ roles and practices.

Article

Joel Koopman and Nikolaos Dimotakis

Experience sampling is a method aimed primarily at examining within-individual covariation of transient phenomena using repeated measures. It can be applied to test nuanced predictions of extant theories and can provide insights that are otherwise difficult to obtain. It does so by examining the phenomena of interest close to where they occur, thus avoiding issues with recall and similar concerns. Alternatively, data collected through the experience sampling method (ESM) can serve as highly reliable measurements for investigating between-individual phenomena. A number of decisions need to be made when designing an ESM study. Study duration and intensity (that is, total days of measurement and total assessments per day) represent a tradeoff between data richness and participant fatigue that needs to be carefully weighed. Other scheduling options need to be considered, such as triggered versus scheduled surveys. Researchers also need to be aware of the generally high potential cost of this approach, as well as the monetary and nonmonetary resources required. The intensity of this method also requires special consideration of the sample and the context. Proper screening is invaluable; ensuring that participants and their context are applicable and appropriate to the design is an important first step. The next step is ensuring that the surveys are planned in a way compatible with the sample, and that they are designed to appropriately and rigorously collect data that can be used to accomplish the aims of the study at hand. Furthermore, ESM data typically require careful thought about how the data will be analyzed and how results will be interpreted. Proper attention to analytic approaches (typically multilevel) is required. Finally, when interpreting results from ESM data, one must not forget that these effects typically represent processes that occur continuously across individuals’ working lives; effect sizes thus need to be considered with this in mind.
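
The typical analytic approach, a multilevel model with repeated surveys nested within persons, can be sketched minimally in Python, assuming statsmodels; the design and variables are hypothetical:

```python
# Random-intercept model for ESM data: signals nested within persons
# (simulated data; the design and variables are hypothetical).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n_persons, n_signals = 50, 10
person = np.repeat(np.arange(n_persons), n_signals)
intercepts = rng.normal(0, 1, n_persons)[person]   # stable between-person differences
stress = rng.normal(size=n_persons * n_signals)    # momentary predictor
mood = 3 + intercepts - 0.4 * stress + rng.normal(size=n_persons * n_signals)

data = pd.DataFrame({"person": person, "stress": stress, "mood": mood})

# The random intercept absorbs between-person differences, so the stress
# coefficient reflects within-person covariation.
fit = smf.mixedlm("mood ~ stress", data, groups=data["person"]).fit()
print(fit.params)
```

Separating within-person covariation from stable between-person differences in this way is precisely what distinguishes ESM analyses from conventional between-individual designs.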

Article

Alexander Bolinger and Mark Bolinger

There is currently great enthusiasm for entrepreneurship education and the economic benefits that entrepreneurial activity can generate for individuals, organizations, and communities. Beyond economic outcomes, however, there is a variety of social and emotional costs and benefits of engaging in entrepreneurship that may be neither evident to students nor emphasized in entrepreneurship courses. The socioemotional costs of entrepreneurship are consequential: on the one hand, entrepreneurs who pour their time and energy into new ventures can incur costs (e.g., ruptured personal and professional relationships, decreased life satisfaction and well-being, or strong negative reactions such as grief) that are often as personally disruptive and enduring as economic costs, or more so. On the other hand, the social and emotional benefits of an entrepreneurial lifestyle are often cited as intrinsically satisfying and as primary motivations for initiating and sustaining entrepreneurial activity. The socioemotional aspects of entrepreneurship are often poorly understood by students, but highlighting these hidden dimensions of entrepreneurial activity can inform their understanding and actions as prospective entrepreneurs. For instance, entrepreneurial passion, the experience of positive emotions as a function of engaging in activities that fulfill one’s entrepreneurial identity, and social capital, whereby entrepreneurs build meaningful relationships with co-owners, customers, suppliers, and other stakeholders, are two specific socioemotional benefits of entrepreneurship. There are also several potential socioemotional costs of entrepreneurial activity. For instance, entrepreneurship can involve negative emotional responses such as grief and lost identity from failure. Even when an entrepreneur does not fail, the stress of entrepreneurial activity can lead to sleep deprivation and disruptions to both personal and professional connections. Finally, entrepreneurs can identify so closely with their ventures and feel so invested in them that they experience counterproductive forms of obsessive passion that consume their identities and impair their well-being.