Giving Voice to Values (GVV) is a rehearsal- and case-based approach to business ethics education designed to develop moral competence, emphasizing self-assessment, peer coaching, and prescriptive ethics. It is built on the premise that many businesspeople want to act on their values but lack the know-how and experience to do so. The focus is on action rather than on developing ethical awareness or analytical constructs for determining what is right and the epistemology behind knowing that it is right, while acknowledging that existing, well-established approaches to these questions are also important. The GVV rubric for acting on one’s values is based on the following three questions: (1) What’s at stake? (2) What are the reasons and rationalizations you are trying to counter? and (3) What levers can be used to influence those who disagree? Taken together, the answers to these questions constitute a script for constructing a persuasive argument for effecting values-based change and an action plan for implementation. This approach is based on the idea, supported by research and experience, that pre-scripting and “rehearsal” can encourage action. GVV is meant to be complementary to traditional approaches to business ethics that focus on the methodology of moral judgment. GVV cases are post-decision-making in that they begin with a presumed right answer, and students are invited to engage in the “GVV Thought Experiment,” answering the questions: “What if you were going to act on this values-based position? How could you be effective?” This implies a shift in focus toward values-based action in ways that recognize the pressures of the business world. As a consequence of this shift, GVV addresses fundamental questions about what, to whom, and how business ethics is taught. The answers to these questions have led to widespread adoption of GVV in business schools, universities, corporations, and beyond.
Daniel G. Arce and Mary C. Gentile
Jason Kautz, M. Audrey Korsgaard, and Sophia So Young Jeong
Organizations and their agents regularly face ethical challenges as the interests of various constituents compete and conflict. The theory of other-orientation provides a useful framework for understanding how concern for others and modes of reasoning combine to produce different mindsets for approaching ethical challenges. To optimize outcomes across parties, individuals can engage in complex rational reasoning that addresses the interests of the self as well as others, a mindset referred to as collective rationality. But collective rationality is as difficult to sustain as it is cognitively taxing. Thus, individuals are apt to simplify their approach to complex conflicts of interest. One simplifying strategy is to reduce the relevant outcome set by focusing on self-interest to the neglect of other-interest. This approach, referred to as a rational self-interest mindset, is self-serving and can lead to actions that are deemed unethical. At the other extreme, individuals can abandon rational judgment in favor of choices based on heuristics, such as moral values that specify a given mode of prosocial behavior. Because this mindset, referred to as other-oriented, obviates consideration of outcomes for both the self and others, it can result in choices that harm the self as well as other organizational stakeholders. This raises the question: how does one maintain an other-interested focus while engaging in rational reasoning? The resolution of this question rests in the arousal of moral emotions. Moral emotions signal to the individual the opportunity to express, or the need to uphold, moral values. Given that moral values direct behavior that benefits others or society, they offset the tendency to focus on self-interest. At extreme levels of arousal, however, moral emotions may overwhelm cognitive resources and thus lead individuals to engage in heuristic rather than rational reasoning.
The effect of moral emotions is bounded by attendant emotions, as individuals are likely to experience multiple hedonic and moral emotions in the same situation. Deontic justice predicts that the arousal of moral emotions will lead individuals to retaliate in response to injustice, regardless of whether they experience personal benefit. However, evidence suggests that individuals may instead engage in self-protecting behavior, such as withdrawal, or self-serving behaviors, such as the contagion of unjust behavior. These alternative responses may be due to strong hedonic emotions, such as fear or schadenfreude (the pleasure derived from others’ misfortunes), overpowering one’s moral emotions. Future research on the arousal levels of moral emotions and the complex interplay of emotions in the decision-making process may provide beneficial insight into managing the competing interests of organizational stakeholders.
Andy El-Zayaty and Russell Coff
Many discussions of the creation and appropriation of value stop at the firm level. Imperfections in the market allow for a firm to gain competitive advantage, thereby appropriating rents from the market. What has often been overlooked is the continued process of appropriation within firms by parties ranging from shareholders to managers to employees. Porter’s “five forces” model and the resource-based view of the firm laid out the determinants of value creation at the firm level, but it was left to others to explore the onward distribution of that value. Many strategic management and strategic human capital scholars have explored the manner in which employees and managers use their bargaining power vis-à-vis the firm to appropriate value—sometimes in a manner that may not align with the interests of shareholders. In addition, cooperative game theorists provided unique insights into the way in which parties divide firm surplus among each other. Ultimately, the creation of value is merely the beginning of a complex, multiparty process of bargaining and competition for the rights to claim rents.
Kathryn Rudie Harrigan
Concerns regarding strategic flexibility arose from companies’ need to survive excess capacity and flagging sales in the face of previously unforeseen competitive conditions. Strategic flexibility became an organizational mandate for coping with changing competitive conditions, and managers learned to plan for inevitable restructurings. They learned to reposition assets and capabilities to suit their firms’ new strategic aspirations by overcoming barriers to change. Core rigidities flared up in the form of legacy costs, regulatory constraints, political animosity, and social resistance to adjusting firms’ strategic postures; managers learned that their firms’ past strategic choices could later become barriers to adapting corporate strategy. Managerial insights concerning how to modify firms’ resources changed the way in which those resources were subsequently regarded. Enterprises saw assets lose their relative productivity and value as mastery of specific knowledge became less germane to success. Managers recognized that their firms’ capabilities were mismatched to market or value-chain relationships. They struggled to adapt by overcoming barriers to change. Flexibility problems were inevitable. Even if competitive conditions were not impacted by exogenous change forces, sustaining advantage in a steady-state competitive arena became difficult; sustaining advantage in dynamic arenas became nearly impossible. Confronted with the difficulties of changing strategic postures, market orientations, and overall cost competitiveness, managers embraced the need to combat organizational rigidity in all aspects of their firms’ operations. Strategic flexibility affected enterprise assets, capabilities, and potential relationships with other parties within firms’ value-creating ecosystems; the need for strategic flexibility influenced investment choices made to escape organizational rigidity, capability traps, and other forms of previously unrecognized resource inflexibility.
Where entry barriers once protected a firm’s strategic posture, flexibility issues arose when the need for endogenous change occurred. The temporary protection afforded by imitation barriers slowed an organization’s responsiveness in changing its strategic imperatives—making the firm rigid when adaptiveness was needed instead. Firms’ own inertia sometimes created mobility barriers that had to be overcome when hypercompetitive conditions arose in their traditional market arenas and forced them to change how they competed. Where exogenous changes drove competitive conditions to become more volatile, attaining strategic flexibility required downsizing the scope of a firm’s activities, shutting down facilities, pruning product lines, reducing headcount, and eliminating redundancies—as typically occurred during an organizational turnaround—while simultaneously increasing the scope of external activities performed by an enterprise’s value-adding network of suppliers, distributors, value-added resellers, complementors, and alliance partners, among others. Such structural value-chain changes typically exacerbated pressures on the firm’s internal organization to search more broadly for value-adding innovations to renew products and processes in order to keep up with the accelerated pace of industry change. Exploratory processes of self-renewal forced confrontations with mobility or exit barriers that firms had long tolerated in order to avoid the painful process of their ultimate elimination. The sometimes surprising efforts by firms to avoid inflexibility included changes in the nature of firms’ asset investments, value-chain relationships, and human-resource practices. Strategic flexibility concerns often trumped the traditional strengths accorded to resource-based strategies.
Alex Murdock and Stephen Barber
What is the state of management in the third sector? At its heart, the discussion addresses the long-held assertion that these organizations are reluctant to accept the need for ‘management’. After all, what makes third sector organizations different, by their own estimation, from their commercial equivalents is the deeply embedded concepts of mission and values together with a distinctly complex stakeholder environment. For all that, there are also “commercial” pressures and an instinct for survival. Serving the mission necessarily requires resources, and there is a perennial tension in high-level decision-making between delivering the mission and maintaining solvency. Third-sector organizations, like any other, are innately concerned with their own sustainability. It is here that the analysis is located, and there is an opportunity to examine the topic theoretically and empirically. By introducing the concept of the “Management See-Saw” to illustrate the competing drivers of values and commercialism, and then exploring these pressures through the lens of three real-life vignettes, it is possible to appreciate the current state of play. Given all this, it is important for modern organizations to be able to measure value and impact. From a managerial perspective, it must be acknowledged that this environment is complex and multi-layered. In drawing the strands together, the discussion concludes by illustrating the importance of leadership in the sector, which is a powerful indicator of effectiveness. Nevertheless, with a focus on management, the core contention is that management remains undervalued in the third sector. That said, commercial focus can increasingly be identified, and the longer-term trend is squarely in this direction.
Rand R. Wilcox
Hypothesis testing is an approach to statistical inference that is routinely taught and used. It is based on a simple idea: develop some relevant speculation about the population of individuals or things under study and determine whether the data provide reasonably strong empirical evidence that the hypothesis is wrong. Consider, for example, two approaches to advertising a product. A study might be conducted to determine whether it is reasonable to assume that both approaches are equally effective. A Type I error is rejecting this speculation when in fact it is true. A Type II error is failing to reject when the speculation is false. A common practice is to test hypotheses with the Type I error probability set to 0.05 and to declare a statistically significant result if the hypothesis is rejected. There are various concerns about, limitations to, and criticisms of this approach. One criticism is the use of the term significant. Consider the goal of comparing the means of two populations of individuals. Saying that a result is significant suggests that the difference between the means is large and important. But in the context of hypothesis testing it merely means that there is empirical evidence that the means are not equal. Situations can and do arise where a result is declared significant, but the difference between the means is trivial and unimportant. Indeed, the goal of testing the hypothesis that two means are equal has been criticized on the grounds that surely the means differ at some decimal place. A simple way of dealing with this issue is to reformulate the goal: rather than testing for equality, determine whether it is reasonable to make a decision about which group has the larger mean.
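The gap between statistical significance and practical importance can be illustrated with a short simulation. The Python sketch below uses purely illustrative numbers (the sample sizes, means, and standard deviation are not from the text) and a normal-approximation two-sample test: with large samples, a practically trivial difference in means is still declared "significant."

```python
# Illustrative sketch: with large samples, a trivially small difference
# in means can still produce a "significant" result.
import math
import random
import statistics

random.seed(1)
n = 20_000
# Two hypothetical advertising approaches whose true means differ by 0.3,
# a small amount relative to a standard deviation of 5.
a = [random.gauss(100.0, 5.0) for _ in range(n)]
b = [random.gauss(100.3, 5.0) for _ in range(n)]

ma, mb = statistics.fmean(a), statistics.fmean(b)
va, vb = statistics.variance(a), statistics.variance(b)
# Normal-approximation test statistic for the difference in means.
z = (mb - ma) / math.sqrt(va / n + vb / n)
# Two-sided p-value from the standard normal distribution.
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(f"difference in means: {mb - ma:.3f} (sd = 5), p = {p:.2g}")
```

Here the hypothesis of equal means is rejected at the 0.05 level even though the difference is only a small fraction of a standard deviation, underscoring that "significant" need not mean "important."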
The components of hypothesis-testing techniques can be used to address this issue, with the understanding that the goal of testing some hypothesis has been replaced by the goal of determining whether a decision can be made about which group has the larger mean. Another aspect of hypothesis testing that has seen considerable criticism is the notion of a p-value. Suppose some hypothesis is rejected with the Type I error probability set to 0.05. This leaves open the issue of whether the hypothesis would be rejected with the Type I error probability set to 0.025 or 0.01. A p-value is the smallest Type I error probability for which the hypothesis is rejected. When comparing means, a p-value reflects the strength of the empirical evidence that a decision can be made about which group has the larger mean. A concern about p-values is that they are often misinterpreted. For example, a small p-value does not necessarily mean that a large or important difference exists. Another common mistake is to conclude that if the p-value is close to zero, there is a high probability of rejecting the hypothesis again if the study is replicated. The probability of rejecting again is a function of the extent to which the hypothesis is not true, among other things. Because a p-value does not directly reflect the extent to which the hypothesis is false, it does not provide a good indication of whether a second study will provide evidence to reject it. Confidence intervals are closely related to hypothesis-testing methods. Basically, they are intervals that contain unknown quantities with some specified probability. For example, a goal might be to compute an interval that contains the difference between two population means with probability 0.95. Confidence intervals can be used to determine whether some hypothesis should be rejected. Clearly, confidence intervals provide useful information not provided by testing hypotheses and computing a p-value.
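The duality between confidence intervals and tests can be made concrete. The following sketch uses hypothetical data and a normal-approximation interval (the 1.96 critical value and the samples are assumptions of the illustration, not prescriptions from the text): it computes a 0.95 confidence interval for the difference between two means together with the corresponding p-value, and checks that rejecting equality at the 0.05 level coincides with 0 lying outside the interval.

```python
# Illustrative sketch: a confidence interval and a p-value lead to the
# same decision about the hypothesis of equal means.
import math
import random
import statistics

random.seed(2)
# Hypothetical samples from two groups.
a = [random.gauss(10.0, 2.0) for _ in range(60)]
b = [random.gauss(11.0, 2.0) for _ in range(60)]

diff = statistics.fmean(b) - statistics.fmean(a)
se = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))

# Normal-approximation 0.95 confidence interval for the difference.
lo, hi = diff - 1.96 * se, diff + 1.96 * se
# Two-sided p-value: the smallest Type I error probability at which the
# hypothesis of equal means would be rejected.
z = diff / se
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

rejects = p < 0.05                     # decision based on the test
zero_outside = not (lo <= 0.0 <= hi)   # decision based on the interval
print(f"0.95 CI: ({lo:.2f}, {hi:.2f}), p = {p:.4f}")
```

The two decisions agree because both reduce to the same comparison of |z| against the critical value; the interval additionally conveys the magnitude and precision of the estimated difference.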
But an argument for a p-value is that it provides a perspective on the strength of the empirical evidence that a decision can be made about the relative magnitude of the parameters of interest. For example, to what extent is it reasonable to decide that the first of two groups has the larger mean? Even if a compelling argument can be made that p-values should be completely abandoned in favor of confidence intervals, there are situations where p-values provide a convenient way of developing reasonably accurate confidence intervals. Another argument against p-values is that because they are misinterpreted by some, they should not be used. But if this argument is accepted, it follows that confidence intervals should be abandoned as well, because they too are often misinterpreted. Classic hypothesis-testing methods for comparing means and studying associations assume sampling is from a normal distribution. A fundamental issue is whether nonnormality can be a source of practical concern. Based on hundreds of papers published during the last 50 years, the answer is an unequivocal Yes. Granted, there are situations where nonnormality is not a practical concern, but nonnormality can have a substantial negative impact on both Type I and Type II errors. Fortunately, there is a vast literature describing how to deal with known concerns. Results derived for hypothesis-testing methods have clear implications for methods aimed at computing confidence intervals as well. Nonnormal distributions that tend to generate outliers are one source of concern. There are effective methods for dealing with outliers, but technically sound techniques are not obvious based on standard training. Skewed distributions are another concern. The combination of what are called bootstrap methods and robust estimators provides techniques that are particularly effective for dealing with nonnormality and outliers.
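As a concrete illustration of that combination, the sketch below computes a percentile bootstrap confidence interval for a 20% trimmed mean, a robust estimator that limits the influence of outliers. The data, the trimming proportion, and the number of bootstrap resamples are all illustrative choices of this sketch, not prescriptions from the text.

```python
# Illustrative sketch: percentile bootstrap interval for a trimmed mean,
# applied to data containing a few gross outliers.
import random

random.seed(3)
# Roughly normal values plus three gross outliers.
data = [random.gauss(50.0, 3.0) for _ in range(40)] + [120.0, 150.0, 200.0]

def trimmed_mean(xs, prop=0.2):
    """Mean of xs after dropping the lowest and highest prop fraction."""
    xs = sorted(xs)
    g = int(prop * len(xs))
    kept = xs[g:len(xs) - g]
    return sum(kept) / len(kept)

def bootstrap_ci(xs, stat, n_boot=2000, alpha=0.05):
    """Percentile bootstrap confidence interval for stat applied to xs."""
    boot = sorted(
        stat([random.choice(xs) for _ in xs]) for _ in range(n_boot)
    )
    return boot[int(alpha / 2 * n_boot)], boot[int((1 - alpha / 2) * n_boot) - 1]

est = trimmed_mean(data)
lo, hi = bootstrap_ci(data, trimmed_mean)
print(f"20% trimmed mean: {est:.2f}, 0.95 bootstrap CI: ({lo:.2f}, {hi:.2f})")
```

Because the trimmed mean discards the extreme tails of each resample, the outliers have little effect on either the estimate or the interval, whereas an interval based on the ordinary sample mean would be shifted and widened by them.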
Classic methods for comparing means and studying associations also assume homoscedasticity. When comparing means, this is the assumption that the groups have the same variance even when their means differ. Violating this assumption can have serious negative consequences in terms of both Type I and Type II errors, particularly when the normality assumption is violated as well. There is a vast literature describing how to deal with this issue in a technically sound manner.
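A small simulation shows why violating homoscedasticity matters. The sketch below uses illustrative parameters (and, for simplicity, a normal critical value in place of the t distribution): it repeatedly applies the classic pooled-variance two-sample test in a situation where the means are equal but the smaller group has the larger variance, a configuration known to inflate the Type I error rate well above the nominal 0.05.

```python
# Illustrative sketch: empirical Type I error of the pooled-variance test
# under heteroscedasticity (unequal sample sizes, unequal variances).
import math
import random
import statistics

random.seed(4)

def reject_pooled(a, b):
    """Pooled-variance two-sample test at the nominal 0.05 level."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * statistics.variance(a) +
           (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    z = (statistics.fmean(a) - statistics.fmean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))
    return abs(z) > 1.96

# Both groups share the same mean (the hypothesis is true), but the
# smaller group has the larger variance -- a bad case for pooling.
n_sims = 2000
rejections = sum(
    reject_pooled(
        [random.gauss(0.0, 4.0) for _ in range(10)],
        [random.gauss(0.0, 1.0) for _ in range(40)],
    )
    for _ in range(n_sims)
)
print(f"empirical Type I error: {rejections / n_sims:.3f}  (nominal 0.05)")
```

Replacing the pooled standard error with one that estimates each group's variance separately (as in Welch-type methods) largely removes this inflation.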
The Swiss watch industry has enjoyed uncontested domination of the global market for more than two decades. Despite high costs and high wages, Switzerland is the home of most of the largest companies in this industry. Scholars in business history, economics, management studies, and other social sciences have focused on four major issues to explain such success. The first is product innovation, which has been viewed as one of the key determinants of competitiveness in the watch industry. Considerable attention has been focused on the development of electronic watches during the 1970s, as well as the emergence of new players in Japan and Hong Kong. Yet the rebirth of mechanical watches during the early 1990s as luxury accessories also can be characterized as a product innovation (in this case, linked to marketing strategy rather than pure technological innovation). Second, brand management has been a key instrument in changing the identity of Swiss watches, repositioning them as a luxury business. Various strategies have been adopted since the early 1990s to add value to brands by using culture as a marketing resource. Third, the evolution of the industry’s structure emphasizes a deep transformation during the 1980s, characterized by a shift from classical industrial districts to multinational enterprises. Concentration in Switzerland, as well as the relocation abroad of some production units through foreign direct investment (FDI) and independent suppliers, have enabled Swiss watch companies to control manufacturing costs and regain competitiveness against Japanese firms. Fourth, studying the institutional framework of the Swiss watch industry helps to explain why this activity was not fully relocated abroad, unlike most sectors in low-tech industries. The cartel that was in force from the 1920s to the early 1960s, and then the Swiss Made law of 1971, are two major institutions that shaped the watch industry.
Organizations (whether they are permanent or temporary) have stakeholders, that is, individuals and groups that can affect or be affected by the organization’s activities and achievements. Assuming that the fundamental driver of value creation is stakeholder relationships, managing those relationships well is a prerequisite for obtaining and sustaining success in all businesses, regardless of the success measures applied. Therefore, applying a stakeholder perspective is of significant importance for any manager or entrepreneur. However, the essentials as well as the implications of applying such a perspective are not clear. Researchers and practitioners have offered many contributions; however, the existing literature is inconclusive. To provide clarity, stakeholder concepts (e.g., stakeholder definition, systems perspective, separation thesis, stakeholder analysis, stakeholder engagement, perception of fairness, stakeholder utility function, stakeholder salience, stakeholder disaggregation, stakeholder multiplicity, managing for stakeholders, Value Creation Stakeholder Theory, value destruction, shadows of the context) are defined, and 15 propositions for further inquiry are offered. The Scandinavian and American origins of stakeholder thinking are presented. The propositions are intended to invite discussion, and they could form the basis for future research questions as well as provide guidance for managers. By drawing on (a) Professor Eric Rhenman, who in the 1960s first proposed an explicit theoretical framework on stakeholder thinking; (b) Professor R. Edward Freeman, who has been the most influential contributor to the field; and (c) additional, selected contributions, the aim is to provide value for both new and seasoned researchers as well as for managers, consultants, and educators. In order to give the reader the opportunity to self-assess and interpret the “raw data,” the text is rich in citations.