Values, Other-Interest, and Ethical Behavior: The Critical Role of Moral Emotions
Summary and Keywords
Organizations and their agents regularly face ethical challenges as the interests of various constituents compete and conflict. The theory of other-orientation provides a useful framework for understanding how other-concern and modes of reasoning combine to produce different mindsets for approaching ethical challenges. To optimize outcomes across parties, individuals can engage in complex rational reasoning that addresses the interests of the self as well as others, a mindset referred to as collective rationality. But collective rationality is as difficult to sustain as it is cognitively taxing. Thus, individuals are apt to simplify their approach to complex conflicts of interest. One simplifying strategy is to reduce the relevant outcome set by focusing on self-interest to the neglect of other-interest. This approach, referred to as a rational self-interest mindset, is self-serving and can lead to actions that are deemed unethical. At the other extreme, individuals can abandon rational judgment in favor of choices based on heuristics, such as moral values that specify a given mode of prosocial behavior. Because this mindset, referred to as other-oriented, obviates consideration of outcomes for the self and others, it can result in choices that harm the self as well as other organizational stakeholders. This raises the question: how does one maintain an other-interested focus while engaging in rational reasoning? The resolution of this question rests in the arousal of moral emotions. Moral emotions signal to the individual the opportunity to express, or the need to uphold, moral values. Given that moral values direct behavior that benefits others or society, they offset the tendency to focus on self-interest. At extreme levels of arousal, however, moral emotions may overwhelm cognitive resources and thus lead individuals to engage in heuristic rather than rational reasoning.
The effect of moral emotions is bounded by attendant emotions, as individuals are likely to experience multiple hedonic and moral emotions in the same situation. Deontic justice predicts that the arousal of moral emotions will lead individuals to retaliate in response to injustice, regardless of whether they experience personal benefit. However, evidence suggests that individuals may instead engage in self-protecting behavior, such as withdrawal, or self-serving behaviors, such as the contagion of unjust behavior. These alternative responses may occur when strong hedonic emotions, such as fear or schadenfreude (the pleasure derived from others’ misfortunes), overpower one’s moral emotions. Future research regarding the arousal levels of moral emotions and the complex interplay of emotions in the decision-making process may provide beneficial insight into managing the competing interests of organizational stakeholders.
A core challenge in organizations is balancing the competing interests of various stakeholders (Jensen, 2002). Optimizing across competing interests is difficult, and failure to do so may lead to actions that are unethical or harmful to one or more stakeholders. Balancing a wide array of competing interests requires collective rationality (de Dreu, 2006; Meglino & Korsgaard, 2006), wherein the potential consequences for all parties are considered (Colman, Pulford, & Rose, 2008; de Dreu, 2006; Meglino & Korsgaard, 2006; van Lange, 1999) and options that optimize the interests of all parties are identified (de Dreu, 2006; van Lange, 1999). Collective rationality requires other-interest—that is, respect and concern for the interests of other stakeholders—and rational reasoning—that is, a controlled and consequentialist mode of judgment (Meglino & Korsgaard, 2006).
Maintaining both rational reasoning and other-interest is challenging. Rational reasoning is cognitively taxing, as it involves the probabilistic consideration of potential consequences of actions (Evans, 2003). In a collective rationality mindset, the decision-maker’s rational deliberations involve a wider array of potential consequences not only for the self but also for other stakeholders. Potential conflicts of interest between stakeholders only add to the complexity of rational reasoning. To manage this complexity, individuals are likely to employ simplifying strategies (Meglino & Korsgaard, 2006). One simplifying strategy is to focus on self-interests, discounting or even excluding the interests of others. In such cases, the individual makes decisions from a rational self-interest mindset (Meglino & Korsgaard, 2004), which can lead to self-serving and potentially unethical behavior (Molinsky, Grant, & Margolis, 2012; Wang, Zhong, & Murnighan, 2014). On the other hand, individuals may simplify the problem by focusing on prevailing moral principles (e.g., benevolence or fairness) without considering costs and benefits. This other-oriented mindset may lead to prosocial behavior that is not aligned with self-interests, potentially harming the individual’s productivity (Flynn, 2003) and well-being (Halbesleben, Harvey, & Bolino, 2009). Likewise, failure to consider costs and benefits while upholding moral principles can lead to inefficient use of resources for the collective good (De Hooge, Nelissen, Breugelmans, & Zeelenberg, 2011; Loewenstein & Small, 2007). In short, depending on the underlying mindset, attempts to optimize collective interests may, at best, be inefficient or, at worst, harmful to the self and the organization. Therefore, we are left with a conundrum: how does one maintain other-interest while engaging in rational reasoning?
The answer may lie in emotions—specifically, moral emotions. Moral emotions are “emotions that go beyond the direct interests of the self” (Haidt, 2003, p. 853) and carry action tendencies that promote the welfare of others (Haidt & Kesebir, 2010). We propose that when engaging in rational reasoning, the arousal of moral emotions acts as a counterbalance to self-interested tendencies and prompts individuals to consider the value of outcomes to others.
The purpose of this article is to explore the interplay between other-interest, rational reasoning, and moral emotions. We draw on the theory of other-orientation (Meglino & Korsgaard, 2004, 2006) to examine the interplay between other-interest and rational reasoning. We use the theory to suggest that the coexistence of other-interest and rational reasoning, despite offering the greatest chance of optimal solutions to conflicts of interest, is difficult to sustain. We then draw from the literature on moral emotions to propose that the arousal of moral emotions and the subsequent activation of moral values is a determining factor in the actor’s mindset. Specifically, the arousal of moral emotions may foster collective rationality, offsetting the tendency to engage in self-serving behavior. However, high levels of moral emotion arousal can trigger a shift to a more heuristic mode of reasoning.
We first review the theory of other-orientation, which specifies that choice processes derive from two dimensions: mode of reasoning (rational and heuristic reasoning) and motivational orientation (self- and other-interest). Mode of reasoning and motivational orientation combine to produce four mindsets—mindlessness, other-orientation, rational self-interest, and collective rationality—which have different implications for the joint optimization of interests. We then explore the instability of collective rationality and the consequences for prosocial behavior. We integrate theory and research on moral emotions (Haidt, 2003) with the theory of other-orientation in order to demonstrate how and why individuals drift among the four moral mindsets. We conclude this article with a discussion of the various consequences of the interplay between moral emotions and collective rationality, offering some supporting evidence and suggestions for further research.
Theory of Other-Orientation
The theory of other-orientation suggests that the mindset of an individual shapes why and to what extent the individual engages in prosocial behavior (Meglino & Korsgaard, 2004, 2006). The mindsets in the theory of other-orientation are a function of mode of reasoning (rational vs. heuristic) and motivational orientation (self- vs. other-concern). Mode of reasoning refers to the degree of conscious evaluation of the potential outcomes of a given course of action—that is, the weighing of potential costs and benefits—with the intent of choosing the course of action offering the greatest net benefit. Mode of reasoning can be separated based on the degree of control exhibited in processing information (Chen, Shechter, & Chaiken, 1996). Motivational orientation refers to an emphasis on self-focused versus other-focused goals. Once a goal is activated, the individual focuses on goal-relevant information and potential courses of action to the exclusion of other information and actions. Thus, individuals who are self-focused are apt to be less mindful of the impact of their actions on others’ interests, and individuals who are other-focused are apt to place less weight on the impact of their actions on their own interests.
Mode of Reasoning
Dual processing theories propose that the mind has two levels of reasoning (for a review, see Evans, 2008). One type of reasoning is quick, intuitive, and automatic, and the other is slow, deliberative, and consciously controlled. Several terms from a wide array of literatures within cognitive psychology and the decision sciences reference this cognitive phenomenon: system 1 and system 2 (Kahneman & Frederick, 2002), type 1 and type 2 (Evans & Stanovich, 2013), heuristic and systematic (Chen & Chaiken, 1999), or impulsive and reflective (Strack & Deutsch, 2004). Meglino and Korsgaard (2004, 2006) use the terms “rational” and “heuristic” to denote the dual processes of cognition within the theory of other-orientation.
Rational reasoning is conscious, deliberative, and resource intensive. Under rational reasoning, individuals weigh alternative outcomes in light of values and beliefs in order to choose the course of action with the greatest net benefit. Conversely, heuristic reasoning is unconscious, quick, and cognitively easy. Under heuristic reasoning, individuals apply rules, norms, and routines to quickly execute a course of action. That is, heuristic reasoning dispenses with the weighing of alternatives in favor of following an overarching principle. The rules and heuristics that an individual utilizes under heuristic reasoning are developed either through repeated experience that requires controlled, conscious deliberation (Chaiken & Trope, 1999) or through implicit learning (Sun, Slusarz, & Terry, 2005). Either rational or heuristic reasoning may be functional in that both processes can produce the best outcome given the actor’s goals and the options available (Evans & Stanovich, 2013). However, because rational and heuristic reasoning influence the information to which one attends and how one uses the information in the decision-making process, the two processes can promote different choices in a given situation.
Given that rational processing involves more complex and effortful cognitive operations relative to heuristic processing, individuals may be limited in their ability or inclination to expend effort toward rational reasoning. In such cases, individuals revert to heuristic processing. Limitations to the ability to engage in rational processing may arise from individual differences in cognitive capacity or from contextual factors, such as time pressure and distractions, that create temporary cognitive constraints (Reynolds, 2006; West & Stanovich, 2003). Similarly, limitations in cognitive effort may arise from individual differences in thinking style (e.g., need for cognition; Cacioppo & Petty, 1982) or from contextual factors, such as accountability (Kruglanski, Pierro, Mannetti, & De Grada, 2006), that foster or inhibit motivation to expend cognitive effort.
Motivational Orientation
Motivated behavior is directed at achieving some end state or goal. The theory of other-orientation separates motivational orientations into two broad categories: motivations that direct an individual toward self-interested goals and motivations that direct an individual toward goals benefiting others or the collective good. Self-interested motivation encompasses the interests in preserving psychological and material well-being that underpin most need-based theories of motivation (e.g., Gagné & Deci, 2005; McClelland, 1988), whereas other-interested motivation involves addressing the needs of others and the collective.
The original formulation of the theory of other-orientation focused on altruism as an alternative to self-interest. However, a number of other prosocial principles, such as fairness (Cropanzano, Goldman, & Folger, 2003), also direct attention toward goals that protect a broader social good. Scholars have speculated that these principles may have their roots in the evolution of certain human characteristics that lead individuals to attend to broader social goods (e.g., Brewer, 1999; Cosmides & Tooby, 1989; Folger & Skarlicki, 2008). While such characteristics are theorized to be universal traits of the human species, group and individual differences are liable to emerge through socialization (Cosmides & Tooby, 1989; Meglino & Korsgaard, 2004, 2006). As a result, other-interest can be activated by situational factors that make the implications for others or the community more salient (Aquino & Reed, 2002). At the same time, individual differences in value priorities (Ravlin & Meglino, 1989) are liable to arouse strong other-interest in some individuals more than in others.
The self-interest and other-interest motivational systems are considered independent, but the outcome structure endemic in many choice contexts often leads one system to dominate (Meglino & Korsgaard, 2006). Situational factors such as loss-framing (Brewer & Kramer, 1986), salience of group membership (de Dreu, Weingart, & Kwon, 2000), and individual-based reward systems (Pfeffer & Sutton, 2006) influence the relative salience of self- and other-interests. Mere priming can lead to differences in self-interest that have consequences for cognition and behavior. De Dreu and colleagues (e.g., de Dreu, 2007; de Dreu, Beersma, Stroebe, & Euwema, 2006; de Dreu & Nauta, 2009) found that priming self-interest led individuals to focus on self-serving information such as personal needs and desires, personal inputs, and personal outcomes. Additionally, certain identities aligned with self- or other-interest can be made salient or may be chronically salient, depending on the strength of cultural and individual values. For instance, an interdependent self-construal, which varies both within and between individuals (Gardner, Gabriel, & Lee, 1999), is liable to lead to more cooperative (de Dreu & Nauta, 2009) and ethical behavior (Cojuharenco, Shteynberg, Gelfand, & Schminke, 2012; Hoyt & Price, 2015). Conversely, self-interested states and traits such as independent self-construal and achievement striving are associated with less ethical and more selfish behavior (Aquino, Freeman, Reed, Lim, & Felps, 2009).
The preceding discussion suggests that moral values play a critical role in the activation of other-interest. Further, moral values and other-interest can be activated by contextual factors; individuals who strongly endorse such values are likely to react more strongly. In their conceptualization of other-interested motivation, Meglino and Korsgaard (2004, 2006) focused on the value of benevolence. However, given that moral values address the goodness of behavior in relation to others (Scott, 2000), the system of other-interested motivation broadly encompasses moral values in general. These values and their underlying mechanisms are discussed in the section “Other-Interest and Moral Values.”
Other-Interest and Moral Values
Schwartz (1994) argued that values serve three universal requirements of human existence: biological welfare of the individual, social order, and the welfare of groups. Moral values are the subset of those values that function to regulate relations between individuals and within social entities (Schwartz, 2005), address social order and the welfare of groups, and pertain to issues beyond immediate self-interest. As such, moral values are fundamentally important to regulating and evaluating the behavior of individuals relative to the needs of the larger social entities (Haidt & Kesebir, 2010). Three core moral values pertinent to work are fairness, benevolence, and integrity (Ravlin & Meglino, 1989).
Schwartz (2012) defines benevolence as “serving and enhancing the welfare of those with whom one is in frequent contact” (p. 5). Similarly, the taxonomy of work values (Ravlin & Meglino, 1987, 1989) includes “concern for others,” which refers to the degree to which an individual is concerned with the welfare of others. The value of benevolence is predictive of behavior directed at helping or benefiting others. The activation of benevolence can lead to such behavior even when the action is anonymous and there is no expectation of a return benefit (Korsgaard, Meglino, Lester, & Jeong, 2010). Fairness refers to the adherence to standards of treatment and allocation of resources when making decisions that affect others. Fairness as a moral value relates to behavior aimed at restoring or maintaining justice. Concerns for fairness may be driven by either self-interested or deontic mechanisms (Cropanzano et al., 2003). The self-interest explanation suggests fairness signals that individuals’ self-interests will be met in the long term and within the constraints of the collective good. The deontic mechanism indicates that individuals adhere to principles of justice without having any personal stake in the situation and will even pay to anonymously restore justice (Turillo, Folger, Lavelle, Umphress, & Gee, 2002). Integrity as a value is defined as adhering to prevailing societal standards of conduct (e.g., Hosmer, 1995; Mayer, Davis, & Schoorman, 1995) and as consistency between words and deeds (Simons, 2002). The value of integrity is manifested primarily in actions such as honesty and abiding by prevailing norms, rules, or laws. As a strongly held moral value, integrity can lead individuals to make personally costly decisions in order to uphold the value (Gibson, Tanner, & Wagner, 2013; Xu & Ma, 2016).
Theory suggests that moral values guide behavior through rational deliberative processing or through heuristic processing (Greene, 2009). Like any value, moral values define the value of anticipated outcomes under rational reasoning and guide individuals toward choices that, by upholding those values, benefit the self. According to this logic, upholding moral values creates a positive affective state and failing to do so creates a negative affective state. As predicted by expectancy theories (van Eerde & Thierry, 1996), individuals will make choices based on the anticipated affect associated with expressing or upholding their moral values. Alternatively, strongly held moral values can be governed by a more heuristic process that shortcuts rational deliberation. Research on protected values suggests that when faced with a choice involving strongly held moral values, people engage in deontological reasoning, acting on principle while displaying relative insensitivity to consequences (Baron & Ritov, 2009). For example, individuals facing choices involving their protected values view outcomes as less harmful if they occur through inaction rather than action. Similarly, individuals are judged more harshly for attempted but unsuccessful harm than for actual but unintended harm (Young, Cushman, Hauser, & Saxe, 2007).
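This expectancy logic can be summarized in a brief sketch (the split of an outcome’s valence into hedonic and moral components is our illustrative notation, not a formula from the cited theories). Under rational reasoning, the actor favors the action $a$ with the greatest expected anticipated affect:

$$EU(a) = \sum_{o} p(o \mid a)\,\bigl[v_{\text{hedonic}}(o) + v_{\text{moral}}(o)\bigr],$$

where $v_{\text{moral}}(o)$ denotes the anticipated positive (or negative) affect of upholding (or violating) a moral value through outcome $o$. Heuristic processing of protected values, by contrast, dispenses with this weighing altogether and acts directly on principle.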
A Typology of Mindsets
Mode of reasoning and motivational orientation combine to form a typology of four potential mindsets, as depicted in Figure 1. These mindsets represent the manner in which individuals may approach decision-making. Additionally, while individuals may exhibit a proclivity toward one specific mindset, these mindsets are not stable traits. There is natural drift among mindsets depending on the confluence of individual differences and situational factors.
Mindlessness describes a mindset in which individuals pursue self-interested goals with little conscious consideration of the consequences and without attempting to maximize benefit to the self. Behaviors that are hedonistic and impulsive fall into this category, as do behaviors that are habitual or routinized (Dawes, 1988). Mindlessness can be functional in that it aligns with the actor’s self-interests and achieves the intended outcome when individuals have limited cognitive resources. For example, high-stress contexts can limit cognitive resources and the time needed to engage in complex problem-solving. In such circumstances, an overlearned routine can be rapidly and automatically executed (Driskell & Johnson, 1998). Such routinized behaviors may also be dysfunctional if the problem has been misconstrued or when various cognitive biases create blind spots to the personal costs of one’s actions. Research on mindless eating, for example, indicates that individuals eat significantly more when food is presented in a large bowl, even when the food is unpalatable (Wansink & Sobal, 2007). Such behavior conflicts not only with the long-term interests of the individual but with the immediate hedonic experience as well.
To the extent that the salient outcomes align with the consequences for others, mindless behavior is not apt to involve ethical violations or harm to others. Moreover, in some cases, mindless behavior may be prosocial. Certain prosocial acts can create or maintain a positive affective state for an individual. Thus, persons are more likely to engage in helping behavior when they are in a positive mood (George, 1991). Further, the expression of important values can be a personally satisfying experience and can be associated with increased positive emotions (Aknin, Dunn, Sandstrom, & Norton, 2013) or “warm glow” (Andreoni, 1990).
Rational self-interest describes individuals who consciously pursue self-serving goals with the intent to maximize personal outcomes. Normative theories of decision-making posit that this mindset produces the greatest benefit to the decision-maker. However, as the field of behavioral decision theory has demonstrated, there are numerous individual and contextual factors that render this process dysfunctional. Rational self-interest is often bounded by the cognitive limitations of the individual (e.g., intelligence) and the quality of information available regarding the problem, both of which can lead to flawed inferences and suboptimal choices (Stanovich & West, 2000). Moreover, problem features and framing can influence risk valuation and outcome preferences, leading to choices that under other circumstances would be deemed suboptimal (Kahneman & Tversky, 1984).
The rational self-interested actor is assumed to be relatively indifferent to the outcomes for others, but this mindset may nonetheless lead to behavior that benefits the greater good if the outcomes for the moral act are aligned with self-interests. For example, a firm may endorse certain sustainable practices because doing so is cost-effective. Further, rational self-interest may produce prosocial behavior if such behavior is instrumental to the individual’s personal agenda, as is the case when individuals engage in organizational citizenship behavior to promote a positive self-image and to build social capital (Bolino, Kacmar, Turnley, & Gilstrap, 2008). When self-interests are not aligned with others in the situation, rational self-interest is less likely to produce ethical or prosocial behavior.
An other-oriented mindset involves a strong other-interest motivation and a reasoning process that is relatively uncontrolled. Individuals in an other-oriented mindset tend to rely on rules, norms, and principles regarding modes of behavior in lieu of a careful consideration of the consequences of various courses of action. That is, persons in this mindset pursue goals intended to benefit others without a careful weighing of the consequences to themselves or others. Behavior is likely to result from reflexive conformity to societal norms (Cain, Dana, & Newman, 2014). Factors that focus attention on certain norms or the needs of others are likely to determine the direction of behavior. This mindset has the potential to be dysfunctional for the individual, at least in the short term. Because they are less likely to weigh personal costs and benefits, individuals in an other-oriented mindset may choose personally costly actions that they otherwise would not consider. However, there are circumstances where an other-oriented mindset may produce more functional outcomes for the individual than other mindsets. Specifically, other-orientation can be superior to mindless decisions when the mindless choice would be opting for an immediate benefit while ignoring long-term costs. Consider the example of an employee engaged with an irate customer. A mindless response would be to respond in kind, which would escalate the conflict, potentially resulting in a negative performance event for the employee. In contrast, abiding by the “golden rule”—resisting the urge to respond in a hostile manner and showing kindness instead—may allow for an effective resolution of the problem.
Collective rationality describes individuals who pursue other-interested goals while also engaging in rational reasoning (Colman et al., 2008; de Dreu, 2006; van Lange, 1999). This mindset resembles the classic utilitarian model of ethical decision-making wherein the best choice is defined as the one that produces the greatest good for the greatest number. Individuals in this mindset incorporate the consideration of outcomes for others, attaching value to those outcomes, and are willing to make trade-offs between maximizing their self-interests and optimizing the interests of all concerned. This mode of reasoning is similar to rational self-interest and likely involves the same neurobiological processes as deliberations over non-social problems (Ruff & Fehr, 2014). The principal difference is that collective rationality involves the consideration of a greater array of potential outcomes, encompassing those for the self as well as for other relevant social entities. In situations where there exists a conflict of interests between the self and others, such as a social dilemma, collective rationality can produce better outcomes for the individual than rational self-interest. In such cases, pursuing an individual-maximizing strategy without regard to the other’s goals and actions is likely to lead to a suboptimal outcome, whereas collective rationality is liable to optimize outcomes across persons (for an overview, see Dawes, 1980).
The collective rationality mindset is illustrated in the integrative model of social value orientation (van Lange, 1999, 2008), which examines why people cooperate in situations that involve a conflict of interests (i.e., social dilemmas). In such contexts, the option that produces the maximum expected outcome for the self adversely affects the other party. The integrative model of social value orientation posits that individuals who cooperate (i.e., act prosocially) in such cases do so because their conceptualization of the relevant outcomes incorporates the interests of others. Individuals in a collective rationality mindset consider a complex array of outcomes—including their personal outcomes, the outcomes of others, and the relative difference in their outcomes—to achieve mutually beneficial results (Joireman, Kuhlman, van Lange, Doi, & Shelley, 2003).
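As a concrete sketch of this integrative model, consider a prisoner’s dilemma. The payoff numbers, the 50/50 belief about the partner, and the specific outcome transformation (own + other − |own − other|, weighting joint gain and equality equally) are illustrative assumptions for this sketch rather than details taken from van Lange (1999):

```python
# Illustrative sketch: how a prosocial outcome transformation (integrative
# model of social value orientation) changes the preferred choice in a
# prisoner's dilemma, relative to rational self-interest.

# (own, other) payoffs from the focal actor's perspective; C = cooperate, D = defect
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # actor cooperates, partner defects
    ("D", "C"): (5, 0),  # actor defects, partner cooperates
    ("D", "D"): (1, 1),  # mutual defection
}

def selfish_utility(own, other):
    """Rational self-interest: value only one's own outcome."""
    return own

def prosocial_utility(own, other):
    """Collective rationality sketch: value joint gain and equality in outcomes."""
    return own + other - abs(own - other)

def best_action(utility, p_partner_cooperates=0.5):
    """Choose the action maximizing expected (transformed) utility."""
    def expected_utility(action):
        u_if_c = utility(*PAYOFFS[(action, "C")])
        u_if_d = utility(*PAYOFFS[(action, "D")])
        return p_partner_cooperates * u_if_c + (1 - p_partner_cooperates) * u_if_d
    return max(["C", "D"], key=expected_utility)

print(best_action(selfish_utility))    # "D": defection maximizes own payoff
print(best_action(prosocial_utility))  # "C": cooperation maximizes joint gain and equality
```

The payoffs themselves never change; only the conceptualization of the relevant outcomes does. Under the self-interested transformation defection dominates, while the prosocial transformation makes cooperation the more attractive prospect, which is the model’s explanation for why prosocially oriented individuals cooperate in social dilemmas.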
The Tension Between Rational Reasoning and Other-Interest
As the preceding review suggests, a mindset characterized by the co-occurrence of rational reasoning and other-interested motivation results in behavior that is beneficial to both the individual and others. However, this mindset is difficult to achieve and maintain. On the one hand, rational reasoning is cognitively taxing, particularly when coupled with other-focused goals, as it requires consideration of a wider constellation of outcomes and more complex weighting schemes (van Lange, 1999). Individuals, as “cognitive misers” (Fiske & Taylor, 1991), may be unwilling or unable to engage in the necessary mental calculus. Meglino and Korsgaard (2006) speculate that given the complexity of issues involved in meeting individual and collective needs, individuals motivated by other-interest will tend to be more influenced by heuristics (e.g., a compromising rule). It is when focus falters that individuals may fall prey to biases and shortcuts that produce suboptimal outcomes. For example, de Dreu et al. (2006) found that when motivated to engage in effortful thinking, individuals were more likely to pursue a win-win strategy and achieve higher outcomes for themselves. Those low in motivation were more likely to apply a compromise strategy, resulting in lower outcomes for themselves and for their counterpart.
On the other hand, research has suggested that it is difficult to maintain other-focused motivation while engaging in rational reasoning. Rational reasoning involves the evaluation of costs and benefits; because these are hedonic in nature, they make self-interests more salient (Meglino & Korsgaard, 2004, 2006). As actors become more self-focused, they are apt to discount the welfare of others while reflecting on alternative courses of action (Kahneman, Knetsch, & Thaler, 1986). Consistent with this view, Wang et al. (2014) found that individuals who were induced to think in a rational, deliberative way were more selfish and thought less about others than those who were not put into this mode of processing. Similarly, Molinsky et al. (2012) found that inducing a rational deliberative mode of thinking led to less compassion toward others.
In short, rational reasoning coupled with other-interested motivation produces prosocial behavior in contexts that might otherwise produce self-serving decisions. Yet, simultaneously achieving both states is a challenge. To date, theories on collective rationality have emphasized the cognitive aspects of this challenge (Colman et al., 2008; Meglino & Korsgaard, 2006; van Lange, 1999) with little attention being paid to the arousal and maintenance of other-interested motivation. We propose that emotions—in particular, moral emotions—provide important insights into this issue. Moral emotions exhibit prosocial action tendencies that direct individuals to consider and value the outcomes for others while evaluating behavioral options. Thus, moral emotions are instrumental to maintaining high levels of other-interest motivation while engaging in rational reasoning.
Moral Emotions and the Arousal of Other-Interest
Emotions are discrete feelings of relatively short duration that are associated with specific events or stimuli (Frijda, 1993). Emotions act as a perceptual lens, influencing information processing as individuals make sense of a given situation (Keltner, Ellsworth, & Edwards, 1993; Lerner & Keltner, 2001; Tiedens & Linton, 2001). Emotions can be broadly classified as hedonic or moral (Haidt, 2003; Tangney, Stuewig, & Mashek, 2007). Hedonic emotions are self-focused emotions that are primarily concerned with pleasure and pain. Moral emotions can be classified further into those that are other-focused—wherein one is reacting to the behavior or experience of others—and those that are self-conscious—wherein one is reacting to the implications of one’s own behavior or experiences for others (Eisenberg, 2000; Tangney et al., 2007). Moral emotions are associated with action tendencies that involve upholding moral principles (Haidt, 2003; Tangney et al., 2007). For example, moral emotions such as guilt (Ahn, Kim, & Aggarwal, 2014), empathy (Batson, Fultz, & Schoenrade, 1987; Eisenberg, 2000; Eisenberg & Miller, 1987), elevation (Schnall, Roper, & Fessler, 2010; van de Vyver & Abrams, 2015), and gratitude (Spence, Brown, Keeping, & Lian, 2014) are all associated with higher levels of prosocial behavior.
Emotions and Values
Values and emotions are intertwined in the decision-making process (Haidt, 2003), as both motivate movement toward a desired end state. While correlational evidence suggests that emotions and values are linked (e.g., Nelissen, Dijker, & de Vries, 2007), their exact relationship is complex. It has been proposed that emotions act as a perceptual filter through which information is understood, highlighting different contextual factors (Izard, 1977, 1989; Lerner & Keltner, 2001). Therefore, emotions with strong ties to specific values (i.e., moral emotions) should be more likely to help individuals recognize opportunities to express those values. Testing the empathy-altruism hypothesis, Batson, O’Quin, Fultz, Vanderplas, and Isen (1983) found that participants were more likely to volunteer to take the place of an individual receiving an electrical shock (even when presented with the self-interested opportunity to leave the situation) when they were experiencing empathy rather than non-empathic emotions. Similarly, Wheatley and Haidt (2005) found that participants induced to experience disgust upon exposure to a trigger word rated vignettes as more morally reprehensible, even when the vignettes described a mundane event. Conversely, values have also been demonstrated to elicit specific emotional responses. That is, emotions are the response to the application of values to the given context (Haidt, 2003). For example, Tamir et al. (2016) found that the more individuals endorsed a specific value (e.g., benevolence), the more they reported experiencing the corresponding emotion (e.g., empathy).
Emotions and Decision-Making
The relationship between emotions and judgment is complex. Emotions are theorized to help individuals recognize positive or negative outcomes and can precede conscious recognition, such that individuals may respond before fully recognizing their actions (e.g., Bechara, Damasio, Tranel, & Damasio, 1997; van’t Wout, Kahn, & Sanfey, 2006). Haidt (2003) argued that moral emotions provide an intuitive understanding of a situation and limit an individual’s search for alternative choices. Additionally, emotion regulation is cognitively demanding and may constrain individuals’ ability to engage in rational reasoning (Sheppes, Scheibe, Suri, & Gross, 2011). On the other hand, rationalist models of moral judgment suggest that emotions may stimulate a degree of deliberative processing (e.g., Kohlberg, 1969; Turiel, 1983). Research by Bless (2000; Bless, Clore, Schwarz, Golisano, Rabe, & Wolk, 1996) and Schwarz (1990; Schwarz & Clore, 1983) suggests that the experience of negative affect, which signals a threat, triggers individuals to engage in more rational reasoning in order to identify the threat. Empirical evidence suggests that the specific emotion experienced may be important in determining its relationship with the mode of reasoning. For example, sadness has been found to promote deliberative processing, whereas anger has been found to promote heuristic processing (Bodenhausen, 1993; Lerner, Goldberg, & Tetlock, 1998).
Moral Emotions Helping and Hindering Collective Rationality
We propose that moral emotions help to promote other-interest and are not necessarily tied to a specific mode of reasoning. The arousal of moral emotions activates a corresponding moral value (Haidt, 2003), and the activation of moral values leads individuals to place greater emphasis on outcomes associated with upholding that value. Given that moral values articulate standards of good behavior in relation to others (Scott, 2000), they should motivate other-interest. For example, witnessing an injustice can arouse empathy, leading to the desire to restore justice for the victim and to punish the perpetrator (Cropanzano, Massaro, & Becker, 2017). Similarly, empathy activates the value of benevolence (Balliet, Joireman, Daniels, & George-Falvy, 2008), which in turn causes a person to place greater weight on the outcomes for others (van Lange, 1999). Therefore, the arousal of moral emotions activates moral values, prompting greater other-interested motivation.
As such, moral emotions can be the determining factor in whether individuals maintain a collective rationality mindset. Specifically, moral emotions may act as a counterweight to the tendency of individuals to revert to a rational self-interested mindset. Decisions that require the consideration of multiple potential outcomes pertinent to a range of constituents can tax the limits of bounded rationality (Tenbrunsel, Diekmann, Wade-Benzoni, & Bazerman, 2010), rendering collective rationality difficult to maintain (de Dreu, 2006). Additionally, the act of engaging in rational reasoning is likely to make self-interests more salient and the outcome for others less salient as individuals engage in cost-benefit analysis (Tenbrunsel & Messick, 1999). Moral emotions increase the salience and value of outcomes for others, thereby offsetting the tendency to revert to self-serving judgments.
To explore this proposition, we conducted an experiment in which we manipulated moral emotions (empathy vs. neutral) and mode of reasoning (rational vs. heuristic). We expected that the activation of empathy as a moral emotion would mitigate the tendency of rational reasoning to lead to self-serving behavior. We manipulated moral emotion by having participants view and comment on a video that was either neutral in content or designed to arouse empathy. We primed participants on modes of reasoning by having them work on either a task that required cost-benefit analyses or a routine and mindless task.
Participants worked on an in-basket task and were asked, in the final email, to donate up to 12 days of vacation time to an ailing co-worker; this donation served as the measure of prosocial behavior. The results indicated a significant interaction between moral emotion and mode of reasoning. As illustrated in Figure 2, in the absence of moral emotion, rational reasoning had a negative effect on prosocial behavior; but when moral emotion was aroused, rational reasoning led to more generous behavior. These results support the idea that rational reasoning can lead to self-serving behavior, but that this tendency is offset by moral emotions. The motivation to engage in complex consideration of outcomes for both the self and others may be weak when the stakes involved are low, as was the case in this experiment. To simplify the decision and reduce cognitive effort, the decision-maker may discount the interests of others or ignore them altogether, resulting in more self-serving choices. The role of moral emotions in this case is to further activate the corresponding moral value and enhance the salience and importance of others’ outcomes.
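The crossover pattern described above can be made concrete with a minimal numerical sketch. The cell means below are invented purely for illustration (they are not the study's data): the point is only that the effect of rational reasoning reverses in sign across the two emotion conditions, which is what a significant disordinal interaction captures.

```python
# Hypothetical 2 x 2 cell means (mean vacation days donated, 0-12 scale).
# Values are invented for illustration; see Figure 2 for the actual results.
cell_means = {
    ("neutral", "heuristic"): 5.0,
    ("neutral", "rational"): 3.0,   # rational reasoning -> more self-serving
    ("empathy", "heuristic"): 5.5,
    ("empathy", "rational"): 7.5,   # rational reasoning -> more generous
}

def simple_effect(emotion):
    """Effect of rational (vs. heuristic) reasoning within one emotion condition."""
    return cell_means[(emotion, "rational")] - cell_means[(emotion, "heuristic")]

# The interaction contrast is the difference between the two simple effects;
# a nonzero value with opposite-signed simple effects indicates a crossover.
interaction = simple_effect("empathy") - simple_effect("neutral")

print(simple_effect("neutral"))   # -2.0: rational reasoning reduces giving
print(simple_effect("empathy"))   #  2.0: rational reasoning increases giving
print(interaction)                #  4.0
```

In this sketch, moral emotion does not merely raise giving overall; it changes what rational reasoning does, consistent with the argument that empathy redirects cost-benefit analysis toward others' outcomes.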
On the other hand, very high levels of moral emotion arousal may make it more difficult to maintain collective rationality. High levels of affective arousal can limit cognitive resources and compromise the complex cognitive operations of rational reasoning. Thus, intense emotions are likely to tip the scale toward heuristic reasoning. Research indicates that areas of the brain associated with deliberative processing are activated when individuals engage in rational deliberation of moral judgments, whereas areas associated with quick, spontaneous processing are activated when individuals are exposed to moral-emotion-eliciting stimuli (Sevinc & Spreng, 2014). Similarly, research on deontic justice suggests that when individuals have an emotionally charged reaction to injustice, they respond through deontic reasoning; that is, they act to restore justice without regard to personal costs or benefits (Skarlicki & Kulik, 2005). Importantly, the restoration of justice may involve retaliatory behavior of which the perpetrator may not even be aware (Skarlicki & Rupp, 2010; Turillo et al., 2002), leaving the perpetrator little chance to correct the unjust behavior. Thus, when strong emotions motivate deontic justice, the resulting actions may serve neither the interests of the actor nor those of the broader organization.
The story of “Baby Jessica” provides an illustrative example. In the late 1980s, 18-month-old “Baby Jessica” became a sensationalized news story when she was trapped in a well. The story aroused intense sympathy, and individuals in the general public collectively donated a substantial sum of money to her family. But, as Loewenstein and Small (2007) point out, the generosity directed at that one family would have had a greater impact if directed toward broader causes, such as child poverty. Thus, high levels of arousal of moral emotions may trigger the use of heuristic reasoning that blinds individuals to the potential for maximizing their behavioral impact.
These findings suggest that there may be an optimum level of arousal of moral emotions necessary to sustain collective rationality. Thus, it is important to examine circumstances surrounding the inflection point for the arousal of moral emotions. Individual differences such as values and trait affectivity are likely to play a role in threshold levels. For example, Skarlicki, Folger, and Tesluk (1999) found that persons high in trait negative affectivity were more likely to respond to injustice with retaliatory behavior. Likewise, the valence of the moral emotion is likely to be important in determining thresholds. Negative emotions are experienced more intensely (Baumeister, Bratslavsky, Finkenauer, & Vohs, 2001), so the threshold for optimum arousal is liable to be lower for negative moral emotions such as anger. Such emotions are aroused when values are threatened or violated. Thus, when situations call for defending as opposed to affirming moral values, collective rationality potentially loses out to the heuristic-based, other-orientation mindset.
Finally, it is important to consider the combined effect of moral and hedonic emotions during the decision-making process. Emotions are rarely experienced in isolation; research suggests that individuals experience a blend of multiple emotions in response to a single situation (e.g., Larsen, McGraw, & Cacioppo, 2001). Consider the third-party witness to injustice in the workplace. Such a witness may simultaneously experience a variety of conflicting emotions, both moral—such as anger and sympathy—and hedonic—such as fear and schadenfreude. As individual emotions may be aroused to different degrees in response to the same situation (Siemer, Mauss, & Gross, 2007), the emotion felt most strongly is likely to provide the largest motivational force (Fernando, Kashima, & Laham, 2014) and thus lead a single value to dominate and shape the decision-making process. This view of complex emotional bundles explains the motivational underpinning of theories of third-party responses to organizational injustice (e.g., O’Reilly & Aquino, 2011), which attempt to account for the multiplicity of responses individuals have to witnessing mistreatment. While the deontic model of injustice suggests that witnessing injustice should incline individuals toward a negative moral emotion that then promotes retributive behaviors (Folger, 2001), ample evidence suggests that individuals engage in a wide variety of behaviors, such as withdrawal (e.g., Howard & Cordes, 2010) or propagation of further injustice (e.g., Mawritz, Mayer, Hoobler, Wayne, & Marinova, 2012). Such outcomes are possible when strong hedonic emotions overpower moral emotions.
Organizational stakeholders often have competing interests, and balancing the interests of all stakeholders is difficult. Considering the decision-making process as a combination of motivational orientation and mode of processing provides insight into how and why individuals make self-interested versus other-interested choices. By integrating emotions into the decision-making process, we uncover interesting insights into how individuals perceive situations, evaluate decisions, and ultimately choose which behaviors to enact. Our expanded framework of the theory of other-orientation suggests that moral values provide a counterbalance to the tension created by rational reasoning and self-interested motivation. Additionally, we suggest that the exploration of the arousal of moral emotions and the confluence of multiple emotions, both moral and hedonic, within the theory of other-orientation framework may be a fruitful avenue for future research on managing conflicts of interest in organizations.
Ahn, H. K., Kim, H. J., & Aggarwal, P. (2014). Helping fellow beings: Anthropomorphized social causes and the role of anticipatory guilt. Psychological Science, 25(1), 224–229.
Aknin, L. B., Dunn, E. W., Sandstrom, G. M., & Norton, M. I. (2013). Does social connection turn good deeds into good feelings? On the value of putting the “social” in prosocial spending. International Journal of Happiness and Development, 1(2), 155–171.
Andreoni, J. (1990). Impure altruism and donations to public goods: A theory of warm-glow giving. The Economic Journal, 100(401), 464–477.
Aquino, K., Freeman, D., Reed, I. I., Lim, V. K., & Felps, W. (2009). Testing a social-cognitive model of moral behavior: The interactive influence of situations and moral identity centrality. Journal of Personality and Social Psychology, 97(1), 123–141.
Aquino, K., & Reed, I. I. (2002). The self-importance of moral identity. Journal of Personality and Social Psychology, 83(6), 1423–1440.
Balliet, D., Joireman, J., Daniels, D., & George-Falvy, J. (2008). Empathy and the Schwartz value system: A test of an integrated hypothesis. Individual Differences Research, 6(4), 269–279.
Baron, J., & Ritov, I. (2009). Protected values and omission bias as deontological judgments. Psychology of Learning and Motivation, 50, 133–167.
Batson, C. D., Fultz, J., & Schoenrade, P. A. (1987). Distress and empathy: Two qualitatively distinct vicarious emotions with different motivational consequences. Journal of Personality, 55(1), 19–39.
Batson, C. D., O’Quin, K., Fultz, J., Vanderplas, M., & Isen, A. M. (1983). Influence of self-reported distress and empathy on egoistic versus altruistic motivation to help. Journal of Personality and Social Psychology, 45(3), 706–718.
Baumeister, R. F., Bratslavsky, E., Finkenauer, C., & Vohs, K. D. (2001). Bad is stronger than good. Review of General Psychology, 5(4), 323–370.
Bechara, A., Damasio, H., Tranel, D., & Damasio, A. R. (1997). Deciding advantageously before knowing the advantageous strategy. Science, 275, 1293–1295.
Bless, H. (2000). The interplay of affect and cognition: The mediating role of general knowledge structures. In J. P. Forgas (Ed.), Feeling and thinking: The role of affect in social cognition (pp. 153–177). Cambridge, U.K.: Cambridge University Press.
Bless, H., Clore, G. L., Schwarz, N., Golisano, V., Rabe, C., & Wolk, M. (1996). Mood and the use of scripts: Does a happy mood really lead to mindlessness? Journal of Personality and Social Psychology, 71, 665–679.
Bodenhausen, G. V. (1993). Emotions, arousal, and stereotypic judgments: A heuristic model of affect and stereotyping. In D. M. Mackie & D. L. Hamilton (Eds.), Affect, cognition, and stereotyping: Interactive processes in group perception (pp. 13–37). San Diego, CA: Academic Press.
Bolino, M. C., Kacmar, K. M., Turnley, W. H., & Gilstrap, J. B. (2008). A multi-level review of impression management motives and behaviors. Journal of Management, 34(6), 1080–1109.
Brewer, M. B. (1999). The psychology of prejudice: Ingroup love and outgroup hate? Journal of Social Issues, 55(3), 429–444.
Brewer, M. B., & Kramer, R. M. (1986). Choice behavior in social dilemmas: Effects of social identity, group size, and decision framing. Journal of Personality and Social Psychology, 50(3), 543–549.
Cacioppo, J. T., & Petty, R. E. (1982). The need for cognition. Journal of Personality and Social Psychology, 42(1), 116–131.
Cain, D. M., Dana, J., & Newman, G. E. (2014). Giving versus giving in. Academy of Management Annals, 8(1), 505–533.
Chaiken, S., & Trope, Y. (Eds.). (1999). Dual-process theories in social psychology. New York: Guilford Press.
Chen, S., & Chaiken, S. (1999). The heuristic-systematic model in its broader context. In S. Chaiken & Y. Trope (Eds.), Dual-process theories in social psychology (pp. 73–96). New York: Guilford Press.
Chen, S., Shechter, D., & Chaiken, S. (1996). Getting at the truth or getting along: Accuracy versus impression-motivated heuristic and systematic processing. Journal of Personality and Social Psychology, 71(2), 262–275.
Cojuharenco, I., Shteynberg, G., Gelfand, M., & Schminke, M. (2012). Self-construal and unethical behavior. Journal of Business Ethics, 109(4), 447–461.
Colman, A. M., Pulford, B. D., & Rose, J. (2008). Collective rationality in interactive decisions: Evidence for team reasoning. Acta Psychologica, 128(2), 387–397.
Cosmides, L., & Tooby, J. (1989). Evolutionary psychology and the generation of culture, Part II. Case study: A computational theory of social exchange. Ethology and Sociobiology, 10, 51–97.
Cropanzano, R., Goldman, B., & Folger, R. (2003). Deontic justice: The role of moral principles in workplace fairness. Journal of Organizational Behavior, 24(8), 1019–1024.
Cropanzano, R. S., Massaro, S., & Becker, W. J. (2017). Deontic justice and organizational neuroscience. Journal of Business Ethics, 144(4), 733–754.
Dawes, R. M. (1980). Social dilemmas. Annual Review of Psychology, 31(1), 169–193.
Dawes, R. M. (1988). Rational choice in an uncertain world. San Diego, CA: Harcourt Brace Jovanovich.
de Dreu, C. K. W. (2006). Rational self-interest and other orientation in organizational behavior: A critical appraisal and extension of Meglino and Korsgaard (2004). Journal of Applied Psychology, 91(6), 1245–1252.
de Dreu, C. K. W. (2007). Cooperative outcome interdependence, task reflexivity, and team effectiveness: A motivated information processing perspective. Journal of Applied Psychology, 92(3), 628–638.
de Dreu, C. K. W., Beersma, B., Stroebe, K., & Euwema, M. C. (2006). Motivated information processing, strategic choice, and the quality of negotiated agreement. Journal of Personality and Social Psychology, 90(6), 927–943.
de Dreu, C. K. W., & Nauta, A. (2009). Self-interest and other-orientation in organizational behavior: Implications for job performance, prosocial behavior, and personal initiative. Journal of Applied Psychology, 94(4), 913–926.
de Dreu, C. K. W., Weingart, L. R., & Kwon, S. (2000). Influence of social motives on integrative negotiation: A meta-analytic review and test of two theories. Journal of Personality and Social Psychology, 78(5), 889–905.
de Hooge, I. E., Nelissen, R. M. A., Breugelmans, S. M., & Zeelenberg, M. (2011). What is moral about guilt? Acting “prosocially” at the disadvantage of others. Journal of Personality and Social Psychology, 100(3), 462–473.
Driskell, J. E., & Johnston, J. H. (1998). Stress exposure training. In J. A. Cannon-Bowers & E. Salas (Eds.), Making decisions under stress: Implications for individual and team training (pp. 191–217). Washington, DC: American Psychological Association.
Eisenberg, N. (2000). Emotion, regulation, and moral development. Annual Review of Psychology, 51(1), 665–697.
Eisenberg, N., & Miller, P. A. (1987). The relation of empathy to prosocial and related behaviors. Psychological Bulletin, 101(1), 91–119.
Evans, J. S. B. (2003). In two minds: Dual-process accounts of reasoning. Trends in Cognitive Sciences, 7(10), 454–459.
Evans, J. S. B. (2008). Dual-processing accounts of reasoning, judgment, and social cognition. Annual Review of Psychology, 59, 255–278.
Evans, J. S. B., & Stanovich, K. E. (2013). Dual-process theories of higher cognition: Advancing the debate. Perspectives on Psychological Science, 8(3), 223–241.
Fernando, J. W., Kashima, Y., & Laham, S. M. (2014). Multiple emotions: A person-centered approach to the relationship between intergroup emotion and action orientation. Emotion, 14(4), 722–732.
Fiske, S. T., & Taylor, S. E. (1991). Social cognition (2nd ed.). New York: McGraw-Hill.
Flynn, F. J. (2003). How much should I give and how often? The effects of generosity and frequency of favor exchange on social status and productivity. Academy of Management Journal, 46(5), 539–553.
Folger, R. (2001). Fairness as deonance. In S. W. Gilliland, D. D. Steiner, & D. P. Skarlicki (Eds.), Research in social issues in management (pp. 3–31). Charlotte, NC: Information Age.
Folger, R., & Skarlicki, D. P. (2008). The evolutionary bases of deontic justice. In S. W. Gilliland, D. D. Steiner, & D. P. Skarlicki (Eds.), Justice, morality, and social responsibility (pp. 29–62). Charlotte, NC: Information Age.
Frijda, N. H. (1993). Moods, emotion episodes and emotions. In M. Lewis & J. M. Haviland-Jones (Eds.), Handbook of emotions (pp. 381–403). New York: Guilford Press.
Gagné, M., & Deci, E. L. (2005). Self‐determination theory and work motivation. Journal of Organizational Behavior, 26(4), 331–362.
Gardner, W. L., Gabriel, S., & Lee, A. Y. (1999). “I” value freedom, but “we” value relationships: Self-construal priming mirrors cultural differences in judgment. Psychological Science, 10(4), 321–326.
George, J. M. (1991). State or trait: Effects of positive mood on prosocial behaviors at work. Journal of Applied Psychology, 76(2), 299–307.
Gibson, R., Tanner, C., & Wagner, A. F. (2013). Preferences for truthfulness: Heterogeneity among and within individuals. American Economic Review, 103(1), 532–548.
Greene, J. D. (2009). Dual-process morality and the personal/impersonal distinction: A reply to McGuire, Langdon, Coltheart, and Mackenzie. Journal of Experimental Social Psychology, 45(3), 581–584.
Haidt, J. (2003). The moral emotions. In R. J. Davidson, K. R. Scherer, & H. H. Goldsmith (Eds.), Handbook of affective sciences (pp. 852–870). Oxford: Oxford University Press.
Haidt, J., & Kesebir, S. (2010). Morality. In S. T. Fiske, D. T. Gilbert, & G. Lindzey (Eds.), Handbook of social psychology (pp. 797–832). Hoboken, NJ: John Wiley & Sons.
Halbesleben, J. R. B., Harvey, J., & Bolino, M. C. (2009). Too engaged? A conservation of resources view of the relationship between work engagement and work interference with family. Journal of Applied Psychology, 94(6), 1452–1465.
Hosmer, L. T. (1995). Trust: The connecting link between organizational theory and philosophical ethics. Academy of Management Review, 20(2), 379–403.
Howard, L. W., & Cordes, C. L. (2010). Flight from unfairness: Effects of perceived injustice on emotional exhaustion and employee withdrawal. Journal of Business and Psychology, 25(3), 409–428.
Hoyt, C. L., & Price, T. L. (2015). Ethical decision making and leadership: Merging social role and self-construal perspectives. Journal of Business Ethics, 126(4), 531–539.
Izard, C. E. (1977). Human emotions. New York: Plenum Press.
Izard, C. E. (1989). The structure and functions of emotions: Implications for cognition, motivation, and personality. In I. S. Cohen (Ed.), The G. Stanley Hall lecture series (Vol. 9, pp. 39–73). Washington, DC: American Psychological Association.
Jensen, M. C. (2002). Value maximization, stakeholder theory, and the corporate objective function. Business Ethics Quarterly, 12(2), 235–256.
Joireman, J. A., Kuhlman, D. M., van Lange, P. A., Doi, T., & Shelley, G. P. (2003). Perceived rationality, morality, and power of social choice as a function of interdependence structure and social value orientation. European Journal of Social Psychology, 33(3), 413–437.
Kahneman, D., & Frederick, S. (2002). Representativeness revisited: Attribute substitution in intuitive judgment. In T. Gilovich, D. Griffin, & D. Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 49–81). New York: Cambridge University Press.
Kahneman, D., Knetsch, J. L., & Thaler, R. (1986). Fairness as a constraint on profit seeking: Entitlements in the market. American Economic Review, 76(4), 728–741.
Kahneman, D., & Tversky, A. (1984). Choices, values, and frames. American Psychologist, 39(4), 341–350.
Keltner, D., Ellsworth, P. C., & Edwards, K. (1993). Beyond simple pessimism: Effects of sadness and anger on social judgment. Journal of Personality and Social Psychology, 64(5), 740–752.
Kohlberg, L. (1969). Stage and sequence: The cognitive developmental approach to socialization. In D. A. Goslin (Ed.), Handbook of socialization theory and research (pp. 347–480). Chicago: Rand McNally.
Korsgaard, M. A., Meglino, B. M., Lester, S. W., & Jeong, S. S. (2010). Paying you back or paying me forward: Understanding rewarded and unrewarded organizational citizenship behavior. Journal of Applied Psychology, 95(2), 277–290.
Kruglanski, A. W., Pierro, A., Mannetti, L., & De Grada, E. (2006). Groups as epistemic providers: Need for closure and the unfolding of group-centrism. Psychological Review, 113(1), 84–100.
Larsen, J. T., McGraw, A. P., & Cacioppo, J. T. (2001). Can people feel happy and sad at the same time? Journal of Personality and Social Psychology, 81(4), 684–696.
Lerner, J. S., Goldberg, J. H., & Tetlock, P. E. (1998). Sober second thought: The effects of accountability, anger, and authoritarianism on attributions of responsibility. Personality and Social Psychology Bulletin, 24(6), 563–574.
Lerner, J. S., & Keltner, D. (2001). Fear, anger, and risk. Journal of Personality and Social Psychology, 81(1), 146–159.
Loewenstein, G., & Small, D. (2007). The Scarecrow and the Tin Man: The vicissitudes of human sympathy and caring. Review of General Psychology, 11(2), 112–126.
Mawritz, M. B., Mayer, D. M., Hoobler, J. M., Wayne, S. J., & Marinova, S. V. (2012). A trickle-down model of abusive supervision. Personnel Psychology, 65, 325–357.Find this resource:
Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. Academy of Management Review, 20(3), 709–734.Find this resource:
McClelland, D. C. (1988). Human motivation. Cambridge, U.K.: Cambridge University Press.Find this resource:
Meglino, B. M., & Korsgaard, M. A. (2004). Considering rational self-interest as a disposition: Organizational implications of other orientation. Journal of Applied Psychology, 89(6), 946–959.Find this resource:
Meglino, B. M., & Korsgaard, M. A. (2006). Considering situational and dispositional approaches to rational self-interest: An extension and response to de Dreu (2006). Journal of Applied Psychology, 91(6), 1253–1259.Find this resource:
Molinsky, A. L., Grant, A. M., & Margolis, J. D. (2012). The bedside manner of homo economicus: How and why priming an economic schema reduces compassion. Organizational Behavior and Human Decision Processes, 119(1), 27–37.Find this resource:
Nelissen, R. M. A., Dijker, A. J. M., & de Vries, N. K. (2007). Emotions and goals: Assessing relations between values and emotions. Cognition and Emotion, 21(4), 902–911.Find this resource:
O’Reilly, J., & Aquino, K. (2011). A model of third parties’ morally motivated responses to mistreatment in organizations. Academy of Management Review, 36(3), 526–543.Find this resource:
Pfeffer, J., & Sutton, R. I. (2006). Hard facts, dangerous half-truths, and total nonsense: Profiting from evidence-based management. Boston: Harvard Business School Press.Find this resource:
Ravlin, E. C., & Meglino, B. M. (1987). Effect of values on perception and decision making: A study of alternative work values measures. Journal of Applied Psychology, 72(4), 666–673.Find this resource:
Ravlin, E. C., & Meglino, B. M. (1989). The transitivity of work values: Hierarchical preference ordering of socially desirable stimuli. Organizational Behavior and Human Decision Processes, 44(3), 494–508.Find this resource:
Reynolds, S. J. (2006). A neurocognitive model of the ethical decision-making process: Implications for study and practice. Journal of Applied Psychology, 91(4), 737–748.Find this resource:
Ruff, C. C., & Fehr, E. (2014). The neurobiology of rewards and values in social decision making. Nature Reviews Neuroscience, 15(8), 549–562.Find this resource:
Schnall, S., Roper, J., & Fessler, D. M. (2010). Elevation leads to altruistic behavior. Psychological Science, 21(3), 315–320.Find this resource:
Schwartz, M. S. (2005). Universal moral values for corporate codes of ethics. Journal of Business Ethics, 59(1/2), 27–44.Find this resource:
Schwartz, S. H. (1994). Are there universal aspects in the structure and contents of human values? Journal of Social Issues, 50(4), 19–45.Find this resource:
Schwartz, S. H. (2012). An overview of the Schwartz theory of basic values. Online Readings in Psychology and Culture, 2(1), 11–31.Find this resource:
Schwarz, N. (1990). Feelings as information: Informational and motivational functions of affective states. In E. T. Higgins & R. M. Sorrentino (Eds.), Handbook of motivation and cognition: Foundations of social behavior (Vol. 2, pp. 527–561). New York: Guilford Press.Find this resource:
Schwarz, N. (2000). Emotion, cognition, and decision making. Cognition and Emotion, 14(4), 433–440.Find this resource:
Schwarz, N., & Clore, G. L. (1983). Mood, misattribution, and judgments of well-being: Informative and directive functions of affective states. Journal of Personality and Social Psychology, 45(3), 513–523.Find this resource:
Scott, E. D. (2000). Moral values: Situationally defined individual differences. Business Ethics Quarterly, 10(2), 497–520.Find this resource:
Sevinc, G., & Spreng, R. N. (2014). Contextual and perceptual brain processes underlying moral cognition: A quantitative meta-analysis of moral reasoning and moral emotions. PloS ONE, 9(2), e87427.Find this resource:
Sheppes, G., Scheibe, S., Suri, G., & Gross, J. J. (2011). Emotion-regulation choice. Psychological Science, 22(11), 1391–1396.
Siemer, M., Mauss, I., & Gross, J. J. (2007). Same situation-different emotions: How appraisals shape our emotions. Emotion, 7(3), 592–600.
Simons, T. (2002). Behavioral integrity: The perceived alignment between managers’ words and deeds as a research focus. Organization Science, 13(1), 18–35.
Skarlicki, D. P., Folger, R., & Tesluk, P. (1999). Personality as a moderator in the relationship between fairness and retaliation. Academy of Management Journal, 42(1), 100–108.
Skarlicki, D. P., & Kulik, C. T. (2005). Third-party reactions to employee (mis)treatment: A justice perspective. Research in Organizational Behavior, 26, 183–229.
Skarlicki, D. P., & Rupp, D. E. (2010). Dual processing and organizational justice: The role of rational versus experiential processing in third-party reactions to workplace mistreatment. Journal of Applied Psychology, 95(5), 944–952.
Spence, J. R., Brown, D. J., Keeping, L. M., & Lian, H. (2014). Helpful today, but not tomorrow? Feeling grateful as a predictor of daily organizational citizenship behaviors. Personnel Psychology, 67(3), 705–738.
Stanovich, K. E., & West, R. F. (2000). Individual differences in reasoning: Implications for the rationality debate? Behavioral and Brain Sciences, 23(5), 645–665.
Strack, F., & Deutsch, R. (2004). Reflective and impulsive determinants of social behavior. Personality and Social Psychology Review, 8, 220–247.
Sun, R., Slusarz, P., & Terry, C. (2005). The interaction of the explicit and the implicit in skill learning: A dual-process approach. Psychological Review, 112(1), 159–192.
Tamir, M., Schwartz, S. H., Cieciuch, J., Riediger, M., Torres, C., Scollon, C., . . . Vishkin, A. (2016). Desired emotions across cultures: A value-based account. Journal of Personality and Social Psychology, 111(1), 67–82.
Tangney, J. P., Stuewig, J., & Mashek, D. J. (2007). Moral emotions and moral behavior. Annual Review of Psychology, 58, 345–372.
Tenbrunsel, A. E., Diekmann, K. A., Wade-Benzoni, K. A., & Bazerman, M. H. (2010). The ethical mirage: A temporal explanation as to why we aren’t as ethical as we think we are. Research in Organizational Behavior, 30, 153–173.
Tenbrunsel, A. E., & Messick, D. M. (1999). Sanctioning systems, decision frames, and cooperation. Administrative Science Quarterly, 44(4), 684–707.
Tiedens, L. Z., & Linton, S. (2001). Judgment under emotional certainty and uncertainty: The effects of specific emotions on information processing. Journal of Personality and Social Psychology, 81(6), 973–988.
Turiel, E. (1983). The development of social knowledge: Morality and convention. Cambridge, U.K.: Cambridge University Press.
Turillo, C. J., Folger, R., Lavelle, J. J., Umphress, E. E., & Gee, J. O. (2002). Is virtue its own reward? Self-sacrificial decisions for the sake of fairness. Organizational Behavior and Human Decision Processes, 89(1), 839–865.
van de Vyver, J., & Abrams, D. (2015). Testing the prosocial effectiveness of the prototypical moral emotions: Elevation increases benevolent behaviors and outrage increases justice behaviors. Journal of Experimental Social Psychology, 58, 23–33.
van Eerde, W., & Thierry, H. (1996). Vroom’s expectancy models and work-related criteria: A meta-analysis. Journal of Applied Psychology, 81(5), 575–586.
Van Lange, P. A. M. (1999). The pursuit of joint outcomes and equality in outcomes: An integrative model of social value orientation. Journal of Personality and Social Psychology, 77(2), 337–349.
Van Lange, P. A. M. (2008). Does empathy trigger only altruistic motivation? How about selflessness or justice? Emotion, 8(6), 766–774.
van’t Wout, M., Kahn, R. S., Sanfey, A. G., & Aleman, A. (2006). Affective state and decision-making in the ultimatum game. Experimental Brain Research, 169(4), 564–568.
Wang, L., Zhong, C. B., & Murnighan, J. K. (2014). The social and ethical consequences of a calculative mindset. Organizational Behavior and Human Decision Processes, 125(1), 39–49.
Wansink, B., & Sobal, J. (2007). Mindless eating: The 200 daily food decisions we overlook. Environment and Behavior, 39(1), 106–123.
West, R. F., & Stanovich, K. E. (2003). Is probability matching smart? Associations between probabilistic choices and cognitive ability. Memory & Cognition, 31(2), 243–251.
Wheatley, T., & Haidt, J. (2005). Hypnotic disgust makes moral judgments more severe. Psychological Science, 16(10), 780–784.
Xu, Z. X., & Ma, H. K. (2016). How can a deontological decision lead to moral behavior? The moderating role of moral identity. Journal of Business Ethics, 137(3), 537–549.
Young, L., Cushman, F., Hauser, M., & Saxe, R. (2007). The neural basis of the interaction between theory of mind and moral judgment. Proceedings of the National Academy of Sciences, 104(20), 8235–8240.