Printed from Oxford Research Encyclopedias, Politics. Under the terms of the licence agreement, an individual user may print out a single article for personal use (for details see Privacy Policy and Legal Notice).

Subscriber: Google Scholar Indexing; date: 25 April 2024

Process Tracing Methods in the Social Sciences


  • Derek Beach, Department of Political Science, Aarhus University


Process tracing (PT) is a research method for studying how causal processes work using case study methods. PT can be used for both case studies that aim to gain a greater understanding of the causal dynamics that produced the outcome of a particular historical case and to shed light on generalizable causal mechanisms linking causes and outcomes within a population of cases. PT as a method has three core components: theorization about causal mechanisms linking causes and outcomes, the analysis of the observable empirical manifestations of theorized mechanisms, and questions of case selection and generalization. There are at least three distinct variants of PT that stem from differences in how scholars understand the ontological nature of the theories being traced. This means there is not one “correct” way to use PT.


  • Qualitative Political Methodology

Updated in this version

The title, summary, and keywords have all been updated. Additionally, new figures, tables, and text have been added to reflect the latest developments.


Process tracing (PT) is a research method for tracing causal processes using case studies. PT can be used to investigate questions such as how low intelligence capacity of security forces can contribute to producing mass violence against civilians (Winward, 2021) and how an epistemic community can gain influence over policy (Löblová, 2018). The analytical added value of PT is that it enables causal inferences to be made about how causal processes/causal mechanisms work using in-depth analysis of one or a small number of cases.1 Furthermore, when more disaggregated process theories are traced empirically, light is shed on the contextual conditions under which particular processes operate. The trade-off is that not many cases can be traced because it is difficult and time-consuming to study how things work in particular cases using PT.

PT methods can be used for either theory-building or theory-testing purposes. In theory-building mode, the researcher engages both in a thorough "soaking and probing" of the empirics of the case and in a far-reaching search of the theoretical literature to gain clues about potential mechanisms that could link a cause and outcome together, whereas in theory-testing mode, hypotheses about the observable manifestations that a theorized mechanism might leave are tested empirically in a case. In reality, most users of PT engage in a more iterated design that moves back and forth between theories and empirics (i.e., an abductive design), with the end product being an evidenced process theory that explains how a cause (or set of causes) is linked to an outcome in a case or set of cases.

PT as a method has two very obvious constituent elements: what is being traced at the theoretical level and how processes can be traced empirically. But in contrast to some methods, there is not one “correct” way to do PT because there are considerable differences between how scholars understand the two elements. These differences at the ontological and epistemological levels result in three different variants of PT methods, depicted in Table 1.

At the level of theory of causal processes/causal mechanisms, the core distinctions are ontological. First, there is a more superficial divide in the literature regarding the level of abstraction that process theories should have, ranging from simple, one-liner-type theoretical explanations to more complex, and even case-specific, process theories. Other things equal, the lower the level of abstraction of the process theory, the more knowledge is gained about how things worked at the theoretical level, and the stronger the causal inferences about the process that are possible at the empirical level. However, this does not mean that more detail is always better. After an initial, in-depth case study that traces a detailed process theory in one case, it can be helpful to lift the level of theoretical abstraction to ease the task of assessing whether things worked in a similar fashion in a number of other cases.

More fundamental divides exist with regard to the ontological nature of the causal claims that are made when discussing causal processes and mechanisms. Here, there are (at least) three distinct positions: (a) mechanisms are in essence counterfactual causal claims, (b) mechanisms are causal because of the productive relationships of actors engaging in activities that link causes and outcomes together in cases, and (c) a more interpretivist position that contends that process theories are causal when they are able to capture how social actors construct and reconstruct the social reality they are a part of.

At the empirical level, the distinctions between counterfactual, productive, and social constructivist accounts of the nature of causation have epistemological implications for how causal processes should be traced. When tracing counterfactual claims, some form of controlled comparison of the actual with the potential counterfactual is required to be able to infer that the mechanism made a difference. In contrast, the productive account is sometimes termed the actualist position because causal inferences are made based on assessing whether the expected observable traces left by the activities of actors are present in an actual case. In the social constructivist account, more interpretivist methods are also utilized (sometimes as supplements, sometimes exclusively) to understand how social actors make sense of the practices of actors and social context in which they are embedded (i.e., meaning-making).

Whereas some scholars take a "my-way-or-the-highway," methodological monist approach to questions related to causation and/or epistemology (e.g., Bevir & Blakely, 2018; King et al., 1994), a methodological pluralist position recognizes that different positions are possible and that different variants have relative strengths and weaknesses in different research situations (Beach, 2021; Runhardt, 2021).2 For instance, if one believes that it is important to capture how social actors understand the diplomatic practices in which they are engaging, a more interpretive variant of PT would have comparative strengths. In contrast, when studying relatively "simple" processes that are repeated frequently (e.g., problem-solving processes in a team within an organization), a counterfactual-based variant that makes inferences through controlled comparisons has relative strengths (Runhardt, 2021, pp. 13–14). Methodological pluralism does not mean that anything goes. Instead, what is important is that there is alignment between the underlying ontological positions and the epistemology used to assess them in the PT research design (Beach & Kaas, 2020; Hall, 2003).

Unfortunately, there are also examples of published studies in the social sciences in which scholars claim to be engaging in PT but instead merely pay lip service to the method by providing a citation or two in the Methods section, followed by a descriptive narrative of events in a case without linking to an explicit mechanistic theory. An atheoretical description of a sequence of events in a case is a form of narrative analysis that tells who did what and when, but it does not tell why they did it and, most important, why the events were linked in a causal sense. Although a descriptive narrative of what happened can be an important first step in any PT analysis, an atheoretical tracing of events does not shed light on the underlying causal linkage between a cause and an outcome, which is why a narrative description of a process is not the same thing as PT (e.g., Oppermann & Spencer, 2016). Fortunately, there are also numerous examples of scholars who implement a PT research design that lives up to the best practices of one of the variants of PT. This article discusses only these studies.

This article proceeds in three steps. First, it introduces the ontological distinctions about what is being traced in PT. Second, it discusses the different positions with regard to how processes and mechanisms can be traced empirically. Third, it discusses issues related to case selection and generalization of process theories in the different variants of PT.

What Is Being Traced? Causal Processes and Mechanisms

PT research probes the theoretical causal mechanisms (i.e., processes) linking causes and outcomes together. This section first discusses differing levels of aggregation of processual theories, followed by a review of the more fundamental differences in how scholars understand the nature of mechanistic causal claims.

Levels of Abstraction of Process Theories

Process tracers work with theories at varying levels of abstraction, ranging from minimalist, one-liner-type theories to detailed, case-specific theories that attempt to capture the particularities of how causal processes played out in a historical case.3 In minimalist theories, the causal arrow between a cause and outcome is not theorized in any detail (Elster, 1998; Goertz, 2017). Minimalist process theories typically take the form of one-liner-type theories of the pathway linking a cause and outcome together. An example is seen in Tannenwald’s (1999) article on the nuclear taboo. Tannenwald theorizes three possible pathways that can link norms and the non-use of nuclear weapons; these are depicted in Figure 1.

Figure 1. Simple, abstract causal process theories and the nuclear taboo.

Source: Tannenwald (1999, p. 462).

The process theorization in Tannenwald’s (1999) article does not go beyond very abstract one-liners, meaning that what is actually going on theoretically within the arrow(s) remains in a black box. Very abstract terms are used to theorize causal processes, such as “constraint on self-interested decision maker” (Tannenwald, 1999, p. 462), but readers are not told more about how these constraints actually work. Do decision makers have to discuss potential use of nuclear weapons among themselves, thereby providing opponents of usage with arguments they can deploy through normative speech acts to shame other actors? Or do the norms mean that decision makers never even discuss nuclear use because they believe it is “wrong”?

The goal of disaggregating causal processes into their constituent parts is to better understand how they work. This requires lowering the level of theoretical abstraction by providing a more-or-less detailed theorization of the actors involved and the activities that provide the causal linkages in the process (Beach & Pedersen, 2019). In the social sciences, actors can be micro-level (i.e., individuals) or macro-level (i.e., collective social actors), with the requirement being that the latter have properties and orientations that enable them to do things that can impact other actors in a process. Activities are what social actors actually do; activities can take the form of speech acts, voting, paying bribes, etc.

It is important to note that a more detailed process theory will still be an analytical abstraction for all but the simplest of processes. Instead of detailing each and every individual and what they are doing in their interactions with each other during a period of time (days, weeks, or months), a detailed process theory attempts to capture the central actors and simplify their activities by only focusing on the most critical interactions. Using a metaphor, not all parts of a movie are equally “interesting.” In an action movie, screen time and special effects money will be concentrated on the final dramatic showdown between the good and bad persons. Similarly, not all parts of a process theory are equally “interesting,” and it can be warranted to focus on the most interesting parts (Steel, 2008). These are the critical causal linkages in the process, involving interactions between actors.

Table 2 illustrates a disaggregated process theory used by Winward (2021) to explain the process whereby low intelligence capacities of security forces in a conflict area can lead to mass categorical violence against particular groups. Although the process theory is unpacked into several steps, it is still an analytical abstraction that includes only the most critical steps and linkages without theorizing everything that actors are doing in their interactions with each other. In the theorized process, the cause (low intelligence capacity in a conflict situation) spurs the security forces to approach local civilian elites for assistance in gathering information on threats (Part 1). The local civilian elites then exploit this dependence to settle scores in relation to pre-existing local conflicts with a particular group by providing false information targeted against individuals from the group, and also by encouraging other locals to take matters into their own hands by perpetrating violence against members of the targeted group (Part 2). The security forces use the (false) information provided to detain and interrogate individuals from the targeted group, resulting in an escalating cycle of torture and violence in which the many false confessions extracted through torture lead to even more detained and tortured individuals (Part 3). Furthermore, an increase in the number of detainees strains the capacities of the security forces, leading them to take extreme steps such as extrajudicial killings to clear out prisons (Part 3). Taken together, the process produces a marked increase in mass categorical violence by state security forces targeted against a particular group.

In contrast to minimalist one-liners, what is going on in-between is more explicitly theorized. This does not mean that disaggregated process theories are necessarily better than minimalist theories. Maintaining a high level of abstraction is a methodological choice that can be warranted in several research situations. First, early in research on a topic, there might be considerable uncertainty about which pathway links a cause and outcome together. Here, a PT study that takes the form of a plausibility probe exploring whether there is any evidence of a particular linkage can be a useful first step before more detailed process theories are traced empirically. Second, when research is more focused on investigating associations between causes and outcomes across many cases, a study that provides confirming evidence of a pathway linking them together makes it more plausible that the found association is actually causal. In the research situation faced by Tannenwald (1999), staying at a very high level of abstraction was warranted because there was a low prior confidence in the existence of any form of causal process linking norms and non-use (p. 438). In addition, minimalist studies can be useful after a series of more disaggregated studies to explore the scope of potential process generalizations. In contrast, when the goal of research is focused on understanding how the process worked in a historical case, the theorized process typically has numerous steps and can even include case-specific elements (e.g., particular named actors and specific activities that they perform).

Different Understandings of the Causal Nature of Mechanistic Claims

Causal mechanisms are one of the most widely used but also least understood types of causal claim in the social sciences (e.g., Beach, 2021; Brady, 2008; Gerring, 2010; Hedström & Ylikoski, 2010). The essence of making a mechanism-based claim is that the analytical focus shifts from causes and outcomes to the process in-between that links them. That is, mechanisms are not causes but, rather, are causal processes that are triggered by causes in particular contexts and that provide the causal linkage with outcomes. However, beyond this core point, there is disagreement among process tracers about the ontological nature of mechanisms as causal claims. There are (at least) three distinct understandings, all of which imply different epistemological strategies for PT research.

A counterfactual-based understanding of mechanisms defines causation as a situation in which a cause (or causal process/mechanism) is related to an outcome because its absence results in the absence of the outcome, all other things held equal (Morgan & Winship, 2007). The term potential outcomes framework is often used to denote counterfactual claims because one can only assess whether a factor is causal by comparing what actually took place with what potentially could have taken place when it was absent, either in comparable cases or in logical hypotheticals (Aviles & Reed, 2017, p. 722; Mahoney & Barrenechea, 2019; Runhardt, 2015). Conceptually, counterfactual-based mechanism claims are often theorized in minimalist terms as X → M → Y (e.g., Mahoney, 2015).

In contrast, in the productive account, a mechanism is causal when there is a sequence of actors engaging in activities that transmit causal forces from the cause to the outcome (Beach & Pedersen, 2019; Clarke et al., 2014; Machamer et al., 2000). At its core, a mechanistic explanation attempts to explain theoretically how things work within a case or set of cases within a particular context (Cartwright, 2011) by unpacking the activities and the causal linkages they provide to explain why the cause is linked to the outcome. Mechanistic theories are typically viewed as systems in which the spatiotemporal organization of actors and the activities that they perform matter for how the system works, as does the context within which the system operates. Taken as a whole, a causal process is viewed in the productive account as more than the sum of its parts, and how it operates is very sensitive to context (Cartwright, 2011; Falleti & Lynch, 2009; Sawyer, 2004). This means that one cannot just "remove" a part of a process and replace it with some other actor doing something else without changing the rest of the process. Nor can one claim that just because it has been found to work in one case, it should work everywhere.

In the productive account, causal claims are made about actual causal linkages as they operate within single cases. However, there is disagreement within the productive understanding on the question of whether a causal process can occur only once and still be termed causal (i.e., singular causation) (Beach & Pedersen, 2019; Cartwright, 2021) or whether there has to be some form of regularity in its operation across cases before the term causal can be applied (e.g., Andersen, 2012). Both positions are logically defensible, but they point research in different directions. Thinking in more singular causal terms leads to research aimed at understanding the complexities of how processes worked in a single case, in which process theories are case-specific. In contrast, accepting some form of regularity as a prerequisite for making mechanistic causal claims implies research that has the ambition of making contingent generalizations about mechanisms across a set of cases.

A third understanding of mechanisms takes as its ontological point of departure the claim that the social world is fundamentally different from the natural world. This view is shared among social constructivist (Guzzini, 2017; Norman, 2016) and critical realist scholars (Danermark et al., 2019; Sayer, 2000). At the ontological level, how actors understand their actions and those of others, and how they understand the social context in which they are embedded, matters for how causal processes play out. This means that processual theorization needs to take seriously the intersubjective understandings and meaning-making of social actors in a particular social context. Pouliot (2014) develops an interpretive variant of PT that he terms “practice-tracing,” in which practices are defined as habitual patterns of actions performed by actors. Norman (2016, 2021) puts forward an interpretivist variant in which causal processes are nested in what he terms “constitutive” explanations. The constitutive element is the structural conditions, latent dispositions, and causal capacities that come from a given social context, whereas the causal element is the moving parts. He defines causality as the relations between events, focusing on “how specific actions and events can alter the context in which they appear” (Norman, 2016, p. 87).

How Can Process Theories Be Traced Empirically?

There are (at least) three different positions taken on the question of epistemology in the literature that stem from differences in how scholars understand what is being traced: (a) controlled comparisons across cases at the level of process (or parts thereof) that use either real-world cases or logical hypotheticals, (b) evidencing processes within cases based on the observable traces they leave, and (c) a more interpretive variant of PT that studies processes using interpretive methods such as ethnographies and interpretive interviewing to examine discourses and practices.

Controlled Cross-Case Comparisons

In the controlled comparison variant, evidence takes the form of the difference that the presence/absence of a pathway (or parts thereof) has for the outcome in two or more cases. In effect, this means that the pathway is treated as a counterfactual claim evidenced empirically by measuring the difference that presence/absence makes across the cases (Mahoney & Barrenechea, 2019; Runhardt, 2015, 2021, p. 9). Two or more “most similar” cases are compared that are similar in all respects except whether the pathway is present or absent. The cases selected for comparison can be either both real-world cases or involve a comparison with a nonexistent, logical hypothetical case (a “what if” case) (Levy, 2015; Mahoney & Barrenechea, 2019).

If the most similar comparison finds that the outcome was present in both cases despite the process being present in one case and absent in the other, this would disconfirm the theorized process as the causal linkage, and vice versa. Note that when the PT variant of controlled comparisons operates with real-world cases, only a small number of cases are used in order to strategically select cases that are as similar as possible. This marks a methodological difference with large-n mediation analysis, in which one or more observables of a process (i.e., causal mechanisms) are used as indicators that are tested in a controlled comparison across a large number of cases to assess the difference that presence/absence makes (Imai et al., 2011).

The controlled comparison method is illustrated in Figure 2, based on Runhardt’s (2015) description of what a most similar comparison design could look like. Using Bakke’s (2013) study of the impact of transnational insurgents in the Second Chechen War, Runhardt suggests that to be able to assess whether the process of watching videos of suicide bombings actually made a difference, the Second Chechen War case should have been compared with another case in which the only difference is that the process is not present. If the difference in presence/absence of the pathway is found to have made a difference, then the conclusion can be made that the pathway produced the difference. The strength of this conclusion depends on the degree to which other factors are similar between the cases.

Figure 2. A controlled comparison of pathway “watching videos.”

Source: Inspired by Runhardt (2015).

In principle, the same form of most similar controlled comparison could be used for assessing disaggregated process theories, although the comparison would have to be repeated for each step of the process theory. Irrespective of whether process theories are disaggregated or not, the core challenge is finding real-world cases in which everything else is so similar that it can be claimed that the only meaningful difference is the presence/absence of the process itself. Therefore, many case study scholars argue for the utility of logical hypotheticals in which there are as few changes to the actual world as possible (Mahoney & Barrenechea, 2019, p. 316). Levy (2008) writes, "Counterfactual analysis ideally posits an alternative world that is identical to the real world in all theoretically relevant aspects but one, in order to explore the consequences of that difference" (p. 634).

Tracing the Within-Case Observable Manifestations of Processes

An alternative approach to evidencing processes is to trace them empirically through the observable, within-case manifestations of either pathways taken as a whole (minimalist one-liners) or the traces left by the activities performed by actors in each part of a disaggregated process. In the literature, terms such as "causal process observations" (Brady & Collier, 2011), "diagnostic evidence" (Bennett & Checkel, 2014, p. 7), or "mechanistic evidence" are also used to denote the observable manifestations left by processes playing out within cases (Beach & Pedersen, 2019; Clarke et al., 2014). Observable manifestations can be thought of as the empirical fingerprints left by the operation of a causal process in a case, or, if disaggregated, the fingerprints left by the activities of actors that are linkages in the process. Common to all of these terms is that the evidence used does not involve a controlled comparison across two or more cases but instead is any type of empirical material that has probative value in relation to determining whether the process actually took place within a case. These traces can take many different forms. For example, they can involve patterns within cases, such as social networks that emerge between actors during a process. They can also involve temporal sequences—for example, whether a search for alternative solutions in past cases happened before a solution was suggested by an actor. Alternatively, they can be the content of empirical material, such as the speech acts of a particular actor during a meeting.

Following Van Evera (1997), many process tracers have tried to systematize how to evaluate the probative value of this type of evidence, often drawing explicitly on Bayesian logic. The following discussion does not introduce Bayesian logic in any detail (for introductions to this, see Beach & Pedersen, 2019; Bennett & Checkel, 2014; Fairfield & Charman, 2017), noting only that there is a discussion in the literature on whether PT should use more formalized, explicit Bayesian analysis (e.g., Bennett et al., 2022; Fairfield & Charman, 2017) or whether more informal applications of Bayesian reasoning are more useful for PT (e.g., Beach & Pedersen, 2019; Zaks, 2021a, 2021b). Instead, the following discussion presents the underlying Bayesian intuition that not all evidence has the same probative value, and the amount of "updating" is determined by whether one had to find it and whether there are plausible alternative explanations for finding it.

The probative value of evidence is determined by the answers to several questions about what evidence in theory can tell researchers and whether researchers can trust sources (Figure 3). When operationalizing a process theory, the researcher asks what empirically observable fingerprints they might expect to observe in a case if a given actor performed a theorized activity (or if the overall process took place, when working with a minimalist process theory). For instance, one part of Winward's (2021) process theory was that security forces would approach local civilian elites for information (see Table 2). He operationalizes a series of potential observables that could act as evidence, such as "We would also expect to find former elites, or those close to them, who would divulge such soliciting of information when interviewed" (Winward, 2021, p. 562).

Figure 3. Moving from process theory to actual empirical sources.

The probative value of a given expected observable is a function of two factors: whether it had to be found (certainty in Bayesian terms) and, if found, whether there are alternative explanations for finding it (uniqueness in Bayesian terms). Whether evidence has to be found (i.e., certainty) relates to the disconfirmatory power of evidence, whereas whether alternative explanations are plausible or not relates to the confirmatory power of evidence.
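The certainty/uniqueness intuition can be sketched with Bayes' rule. The sketch below is illustrative only and is not part of the original article; the probability values are hypothetical placeholders, not estimates from any actual PT study.

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: updated belief in hypothesis H after observing evidence E.

    p_e_given_h     -- P(E | H): how likely the evidence is if the theorized
                       process operated ("certainty" when high: it must be found).
    p_e_given_not_h -- P(E | not H): how likely the evidence is under alternative
                       explanations (low values correspond to "uniqueness").
    """
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

prior = 0.5  # agnostic prior confidence in the mechanism

# Unique evidence (few plausible alternative explanations): strong confirmation.
strong = posterior(prior, p_e_given_h=0.8, p_e_given_not_h=0.1)

# Non-unique evidence (many alternative explanations): weak confirmation.
weak = posterior(prior, p_e_given_h=0.8, p_e_given_not_h=0.6)

# Certain evidence that is NOT found (probabilities of absence under H and
# not-H): substantial disconfirmation of the mechanism.
absent = posterior(prior, p_e_given_h=1 - 0.8, p_e_given_not_h=1 - 0.1)

print(round(strong, 2), round(weak, 2), round(absent, 2))  # 0.89 0.57 0.18
```

The numbers track the verbal logic: the same prior and the same "certainty" yield very different amounts of updating depending on how unique the evidence is, and failing to find must-find evidence pulls confidence well below the prior.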

Some activities can be expected to leave empirical fingerprints in a case, whereas others might leave few traces. Returning to the Tannenwald (1999) example discussed previously, if the pathway related to personal moral convictions works through actors not even considering what they consider “wrong” actions, this pathway would leave few empirical traces—indeed, the absence of evidence of actors discussing the nuclear option could potentially act as evidence of deep normative convictions keeping options off the table for discussion.

The confirmatory power of hypothesized observables relates to whether, if found, there are alternative explanations for finding the evidence. If the hypothesized observable is found and there are few alternative explanations, this would enable a strong confirming inference to be made, and vice versa. For instance, if one had theorized that a part of a diplomatic negotiation process is "actor A persuades actor B in meeting using arguments," a hypothesized observable could be that actor B changes how they talk about a policy after the meeting. However, the change might be purely strategic, with actor B changing position for reasons other than normative arguments (e.g., material factors such as the threat of military action). Another alternative explanation of finding the observable could be that actor B does not change position but instead is only paying lip service to the views of actor A in public. When there are plausible alternative explanations, finding an observable only provides weak or no confirmation.

Hypothesized observables are still not evidence upon which inferences about the presence/absence of the part of the process can be made. In the within-case tracing PT variant, inferences can only be made when the actual sources of the observations (or lack thereof) are also evaluated. At the empirical level, researchers have to ask whether a particular observation matches what they expected to find and whether they had access (if not found) and, if found, whether they can trust the source. Researchers might only have very untrustworthy sources for observing the hypothesized observable, even though it would have been very confirming if actually found. Often process tracers use interviews with participants who have a stake in the events being studied, and therefore have incentives to provide biased accounts. In this instance, evidence actually found in an interview would not enable strong confirmation because the source could not be trusted. The questions that should be asked here are standard source-critical questions. It is only through corroboration across multiple sources and types of evidence that evidence can be trusted. As an example of best practices, Winward (2021) includes two appendices that discuss individual sources of evidence and the degree to which they can be trusted.

Interpretive Variants of Process Tracing

Several attempts have been made to develop more interpretivist versions of PT (Guzzini, 2017; Norman, 2016, 2021; Pouliot, 2014). What they share is the contention that to understand how social causal processes work, interpretive hermeneutic methods must also be used that try to tap into intersubjective understandings of social actors and how they make sense of their interactions with other social actors—what is termed “experience-near” evidence in the literature (Danermark et al., 2019, pp. 30, 33; Geertz, 1974).

As examples, Pouliot (2014) argues for adding an extra layer of “meaningfulness” to PT analyses by studying the stock of unspoken assumptions and tacit know-how of practitioners to understand their intentions and beliefs with regard to sets of practices that they perform, through the use of interpretivist discourse, interviewing, or ethnographic observation techniques.

Norman (2016, 2021) suggests that interpretive PT uses what he terms "contrastive" explanations that build on counterfactual controlled comparisons of discourses. These can include before/after comparisons or "normal" events compared with an "abnormal" event that changes the trajectory of developments. He puts forward a three-step analytical procedure for interpretive PT (Norman, 2021, p. 19):

1. map change in the broader institutional setting—for example, by asking actors about "normal" actions, and then about what constituted the "abnormal" and thus conflict between actors at a later time;

2. explore how change reconstituted actor self-understandings; and

3. assess how the changed self-understandings gave rise to particular actions.

Case Selection and Generalizing About Mechanisms

This section reviews the state-of-the-art regarding case selection, followed by a discussion of whether and how mechanistic generalizations can be made.

Selecting Appropriate Cases for Process Tracing

The core case selection principles of PT are generally agreed upon. First, PT is a case-based method, which requires that concepts (causes, context, and outcomes) are theorized in set-theoretical terms (Beach & Pedersen, 2016; Goertz & Mahoney, 2012). This means that concepts are defined by the theoretical attributes that determine whether a given case is in or out of the “set” of the concept—that is, it is a member of the concept or not. For example, if one is interested in tracing a process linking development and democratization, clear definitions of both concepts would be required that enable the scholar to determine whether a given case is a member or not of the concepts.

All PT variants therefore share an interest in positive, "typical" cases in which cause, outcome, and relevant contextual conditions are all present (Table 3) (Beach & Pedersen, 2018; Schneider & Rohlfing, 2013). In negative cases, where the cause and outcome are not present, there is no process to trace. When controlled-comparison variants of PT are used, a typical case is compared with a most similar case in which the mechanism (or a part of it) was not present. If the mechanism makes a difference, the most similar case would be a deviant consistency case in which the outcome as a result does not occur. Cases in which both the cause and the outcome are absent are termed "irrelevant" cases (Table 3, quadrant III); with neither present, there is obviously no process to trace (Mahoney & Goertz, 2004).

Two types of deviant cases are relevant for PT, but only after one has a good understanding of how processes work in typical cases. These are deviant coverage and deviant consistency cases (Schneider & Rohlfing, 2013). A deviant coverage case is one in which a cause or contextual condition is not present, but the outcome has occurred. In this type of case, one is interested in determining what other cause(s) might have produced the outcome. Here, all three variants of PT are relevant for probing what other cause(s) might be present. Given the focus on detecting new causes, processes are typically kept at a very abstract level.

Alternatively, when a cause is present but one or more contextual conditions that allow the operation of a given process are not present, a theory-building PT case study can shed light on alternative causal pathways between a cause and an outcome.

Deviant consistency cases are instances in which a process should have linked a cause to the outcome but the process breaks down. Here, tracing a relatively disaggregated process until it breaks down can shed light on omitted contextual conditions that have to be present for a cause to work and/or other causes that also have to be present for the outcome to be produced (Beach & Pedersen, 2018).
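The case typology discussed in this section can be summarized as a simple decision rule over set memberships. The following sketch is purely illustrative—the function and its labels are shorthand of my own for the typology attributed in the text to Schneider and Rohlfing, using crisp (Boolean) rather than fuzzy set membership:

```python
def classify_case(cause, context, outcome):
    """Label a case for PT case selection (toy illustration).

    Inputs are crisp set memberships (True = the case is a member of
    the concept's set). Labels follow the typology as summarized in
    the text: typical, deviant coverage, deviant consistency, irrelevant.
    """
    if cause and context and outcome:
        return "typical"              # the process can be traced here
    if outcome and not (cause and context):
        return "deviant coverage"     # outcome without cause/context: probe for other causes
    if cause and context and not outcome:
        return "deviant consistency"  # process broke down: probe omitted conditions
    # remaining combinations lack the outcome and a complete cause-context
    # pairing, so there is no process to trace
    return "irrelevant"


print(classify_case(True, True, True))     # typical
print(classify_case(False, True, True))    # deviant coverage
print(classify_case(True, True, False))    # deviant consistency
print(classify_case(False, False, False))  # irrelevant
```

In real applications, membership is often a matter of degree (fuzzy sets), so the classification would be made against calibrated membership scores rather than Booleans.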

Can Mechanistic Generalizations Be Made, and If So, How?

Irrespective of which variant is used, PT case studies end up saying a lot about a little—or in many applications of PT, only one case. In most interpretivist PT applications, there is little or no ambition to generalize process theories because they are understood to be bound to a specific social setting. In contrast, many other PT scholars have ambitions of building more "general" process theories, understood as causal processes that can be expected to work in a similar manner in other cases.

In the literature, there are several different approaches to how mechanistic generalizations can be made. First, there are approaches that claim that when a typical case is “representative” of the population to which generalizations are intended to be made, generalizations can be made after studying a single case. When there is

causal homogeneity among cases that are of the same type and belong to the same term and causal heterogeneity across different types . . . findings from the study of one, say, typical case, travel to all other typical cases of the same term, but not beyond this term.

A similar approach is to claim that the selected typical case was "least likely" for the causal mechanism, and therefore if the mechanism works in the least-likely case, it should work everywhere (George & Bennett, 2005, pp. 109–125; Levy, 2008, p. 12). This is an even stronger form of one-to-many generalization because of how much knowledge about the underlying population—in terms of how the cause and context interact with other causes and contextual conditions in other cases—is required to claim that a given case is "least likely" relative to more "typical" cases in the population.

There are other PT scholars who contend that one-to-many strategies risk making generalizations based on hope instead of actual evidence from other cases (Khosrowi, 2019). The literature on causal processes frequently notes that the ways they unfold in a specific case are sensitive to the context that surrounds them (Falleti & Lynch, 2009; Lindquist & Wellstead, 2019, pp. 31–33). Contextual conditions are sometimes termed "scope" conditions in the literature, but the terms mean the same thing. Contextual conditions can be defined as all "relevant aspects of a setting (analytical, temporal, spatial, or institutional)" in which the analysis is embedded and which might have an impact on the constitutive parts of a process (Falleti & Lynch, 2009, p. 1152). Even when the same causes and outcome are present, different contextual conditions can create differences in the processes linking them together (Falleti & Lynch, 2009; Steel, 2008). This means that two cases that look causally homogeneous because they share similar causal conditions and an outcome might be heterogeneous at the process level because of contextual differences. Depending on the process, the divergence between two cases may involve the whole process or only one or several of its parts.

The solution to the problem of causal complexity and mechanisms is not simply to lift the level of theoretical abstraction of the process theory. Of course, logically, the more abstract a theorized mechanism (lower intension), the more cases can be found in which it is present (higher extension), and vice versa. This means there is an inherent trade-off between disaggregating causal process theories and how far they can potentially travel, illustrated in Figure 4. Very abstract one-liner process theories can in principle be present in many different cases because they say very little about what people are actually doing in a case. However, even simple process theories operate only within particular contexts. For instance, Haggard and Kaufman (2016) theorize that mass mobilization is a mechanism linking economic inequality and a democratic transition. However, as they detail in their book, mass mobilization should not be expected to be operative in every case in which economic inequality and democratic transitions took place because there are other pathways in other contexts.

Figure 4. Levels of theoretical abstraction and the external validity of mechanistic claims.

A stronger strategy for generalizing mechanistic claims is to adopt a multiple-case approach (Beach et al., 2019, pp. 133–144). Here, an initial PT case study is undertaken to build a process theory at a medium level of abstraction (relatively simple, midrange, or quite detailed). Further PT case studies are then undertaken on cases selected strategically within a population of potential cases in which the process might work, based on their scores on conditions that might impact how processes work. These follow-up, generalizing PT case studies can operate with a simpler, more abstract version of the process theory in order to make the analysis more feasible (requiring fewer resources). Depending on how many cases are traced, and how contextually sensitive the given process is, process-level generalizations can be made which claim that the process works in at least a roughly analogous manner across a set of cases.

Challenges When Using Process Tracing

Engaging in “good” process tracing (PT) is difficult, but the same can be said of any rigorous social science method. However, potential users of PT face a range of challenges that are relatively unique to PT. First, in contrast to methods such as experimental designs, in PT there is a large variety of different understandings of the method itself and what best practices should be followed. This article has argued that as long as there is alignment between the underlying ontological assumptions and epistemological approach, these differences in PT methods should be viewed as a strength because it allows scholars to choose the variant that best fits the research question and context.

Second, irrespective of which variant is used, PT case studies say a lot about a little—or in many applications of PT, only one case. This can make it difficult to get past peer reviewers, who often want to hear about the “general.”

Finally, scholars engaging in PT face challenges when trying to communicate their findings to scholars from other traditions (Beach & Kaas, 2020). For instance, when working with a disaggregated tracing variant of PT, claims deal with causal linkages between causes and outcomes within a single case (Beach & Kaas, 2020; Clarke et al., 2014). This makes it difficult to communicate with scholars who have been trained to assess causal effects (especially those within the so-called potential outcomes framework) because these individuals might have difficulty accepting that fundamentally different types of claims are being made. For instance, a common review comment is that there is no "variation" in the design, even though "variation" is not relevant to evidencing mechanisms when they are understood as productive linkages. Similar challenges face scholars doing interpretivist PT, with reviewers who think in neo-positivist terms finding it very difficult to understand how an interpretivist PT case study taps into the lifeworld of social actors. Furthermore, the process theories used in interpretivist PT studies are often more fluid and sensitive to specific social contexts, making it difficult to communicate with PT scholars who have the ambition of producing more generalizable process theories that can be used across many cases.

There is not an easy answer to these questions, beyond reiterating a plea for methodological pluralism in which scholars appreciate different approaches to learning about the social world that they all care so much about.


  • Andersen, H. (2012). The case for regularity in mechanistic causal explanation. Synthese, 189, 415–432.
  • Aviles, N. B., & Reed, I. A. (2017). Ratio via machina: Three standards of mechanistic explanation in sociology. Sociological Methods & Research, 46(4), 715–738.
  • Bakke, K. M. (2013). Copying and learning from outsiders? Assessing diffusion from transnational insurgents in the Chechen wars. In J. T. Checkel (Ed.), Transnational dynamics of civil war (pp. 31–62). Cambridge University Press.
  • Beach, D. (2021). Evidential pluralism and evidence of mechanisms in the social sciences. Synthese, 199, 8899–8919.
  • Beach, D., & Kaas, J. G. (2020). The great divides: Incommensurability, the impossibility of mixed-methodology, and what to do about it. International Studies Review, 22(2), 214–235.
  • Beach, D., & Pedersen, R. B. (2016). Causal case studies. University of Michigan Press.
  • Beach, D., & Pedersen, R. B. (2018). Selecting appropriate cases when tracing causal mechanisms. Sociological Methods & Research, 47(4), 837–871.
  • Beach, D., & Pedersen, R. B. (2019). Process tracing methods. University of Michigan Press.
  • Beach, D., & Pedersen, R. B., with Siewert, M. (2019). Case selection and nesting of process tracing. In Process tracing methods (pp. 89–128). University of Michigan Press.
  • Bennett, A., & Checkel, J. (2014). Process tracing: From metaphor to analytic tool. Cambridge University Press.
  • Bennett, A., Charman, A. E., & Fairfield, T. (2022). Understanding Bayesianism: Fundamentals for process tracers. Political Analysis, 30(2), 298–305.
  • Bevir, M., & Blakely, J. (2018). Interpretive social science: An anti-naturalist approach. Oxford University Press.
  • Brady, H. E. (2008). Causation and explanation in social science. In J. M. Box-Steffensmeier, H. E. Brady, & D. Collier (Eds.), The Oxford handbook of political methodology (pp. 217–270). Oxford University Press.
  • Brady, H. E., & Collier, D. (Eds.). (2011). Rethinking social inquiry: Diverse tools, shared standards (2nd ed., pp. 161–200). Rowman & Littlefield.
  • Cartwright, N. (2011). Predicting “it will work for us”: (Way) beyond statistics. In P. McKay Illari, F. Russo, & J. Williamson (Eds.), Causality in the sciences (pp. 750–768). Oxford University Press.
  • Cartwright, N. (2021). Rigour versus the need for evidential diversity. Synthese, 199(4–5), 13095–13119.
  • Clarke, B., Gillies, D., Illari, P., Russo, F., & Williamson, J. (2014). Mechanisms and the evidence hierarchy. Topoi, 33(2), 339–360.
  • Closa, C., & Palestini, S. (2018). Tutelage and regime survival in regional organizations’ democracy protection: The case of MERCOSUR and UNASUR. World Politics, 70(3), 443–476.
  • Craver, C. F., & Darden, L. (2013). In search of mechanisms. University of Chicago Press.
  • Danermark, B., Ekström, M., & Karlsson, J. C. (2019). Explaining society: Critical realism in the social sciences (2nd ed.). Routledge.
  • Elster, J. (1998). A plea for mechanisms. In P. Hedström & R. Swedberg (Eds.), Social mechanisms (pp. 45–73). Cambridge University Press.
  • Fairfield, T., & Charman, A. E. (2017). Explicit Bayesian analysis for process tracing: Guidelines, opportunities, and caveats. Political Analysis, 25(3), 363–380.
  • Falleti, T. G., & Lynch, J. F. (2009). Context and causal mechanisms in political analysis. Comparative Political Studies, 42, 1143–1166.
  • Fujii, L. A. (2018). Interviewing in social science research: A relational approach. Routledge.
  • Geertz, C. (1974). “From the native’s point of view”: On the nature of anthropological understanding. Bulletin of the American Academy of Arts and Sciences, 28(1), 26–45.
  • George, A. L., & Bennett, A. (2005). Case studies and theory development in the social sciences. MIT Press.
  • Gerring, J. (2010). Causal mechanisms: Yes, but . . . Comparative Political Studies, 43(11), 1499–1526.
  • Goertz, G. (2017). Multimethod research, causal mechanisms, and case studies: An integrated approach. Princeton University Press.
  • Goertz, G., & Mahoney, J. (2012). A tale of two cultures: Qualitative and quantitative research in the social sciences. Princeton University Press.
  • Guzzini, S. (2017). Militarizing politics, essentializing identities: Interpretivist process tracing and the power of geopolitics. Cooperation and Conflict, 52(3), 423–445.
  • Haggard, S., & Kaufman, R. R. (2016). Dictators and democrats: Masses, elites, and regime change. Princeton University Press.
  • Hall, P. A. (2003). Aligning ontology and methodology in comparative politics. In J. Mahoney, & D. Rueschemeyer (Eds.), Comparative historical analysis in the social sciences (pp. 373–406). Cambridge University Press.
  • Hedström, P., & Ylikoski, P. (2010). Causal mechanisms in the social sciences. Annual Review of Sociology, 36, 49–67.
  • Imai, K., Keele, L., Tingley, D., & Yamamoto, T. (2011). Unpacking the black box of causality: Learning about causal mechanisms from experimental and observational studies. American Political Science Review, 105(4), 765–789.
  • Johais, E., Bayer, M., & Lambach, D. (2020). How do states collapse? Towards a model of causal mechanisms. Global Change, Peace & Security, 32(2), 179–197.
  • Khosrowi, D. (2019). Extrapolation of causal effects—Hopes, assumptions, and the extrapolator’s circle. Journal of Economic Methodology, 26(1), 45–58.
  • King, G., Keohane, R. O., & Verba, S. (1994). Designing social inquiry: Scientific inference in qualitative research. Princeton University Press.
  • Levy, J. (2008). Case studies: Types, designs, and logics of inference. Conflict Management and Peace Science, 25(1), 1–18.
  • Levy, J. (2015). Counterfactuals, causal inference, and historical analysis. Security Studies, 24(3), 378–402.
  • Lindquist, E., & Wellstead, A. (2019). Policy process research and the causal mechanism movement: Reinvigorating the field? In G. Capano, M. Howlett, M. Ramesh, & A. Virani (Eds.), Making policies work (pp. 14–38). Edward Elgar.
  • Löblová, O. (2018). When epistemic communities fail: Exploring the mechanism of policy influence. Policy Studies Journal, 46(1), 160–189.
  • Machamer, P., Darden, L., & Craver, C. F. (2000). Thinking about mechanisms. Philosophy of Science, 67(1), 1–25.
  • Mahoney, J. (2015). Process tracing and historical explanation. Security Studies, 24(2), 200–218.
  • Mahoney, J., & Barrenechea, R. (2019). The logic of counterfactual analysis in case-study explanation. British Journal of Sociology, 70(1), 306–338.
  • Mahoney, J., & Goertz, G. (2004). The possibility principle: Choosing negative cases in comparative research. American Political Science Review, 98(4), 653–669.
  • Morgan, S. L., & Winship, C. (2007). Counterfactuals and causal inference: Methods and principles for social research. Cambridge University Press.
  • Norman, L. (2016). The mechanisms of institutional conflict in the European Union. Routledge.
  • Norman, L. (2021). Rethinking causal explanation in interpretive international studies. European Journal of International Relations, 27(3), 936–959.
  • O’Mahoney, J. (2017). Making the real: Rhetorical adduction and the Bangladesh Liberation War. International Organization, 71(2), 317–348.
  • Oppermann, K., & Spencer, A. (2016). Telling stories of failure: Narrative constructions of foreign policy fiascos. Journal of European Public Policy, 23(5), 685–701.
  • Pouliot, V. (2014). Practice tracing. In A. Bennett & J. Checkel (Eds.), Process tracing: From metaphor to analytic tool (pp. 237–259). Cambridge University Press.
  • Rohlfing, I., & Zuber, C. (2021). Check your truth conditions! Clarifying the relationship between theories of causation and social science methods for causal inference. Sociological Methods & Research, 50(4), 1623–1659.
  • Runhardt, R. W. (2015). Evidence for causal mechanisms in social science: Recommendations from Woodward’s manipulability theory of causation. Philosophy of Science, 82(5), 1296–1307.
  • Runhardt, R. W. (2021). Evidential pluralism and epistemic reliability in political science: Deciphering contradictions between process tracing methodologies. Philosophy of the Social Sciences, 51(4), 425–442.
  • Sawyer, R. K. (2004). The mechanisms of emergence. Philosophy of the Social Sciences, 34(2), 260–282.
  • Sayer, A. (2000). Realism and social science. SAGE.
  • Schneider, C., & Rohlfing, I. (2013). Combining QCA and process tracing in set-theoretical multi-method research. Sociological Methods & Research, 42(4), 559–597.
  • Schneider, C. Q., & Rohlfing, I. (2016). Case studies nested in fuzzy-set QCA on sufficiency: Formalizing case selection and causal inference. Sociological Methods & Research, 45(3), 526–568.
  • Sewell, W. H. (1992). A theory of structure: Duality, agency, and transformation. American Journal of Sociology, 98(1), 1–29.
  • Steel, D. (2008). Across the boundaries: Extrapolation in biology and social science. Oxford University Press.
  • Tannenwald, N. (1999). The nuclear taboo: The United States and the normative basis of nuclear non-use. International Organization, 53(3), 433–468.
  • Van Evera, S. (1997). Guide to methods for students of political science. Cornell University Press.
  • Widmaier, W. W. (2007). Constructing foreign policy crises: Interpretive leadership in the Cold War and War on Terrorism. International Studies Quarterly, 51(4), 779–794.
  • Winward, M. (2021). Intelligence capacity and mass violence: Evidence from Indonesia. Comparative Political Studies, 54(3–4), 553–584.
  • Zaks, S. (2021a). Updating Bayesian(s): A critical evaluation of Bayesian process tracing. Political Analysis, 29(1), 58–74.


  • 1. Note the terms causal process and causal mechanism are used as synonyms throughout this article.

  • 2. A milder version of monism is to claim that although different understandings of causation are valid, there is one variant that has “most value for the social sciences” (Rohlfing & Zuber, 2021, p. 1626).

  • 3. The level of theoretical abstraction should not be confused with actual empirical evidence. When one lowers the level of theoretical abstraction, one is unpacking how the process works in terms of theorized interactions between social actors. The actual evidence of the operation of a mechanism, or parts thereof, will always be case-specific.