
Anthropology and Implementation Science

  • Elissa Z. Faro, University of Iowa
  • Suzanne Heurtin-Roberts, University of Maryland
  • and Heather Schacht Reisinger, University of Iowa

Summary

Basic science and health services research have produced a substantial body of knowledge with significant potential to improve human health and well-being. Much of that knowledge is published, yet read only by other researchers. Alternatively, this research becomes “evidence-based practice” (EBP): knowledge obtained under specific, controlled conditions. The world where humans live their everyday lives, however, tends to be complex and “messy.” Thus, these EBPs are frequently ineffective when employed in the scientifically uncontrolled world.

Implementation Science (IS) is a fast-growing field intended to remedy this situation. IS was established to study the most effective strategies to integrate EBPs into public and community health and healthcare delivery. IS asks whether an intervention can be effective in a specific local context; that is, under what conditions and in what contexts can any change-oriented action be effective in the real world? IS will continue to grow in visibility and significance as governments and funding agencies seek to ensure their investments in research are reaching their intended audiences and having demonstrable impact.

Anthropology has contributed significantly to IS, yet it can contribute so much more. Anthropology is well equipped to answer many of the questions posed by IS: its theory and methods allow researchers to understand and broker both emic and etic perspectives, as well as represent the richness, fluidity, and complexity of context. Both anthropology and IS recognize the importance of context and locality, are real-world-oriented (vs. controlled research environments), and embrace complexity and nonlinearity. They are both comfortable with the emergent nature of research-produced knowledge and use both qualitative and quantitative methods.

Beyond these congruencies in perspectives and approaches, the rationale for having more anthropology in IS rests on more than good fit. Anthropology emphasizes critical analysis of power and unequal power structures; while such phenomena are sometimes included in IS, they are not frequently critiqued. Anthropology can furnish a questioning, critical perspective of the object of study and how it is studied, a perspective that is lacking in much IS work. Indeed, this approach is something that anthropology does best, and it is integral to anthropology’s conceptual orientation. IS is one of the many spaces in which anthropologists are demonstrating relevance and having an impact.

Subjects

  • Applied Anthropology

What Is Implementation Science?

Definition(s)

Implementation Science (IS) is a relatively new but rapidly growing field intended to put research findings into practice. IS was established to study the most effective strategies to integrate evidence-based interventions into public and community health and healthcare delivery. IS asks whether an intervention can be delivered effectively in a specific local context, that is, under what conditions and in what contexts can any change-oriented action be effective?


Several definitions of IS are in use, with different fields adopting their own approaches, methods, and tools. One widely cited definition can be found in the introductory editorial to the first issue of the journal Implementation Science:

Implementation research is the scientific study of methods to promote the systematic uptake of research findings and other evidence-based practices into routine practice, and, hence, to improve the quality and effectiveness of health services and care. This relatively new field includes the study of influences on healthcare professional and organisational behavior.

This definition characterizes much of implementation research, yet it has a narrow focus on healthcare alone and does not include public health.

Other fields define IS broadly to include health research in general (Peters et al. 2013) and policy (Nilsen et al. 2013). This emphasis is reflected in the definition used by the National Cancer Institute, which includes public health and policy:

Implementation science (IS) is the study of methods to promote the adoption and integration of evidence-based practices, interventions, and policies into routine health care and public health settings to improve our impact on population health.

This definition, with slight variations in wording, is widely used throughout the US National Institutes of Health (NIH). These two definitions refer to methods employed to utilize research findings and put evidence into practice to improve human health.

Yet, not all implementation research occurs in health-related areas. Indeed, the field of education is engaged with IS, especially in school psychology (Durlak 2015; McHugh and Barlow 2010). Referring to psychology in education, Kelly and Perkins define IS as “the study of the processes and methods involved in the systematic transfer and uptake of evidence-based practices into routine, everyday practice” (Kelly and Perkins 2012, 4). Similarly, Nordstrum and coauthors view IS as a research approach used to enhance the reach, adoption, use, and maintenance of innovations and discoveries in diverse educational contexts (Nordstrum, LeMahieu, and Berrena 2017).

Another field that engages IS is social work, which employs a broad, inclusive definition:

Implementation science is the scientific study of methods that examines the factors, processes, and strategies at multiple levels (e.g., clients, providers, organizations, communities) of a system of care that influence the uptake, use, and ultimately the sustainability of empirically-supported interventions, services and policies into practice in community settings.

Knowledge translation is a closely related field with a slightly different emphasis on translation. Developed and widely used in Canada, it is defined by the Canadian Institutes of Health Research as “a dynamic and iterative process that includes synthesis, dissemination, exchange and ethically-sound application of knowledge to improve the health of Canadians, provide more effective health services and products and strengthen the health care system” (Canadian Institutes of Health Research 2020).

All these definitions have several elements in common:

1. the requirement of science-based methods or strategies; some systematic approach to the task;
2. putting research findings or evidence to use, into some action or practice, which may be service delivery, a program, or a policy; and
3. that this use is routinized and generalized within many contexts; the implementation of some “evidence-based practice” (EBP) should result in improvement, whether in physical health, mental health, service provision, education, or policy.

Why Do We Need Implementation Science?

Development of the Field

Research has produced a substantial body of knowledge that can improve human health and well-being. Much of that knowledge is published, yet read only by other researchers. Alternatively, research findings may be thought of as EBPs: interventions or innovations designed to improve health and healthcare universally. Most of this evidence, however, is knowledge obtained under specific, controlled research conditions designed to prove validity and replicability (e.g., randomized controlled trials). Yet, the world in which humans live their everyday lives, and where care is delivered and received, tends to be complex and “messy.” Thus, when these EBPs are employed in the scientifically uncontrolled world, they are frequently ineffective.

It was generally assumed by the scientific research community that basic clinical research, designed with a focus on clinical and health outcomes but without considering application in real-world settings, would eventually result in successful implementation of EBPs applied in practice (Douglas and Burshnic 2019; Goldstein and Olswang 2017). Known as the “traditional research pipeline” (Westfall, Mold, and Fagnan 2007; see Figure 1), this process of translating basic clinical research findings into practice was assumed to happen on its own and was not well studied. Establishing an innovation’s effectiveness, that is, its performance in “real-world” rather than controlled research settings (Singal, Higgins, and Waljee 2014), does not guarantee its uptake into routine usage. Uptake into everyday practice is further delayed by the considerable time it takes for research findings to be published; situations and contexts may have changed by the time of publication, and the findings may no longer be relevant. Classic studies indicate that it takes 17–20 years to get clinical innovations into practice; moreover, only 14% of clinical innovations make it into general usage (Balas and Boren 2000; Morris, Wooding, and Grant 2011; Mosteller 1981).

Figure 1. A diagram outlining the traditional research pipeline.

The seeds of understanding this gap between what is known to be best practice and what is done in practice in the real world were planted in the last century. Rogers’s Diffusion of Innovations and similar work in other fields sought to understand the social context of the spread of knowledge and innovations (Bauer and Kirchner 2020; Dearing and Cox 2018; Rogers 2003). As the field of IS has begun to formalize, developing and understanding the processes of uptake by which interventions become routine practice have become a primary concern, with a particular focus on diverse contexts across healthcare, policy, and public health (Bauer et al. 2015). Given the focus of IS on local contextual factors that affect uptake of EBPs, engaging stakeholders is at the core of implementation research. This emphasis also makes IS poised to innovate research that improves equity and reduces disparities; IS can leverage postcolonial, reflexivity, structural violence, policy and governance, and social justice theories “to achieve health equity . . . by a critical theoretical foundation that evaluates structural inequality, power, and reflexivity” (Snell-Rood et al. 2021, 1).

Numerous attributes make IS unique among other types of research in healthcare:

1. IS differs from clinical research because it explicitly considers context, rather than controlling it (efficacy, or research outcomes under ideal circumstances) or tolerating it (effectiveness, or outcomes in real-world settings);
2. clinical research contrasts health effects of interventions with those of comparison or control groups, whereas IS seeks to evaluate strategies to improve uptake of an EBP;
3. IS differs from quality improvement (QI) in that QI usually begins with a specific problem, rather than an EBP, and focuses on one site (e.g., hospital, clinic); and
4. dissemination research focuses on the spread of ideas and technologies using communication and education strategies, in contrast to IS’s focus on the strategies to incorporate interventions specifically designed to bring about change in practice in a specified context (Bauer and Kirchner 2020; Dearing and Cox 2018).

Comparing Implementation Science and Anthropology

Why is IS a field in which anthropologists might work and to which they might contribute, given its origins and diverse definitions? Table 1 compares perspectives and terms central to each field. Both fields start from the perspective that studying a particular phenomenon—in this case, implementation of practices that improve health—requires a holistic or multi-level approach to examine a research question comprehensively. Cultural or social anthropology is often known for rich integration of qualitative methods in its ethnographies; from the beginning, ethnographies have included quantitative representations of the social world to examine it holistically. IS also holds that qualitative and quantitative methodologies are needed to examine the settings in which the implementation is being studied and the processes associated with it (Hamilton and Finley 2019; Palinkas 2014; Palinkas et al. 2011).

Table 1. Comparison of Perspectives in Anthropology and Implementation Science

Anthropology             | Implementation Science
-------------------------|---------------------------
Holistic                 | Multilevel approach
Qualitative/quantitative | Qualitative/quantitative
Contextual               | Contextual
Local perspective        | Stakeholder’s perspective
Real-world oriented      | Real-world oriented
Flexible, iterative      | Flexible, iterative
Emergent                 | Emergent
Nonlinear                | Nonlinear
Complexity, ambiguity    | Complexity, ambiguity

The remaining perspectives in Table 1 relate to each field’s orientation to how the methodologies are enacted in practice, including the focus on context and on the real-world (vs. controlled research context) experience of local stakeholders (i.e., “a representation and understanding of a researcher or research subject’s human experiences, choices, and options and how those factors influence one’s perception of knowledge”; Boylorn 2008, 490). Both anthropology and IS hold that any type of research in the field requires flexibility and openness to the emergent and the ambiguous, because the contexts in which the research is conducted are nonlinear and complex.

For anthropologists, none of these perspectives comes as a surprise, but IS also has one foot in biomedical science, where the goal is often to narrow, simplify, and control the setting and process to make definitive claims. Anthropologists’ understanding of perspectives about real-world contexts and methods allows them to work well with implementation researchers who are also attempting to push the boundaries of biomedical science to understand why their findings are not reaching the clinical office, bedside, or local communities.

Four Essential Questions that Structure Implementation Research

To assist in training anthropologists and others new to IS, we developed a set of questions to orient their learning process and provide a foundation for navigating the field. Four questions—along with their simplified forms—serve as guideposts for understanding the field of IS:

1. What is the gap between EBP and practice? (What needs to change?)
2. What conceptual model describes how change is likely to occur? (How/why will this change occur?)
3. What implementation strategies will facilitate that change? (What will create the change?)
4. What outcomes should be measured to evaluate whether the change occurred in practice and in clinical outcomes? (What changed?)

The first question of “what needs to change” simply starts the orientation, since key to IS is that a change needs to occur and IS is needed to understand the most effective ways of creating that change. However, how and why the change will occur, what will create the change, and documenting the actual change demarcate three areas critical to defining the field: (a) theories, models, and frameworks; (b) implementation strategies; and (c) implementation outcomes.

Theories, Models, and Frameworks in Implementation Science

A theory is an abstract thought or system of ideas that intends to explain something about a topic of interest (Turner and Turner 1978). In the social sciences, theory can be defined broadly as “an ordered set of assertions about a generic behaviour or structure assumed to hold throughout a significantly broad range of specific instances” (Kislov et al. 2019, 104; Weick 1989). Anthropological theory is generally concerned with providing a frame by which to understand interactions that differ across contexts and generations (Ellen 2010). Similarly, in IS, theory can be understood as any proposed connections between meaningful relationships and constructs (variables) or how a mechanism or construct may change the behavior of another construct or outcome (Bauer et al. 2015; Davidoff et al. 2015; Foy et al. 2011). IS theories are generalized theories with broad applicability of commonly used principles (Damschroder 2020). Anthropology also represents a particular worldview or way of understanding that foregrounds cultural systems, a holistic perspective, and understanding beliefs, values, and behaviors from an insider’s perspective.

Table 2 represents one way to clarify the use of ideas that are frequently garbled or misused in IS. It is adapted from the work of Kislov et al. (2019), which not only argues for connecting and utilizing theory in IS explicitly, but also suggests how diverse theories from diverse traditions can address issues like equity and diversity (Cornelissen 2017; Kislov et al. 2019; Patton 2014). IS uses theories, models, and frameworks to structure and frame research; the three are used together so often that publications simply use the abbreviation “TMF” (Nilsen 2015; Rycroft-Malone and Bucknall 2010). TMF share a number of commonalities: they are ways of organizing complex systems, and they provide a structure for how the components of those systems interact. In IS, and more broadly in health services research, the terms theory, model, and framework are often used interchangeably.

Table 2. Grand-Theoretical Traditions and Their Potential Relevance to IS

Perspective | Disciplinary Roots | Central Questions Relevant to Implementation Science
Ethnography | Anthropology | What is the culture of a certain group of people (e.g., an organization) involved in implementation? How does it manifest in the process of implementation?
Critical realism | Philosophy, social sciences, and evaluation | What are plausible explanations for verifiable patterns of implementation?
Constructivism | Sociology | What are the implementation actors’ reported perceptions, explanations, beliefs, and worldviews? What consequences do these have on implementation?
Phenomenology | Philosophy | What is the meaning, structure, and essence of the lived experience of implementation for a certain group of people?
Symbolic interactionism | Social psychology | What common set of symbols and understandings has emerged to give meaning to people’s interactions in the process of implementation?
Semiotics | Linguistics | How do signs (i.e., words and symbols) carry and convey meaning in particular implementation contexts?
Narrative analysis | Social sciences, literary criticism | What do stories of implementation reveal about implementation actors and contexts?
Complexity theory | Theoretical physics, natural sciences | What is the underlying order of any disorderly implementation phenomena?
Critical theory | Political philosophy | How do the experiences of inequality, injustice, and subjugation shape implementation?
Feminist inquiry | Interdisciplinary | How does the lens of gender shape and affect our understandings and actions in the process of implementation?

Source: From Kislov et al. (2019). Borrowed with permission of the publisher.

IS TMF help organize researchers’ and implementers’ thoughts about the intervention and the context in which the intervention is being implemented. They help researchers formulate research questions to investigate relationships among different factors at different levels of the system that may impact implementation. They also help researchers choose methods to capture the appropriate components of those relationships.

In IS, a theory usually implies some predictive capacity and attempts to explain the causal mechanisms of implementation. A theory may be operationalized within a model (Bauer et al. 2015). Models in IS are most often used to describe and/or guide the process of translating research into practice (Nilsen 2015). Models are a way to organize a complex system simply in order to describe components of interest and identify variables that may mediate or moderate hypothesized changes to that system. Importantly, models encapsulate “theories of change” or “theories of explanation” (Damschroder 2020). Frameworks are often descriptive and provide a broad set of constructs that organize concepts and data without specifying causal relationships. They may also provide a prescriptive series of steps summarizing how implementation ideally should be planned and carried out (Meyers, Durlak, and Wandersman 2012).

Commonalities of Selected Theories, Models, and Frameworks

For all TMF in IS, context is critical; it is usually considered the most important factor in whether, how, or why an EBP is implemented successfully, and it is central to understanding and accounting for variations in study outcomes (Barker 2014; Nilsen and Bernhardsson 2019). Stakeholder engagement is also a core component of TMF, since the perspective of stakeholders is central to understanding context. Research has shown that stakeholder engagement is critical in ensuring successful implementation and sustainment by increasing acceptability, efficacy, cultural and contextual sensitivity, and capacity for wider-scale use (Kelly et al. 2000; McKay et al. 2020; Mellins et al. 2014). For example, McKay and team highlight the importance of engaging local communities and governments in their work to scale up and sustain an EBP for families of children with disruptive behaviors in Uganda (McKay et al. 2020). Time, effort, alignment of goals and efforts, and systematic approaches were required to develop and sustain strategies to ensure that youth behavioral health outcomes were improved (McKay et al. 2020).

IS TMF help researchers structure and incorporate stakeholder engagement at every stage of implementation, ranging from assessing and improving the acceptability of innovations to sustaining implemented interventions (Lobb and Colditz 2013; McKay and Paikoff 2012; Salloum et al. 2017). Another key commonality across implementation TMF is that they are structured to design research projects for implementation and dissemination from the outset, rather than waiting until the end of the project to consider what might happen with the results or how the intervention might assist the stakeholders (Brownson 2017). Finally, implementation TMF are focused on helping researchers and other stakeholders achieve balance between fidelity to an EBP and adaptation to local settings (Cohen et al. 2008; Escoffery et al. 2018).

Categorization of Theories, Models, and Frameworks

As IS has matured into its own field, there has been an explosion of TMF, enough that they have been categorized further based on their intended use. Nilsen conducted a narrative review in 2015 that identified three overarching aims of the use of TMF in IS:

1. describing and/or guiding the process of translating research into practice;
2. understanding and/or explaining what influences implementation outcomes; and
3. evaluating implementation.

Based on these aims, he proposed a categorization of TMF into what are widely used now as their three main categories: process, determinant, and evaluation (Damschroder 2020; Nilsen 2015). Researchers have developed other ways to categorize TMF, such as “time-based” and “component-based,” but Nilsen’s categorization continues to be the most widely used (Villalobos Dintrans et al. 2019).

Process Models

Process models are used to describe and guide the process of translating research into practice, providing practical guidance for planning and executing implementation efforts from the development of innovations to dissemination of successful implementation results (Damschroder 2020; Nilsen 2015). These models, designed to structure planning, provide a way to think about the steps or phases of implementation linearly, although they can also be used for iterative and ongoing relationships among the steps. Examples of process models include dynamic sustainability (Chambers, Glasgow, and Stange 2013), the Exploration, Preparation, Implementation, Sustainment (EPIS) model (Moullin et al. 2019), dynamic adaptation (Fleurey et al. 2009), and the quality implementation framework (Meyers, Durlak, and Wandersman 2012). The Iowa Implementation for Sustainability Framework, an updated process framework, focuses on actionable, distinct strategies (e.g., action plan, interprofessional discussion, performance evaluation) organized into a structure that provides guidance on effective and sustainable implementation planning (Cullen et al. 2022).

Determinant Frameworks

Nilsen’s determinant frameworks are those that specify constructs that may influence processes or explain the outcomes of implementations, such as behavior changes in healthcare professionals (e.g., incorporation of clinical decision support [Chauhan et al. 2017], uptake of mHealth (mobile health) interventions [Virtanen et al. 2021], or professional adherence to clinical guidelines [Nilsen 2015]). In a 2020 article designed to help reduce confusion around TMF in IS, Damschroder explains that “determinant frameworks can help to define both dependent and independent variables as well as identify moderators that may affect or confound the relationships” influencing implementation outcomes (Damschroder 2020, 2).

Determinant frameworks often focus on identifying and characterizing the local context to identify barriers and facilitators to implementation (Nilsen and Bernhardsson 2019). For example, the Tailored Implementation for Chronic Diseases (TICD) checklist, which was produced by a scoping review of determinant frameworks, identifies seven domains: guideline factors, individual health professional factors, patient factors, professional interactions, incentives and resources, capacity for organizational change, and social, political, and legal factors (Flottorp et al. 2013). Another determinant framework, the Consolidated Framework for Implementation Research (CFIR) (Damschroder et al. 2009), classifies 39 implementation constructs across five domains (i.e., Inner Setting, Outer Setting, Characteristics of the Intervention, Individuals’ Characteristics, and Process) considered to be influential moderators or mediators of implementation outcomes. For example, researchers in Mali used CFIR to guide their mixed methods data collection and analysis to identify leadership and management capacity factors important to the implementation of performance-based financing at ten district hospitals in the Koulikoro region (Zitti et al. 2019).

CFIR provides a structure by which to systematically assess the context within which implementation occurs (Tabak et al. 2012). While CFIR is likely the most widely referenced framework, a systematic review identified seventeen unique determinant frameworks (Nilsen and Bernhardsson 2019). Other commonly used frameworks include the Theoretical Domains Framework (Michie et al. 2005), the Practical, Robust Implementation and Sustainability Model (PRISM) (Feldstein and Glasgow 2008), and the Promoting Action on Research Implementation in Health Services (PARIHS) framework (Rycroft-Malone 2004).
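To make this concrete, here is a minimal Python sketch of how a team might organize a CFIR-informed qualitative codebook. The domain names follow Damschroder et al. (2009); the constructs listed under each domain are an illustrative subset of the 39, and the tagging helper is hypothetical rather than part of any published CFIR tooling.

```python
# Illustrative subset of a CFIR-based codebook: five domains, a few
# constructs each (the full framework specifies 39 constructs).
CFIR_CODEBOOK = {
    "Intervention Characteristics": ["Evidence Strength", "Adaptability", "Complexity"],
    "Outer Setting": ["Patient Needs", "External Policies and Incentives"],
    "Inner Setting": ["Implementation Climate", "Leadership Engagement"],
    "Characteristics of Individuals": ["Knowledge and Beliefs", "Self-Efficacy"],
    "Process": ["Planning", "Engaging Champions", "Reflecting and Evaluating"],
}

def tag_excerpt(excerpt: str, domain: str, construct: str) -> dict:
    """Attach a CFIR domain/construct code to an interview excerpt."""
    if construct not in CFIR_CODEBOOK.get(domain, []):
        raise ValueError(f"{construct!r} is not in the {domain!r} domain")
    return {"excerpt": excerpt, "domain": domain, "construct": construct}

# Example: coding a barrier reported by a clinic manager.
coded = tag_excerpt(
    "We never had protected time to learn the new workflow.",
    "Inner Setting",
    "Implementation Climate",
)
print(coded)
```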

Evaluation Frameworks

The third major category of TMF identified by Nilsen, and commonly used, involves evaluation frameworks. These frameworks usually provide domains relevant to the implementation of an intervention that examine the process and outcomes of the implementation (Damschroder 2020; Nilsen 2015). One of the most commonly used evaluation frameworks is Reach, Effectiveness, Adoption, Implementation, Maintenance (RE-AIM; Glasgow et al. 1999), which in the two decades following its development was cited by more than 2,800 publications (Glasgow et al. 2019). Its popularity is due, in part, to its simplicity and the structure it provides for applying both qualitative and quantitative methods to understand implementation outcomes (Glasgow et al. 2020). Holtrop and colleagues provide detailed guidance on how qualitative methods can be used to “understand how and why results on various individual RE-AIM dimensions, or patterns of results across dimensions (e.g., high reach and low effectiveness) occur” (Holtrop et al. 2021, 177). Researchers have used RE-AIM to identify and inform evaluation metrics that ensure the implementation of an EBP (in this case, a specialized medical-dental community clinic serving adults with autism and intellectual disabilities) is patient-centered, effective, and sustainable (Lai, Klag, and Shikako-Thomas 2019). RE-AIM, as well as other TMF including CFIR (Allen et al. 2021), the Theoretical Domains Framework (Etherington et al. 2020), and Proctor et al.’s measurement framework (Etherington et al. 2020), have been updated and extended to incorporate and promote health equity in the evaluation of implementation (Shelton, Chambers, and Glasgow 2020).
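As a concrete illustration, the short Python sketch below computes two commonly quantified RE-AIM dimensions, reach and adoption, as simple proportions. The counts are invented for illustration; a real evaluation would also assess effectiveness, implementation (e.g., fidelity), and maintenance, often with qualitative data.

```python
def proportion(numerator: int, denominator: int) -> float:
    """Return a proportion, guarding against an empty denominator."""
    return numerator / denominator if denominator else 0.0

# Reach: fraction of eligible patients who actually received the EBP.
reach = proportion(numerator=120, denominator=400)   # 0.30

# Adoption: fraction of approached clinics that delivered the EBP at all.
adoption = proportion(numerator=6, denominator=10)   # 0.60

print(f"Reach: {reach:.0%}, Adoption: {adoption:.0%}")
```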

Other well-known and often-used evaluation frameworks include Predisposing, Reinforcing and Enabling Constructs in Educational Diagnosis and Evaluation–Policy, Regulatory, and Organizational Constructs in Educational and Environmental Development (PRECEDE–PROCEED; Green and Kreuter 2004) and the framework developed by Proctor and colleagues. Proctor’s framework proposes eight implementation outcomes for potential evaluation: acceptability, adoption (also referred to as uptake), appropriateness, costs, feasibility, fidelity, penetration (i.e., integration of a practice within a specific setting), and sustainability (also referred to as maintenance or institutionalization) (Proctor et al. 2011).

Nilsen’s work categorizing TMF highlighted the need for guidance on how researchers can choose the TMF most appropriate for their implementation work. Online tools can help researchers and implementation teams choose a framework based on their specific context, research question, and intervention. For example, Birken and her team at the University of North Carolina at Chapel Hill developed an online tool, the Theory, Model, and Framework Comparison and Selection Tool (T-CaST), to assess the utilization of one or more TMF in a particular project (Birken et al. 2018). Another online tool, developed by the University of Washington, is structured by a schema for organizing and selecting TMF based on three variables: (a) construct flexibility, (b) dissemination and/or implementation activities, and (c) socio-ecological framework (Tabak et al. 2012).
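The following Python sketch illustrates the kind of filtering such a selection schema implies. The catalog entries and attribute values are illustrative placeholders, not the actual classifications used by T-CaST or the University of Washington tool.

```python
# Hypothetical catalog of TMF tagged by the three schema variables.
CATALOG = [
    {"name": "Framework A", "flexibility": "broad",
     "activity": "implementation", "levels": {"organization", "individual"}},
    {"name": "Framework B", "flexibility": "operationalized",
     "activity": "both", "levels": {"organization", "individual", "community"}},
]

def select_tmf(activity: str, level: str) -> list[str]:
    """Return names of TMF matching the desired activity and socio-ecological level."""
    return [t["name"] for t in CATALOG
            if t["activity"] in (activity, "both") and level in t["levels"]]

print(select_tmf(activity="implementation", level="organization"))
```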

The theoretical traditions noted in Table 2 are represented in anthropological theory. In some ways, anthropology uses theory similarly to IS. Anthropology and the other social sciences employ theory to help frame perspectives, research questions, and key concepts, and to choose research methods, as does IS. However, anthropology uses mid-range theory or higher to organize thought and research. The complexity and specificity of concepts and processes that characterize IS TMF (e.g., CFIR) are simply not found in anthropology. Some models and frameworks found in IS might reasonably be considered mid-range or program-level theories, as Kislov et al. (2019) consider them.

The number and complexity of IS TMF pose a challenge to the implementation researcher in choosing and employing a research approach. As Kislov and colleagues note, the future use of empirical research to inform IS TMF may lead to a consolidation of IS research approaches (Kislov et al. 2019). Anthropologists might play a role in this consolidation, which could result in higher-level theories being employed with greater ease and clarity in IS.

Implementation Strategies

What implementation strategies will facilitate change? In other words, what will create a change that will lead to a new practice being adopted? What might a field team do, based on their TMF, that will result in the implementation and health-related outcomes that they are hoping to achieve? Implementation strategies are defined as “approaches or techniques used to enhance the adoption, implementation, sustainment, and scale-up (or spread) of an innovation” (Kirchner et al. 2020, 2; Powell et al. 2015, 2019; Proctor, Powell, and McMillen 2013).

Implementation strategies are similar to concepts of change in other fields (e.g., QI), but as IS develops as a distinct field, researchers strive to make these strategies standardized and generalizable. Since all implementation research is context-specific, researchers must balance tailoring a strategy to the specific context against keeping it relevant to other interventions in similar contexts. In an introduction to implementation strategies for psychiatry, Kirchner and colleagues explain these efforts: “Thus, as with other sciences, implementation science strives to characterize its variables with sufficient levels of abstraction to support aggregating knowledge obtained through multiple studies” (Kirchner et al. 2020, 1).

One attempt by IS researchers to make implementation strategies more generalizable was the Expert Recommendations for Implementing Change (ERIC) study (Powell et al. 2015). ERIC cataloged 73 discrete implementation strategies (e.g., create new clinical teams; audit and provide feedback; identify and prepare champions; use capitated payments). IS experts then organized the strategies into nine broad categories using a concept mapping exercise (similar to pile sorting and ranking in cultural domain analysis) to facilitate an understanding of the type of change (Waltz et al. 2015). Proctor and colleagues proposed guidelines for naming, defining, and operationalizing implementation strategies in terms of seven dimensions (actor, action, action target, temporality, dose, implementation outcomes addressed, and theoretical justification) so that descriptions are standardized and precise enough to enable measurement and reproducibility (Proctor, Powell, and McMillen 2013).
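A minimal Python sketch of those seven dimensions as a structured record follows; the example specification is invented for illustration, not taken from Proctor, Powell, and McMillen (2013).

```python
from dataclasses import dataclass

@dataclass
class StrategySpecification:
    """Proctor, Powell, and McMillen's seven reporting dimensions."""
    actor: str                 # who enacts the strategy
    action: str                # the specific behavior or step taken
    action_target: str         # construct or barrier the action addresses
    temporality: str           # when in the implementation process it occurs
    dose: str                  # frequency or intensity of the action
    outcomes_addressed: str    # implementation outcome(s) expected to change
    justification: str         # theoretical or empirical rationale

# Invented example: specifying an audit-and-feedback strategy.
example = StrategySpecification(
    actor="External facilitator",
    action="Audit prescribing data and feed results back to clinic teams",
    action_target="Clinician awareness of guideline deviations",
    temporality="Monthly, beginning in month 1 of implementation",
    dose="One 30-minute feedback session per clinic per month",
    outcomes_addressed="Adoption and fidelity",
    justification="Audit-and-feedback evidence base (an ERIC strategy)",
)
print(example)
```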

Implementation research has sought to develop a structured approach to link implementation concepts and strategies to address specific contextual domains, such as those included in CFIR (Waltz et al. 2015, 2019). Waltz and colleagues produced a mapping scheme for barriers to strategies in a publicly available tool: the Implementation Strategy Matching Tool (CFIR-ERIC), based on a survey of implementation experts (Waltz et al. 2019). Qualitative approaches are often used to understand how implementation strategies are developed or chosen. Qualitative data also can identify the mechanisms of change that explain how the strategies affect specific contextual determinants. For example, Springer and colleagues conducted interviews with hospital providers and staff to understand one hospital’s experience selecting and implementing an “audit and feedback” intervention (Springer et al. 2021). Springer and colleagues’ qualitative approach allowed them to understand the choices made by implementers in the development and use of implementation strategies.
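The Python sketch below illustrates the kind of barrier-to-strategy lookup the CFIR-ERIC tool supports. The construct and strategy names are drawn from CFIR and ERIC, but the particular mappings shown are invented examples, not the tool's expert-endorsed rankings.

```python
# Hypothetical mapping from CFIR barrier constructs to candidate ERIC
# strategies (the real CFIR-ERIC tool ranks strategies by expert endorsement).
BARRIER_TO_STRATEGIES = {
    "Inner Setting: Available Resources": [
        "Access new funding",
        "Alter incentive/allowance structures",
    ],
    "Process: Engaging": [
        "Identify and prepare champions",
        "Conduct local consensus discussions",
    ],
}

def candidate_strategies(barrier: str) -> list[str]:
    """Return ERIC strategies suggested for a named CFIR barrier."""
    return BARRIER_TO_STRATEGIES.get(barrier, [])

print(candidate_strategies("Process: Engaging"))
```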

Implementation Processes and Outcomes

Distinguishing implementation successes and failures from clinical or health-related outcomes is one of the key features of IS. How an EBP is, or is not, implemented in healthcare and public health settings determines whether an EBP or innovation can be effective in a particular context as much as the efficacy of the EBP itself does. Proctor and colleagues define implementation outcomes as “the effects of deliberate and purposive actions to implement new treatments, practices, and services” (Proctor et al. 2011, 65). Implementation outcomes track implementation progress and success, implementation processes, and the intermediate outcomes in relation to service or clinical outcomes (Smith and Hasan 2020). They are designed to capture processes (e.g., uptake, acceptability of an EBP) that help explain the clinical or health outcomes of the EBP. If none of the doctors or nurses uses a new evidence-based workflow, then the workflow will not improve their patients’ health. While implementation outcomes can be linear, they also can be conceptualized as tracking different components of the implementation process: the sequence of adoption by a delivery agent, delivery of the innovation with fidelity, reach of the innovation to the intended population, and sustainability of the innovation over time (Etingen et al. 2020; Glasgow et al. 1999; Glasgow and Riley 2013). By looking at the proximal processes of implementation, researchers are positioned to understand context and complexity in addition to outcomes.

How Do We Measure Processes?

When thinking about understanding or “measuring” implementation processes, qualitative and ethnographic methods are particularly useful, including semi-structured interviews, focus groups, and observation. In interviews and focus groups, some questions that might be used to understand stakeholders’ perspectives are: “What are some of the barriers towards implementing this EBP?” and “What are some of the facilitators of implementing this EBP?” An explosion of articles has examined facilitators and barriers related to implementing interventions, typically the first step in understanding the implementation context. For example, a PubMed search for implementation science facilitators or barriers in October 2021 produced 806 results between 2017 and 2021.
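For readers who want to reproduce this kind of literature search programmatically, the Python sketch below uses Biopython's Entrez interface. The query string and date window are illustrative assumptions; the exact search used above is not reported here.

```python
from Bio import Entrez

Entrez.email = "researcher@example.org"  # NCBI requires a contact address

# Search PubMed for implementation science facilitator/barrier studies
# published between 2017 and 2021; retmax=0 because we only need the count.
handle = Entrez.esearch(
    db="pubmed",
    term='"implementation science" AND (facilitators OR barriers)',
    mindate="2017",
    maxdate="2021",
    datetype="pdat",
    retmax=0,
)
record = Entrez.read(handle)
handle.close()
print("Matching records:", record["Count"])
```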

What Are Outcome Measures?

Incorporating standardized outcome measures contributes to the knowledge base of theoretically informed implementation research, serving to solidify IS as a distinct and methodologically rigorous field. Implementation outcomes have emerged as the field developed, partly out of the need to provide real-time, iterative data to implementation teams. Evaluation frameworks such as RE-AIM are often used when developing or choosing outcome measures because they provide structure for the analysis of implementation processes and outcomes (Holtrop et al. 2021). Other TMF have incorporated RE-AIM into their overall conceptualization of the implementation process (e.g., PRISM), because implementation outcomes are understood to be critical to all implementation research (Feldstein and Glasgow 2008). Proctor and colleagues provide a general taxonomy of the more “obvious” outcomes:

Acceptability: Perception among implementation stakeholders that a given EBP is agreeable or satisfactory.

Appropriateness: Perceived fit, relevance, or compatibility of the EBP for a given practice setting, provider, or consumer; perceived fit to address problem.

Adoption: Intention, initial decision, or action to try to employ an EBP.

Cost: Cost impact of an implementation effort.

Feasibility: Extent to which a new EBP can be successfully used or carried out within a given agency or setting.

Fidelity: Degree to which an EBP was implemented as it was prescribed in the original protocol or intended by the practice developers.

Penetration: Integration of a practice within a service setting and its subsystems.

Sustainability: Extent to which a newly implemented EBP is maintained or institutionalized within a service setting’s ongoing, stable operations (outside the context of a research study) (Proctor et al. 2011).

How Do We Measure Outcomes?

As with implementation processes, implementers looking to measure outcomes often utilize qualitative and mixed methods approaches. The whole spectrum of qualitative and quantitative data collection tools may be involved: questionnaires and surveys, semi-structured interviews, focus group discussions, observation, archival data, medical records, and administrative data. An ethnographic approach is well suited to understanding how groups of people interact (or implement) in real-world settings and to examining issues like context, stakeholder engagement, research participants’ views, and complex interactions (Gertner et al. 2021).
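As one quantitative example, the Python sketch below scores a brief implementation outcome survey such as the four-item Acceptability of Intervention Measure (Weiner et al. 2017, listed under Further Reading). Scoring by the item mean is common for such short Likert scales; the responses shown are invented.

```python
def scale_score(responses: list[int]) -> float:
    """Mean of Likert items (assumed 1 = strongly disagree ... 5 = strongly agree)."""
    if not all(1 <= r <= 5 for r in responses):
        raise ValueError("responses must be on a 1-5 scale")
    return sum(responses) / len(responses)

# One respondent's four acceptability items (invented data).
acceptability_items = [4, 5, 4, 4]
print(f"Acceptability score: {scale_score(acceptability_items):.2f}")
```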

Mechanisms of Implementation and the Implementation Research Logic Model

Implementation researchers have begun to develop a logic model for implementation research as a way to conceptualize how implementation strategies, processes, and outcomes fit together and inform one another. The Implementation Research Logic Model (IRLM) is “a semi-structured, principle-guided tool designed to improve the specification, rigor, reproducibility, and testable causal pathways involved in implementation research projects” (Smith, Li, and Rafferty 2020, 85). The developers of the IRLM suggest that it can be used for multiple purposes, from the planning stages for how the project is to be carried out, to clarifying reporting of implementation processes, to understanding the connections between determinants, strategies, mechanisms, and outcomes for their project (Smith, Li, and Rafferty 2020).
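One way to picture the IRLM is as four linked columns running from determinants through strategies and mechanisms to outcomes. The Python sketch below lays out a single such row; the content is invented for illustration, not an example from Smith, Li, and Rafferty (2020).

```python
# A single hypothetical IRLM row: determinant -> strategy -> mechanism -> outcomes.
irlm_row = {
    "determinant": "Low clinician awareness of the guideline (Inner Setting)",
    "strategy": "Audit and provide feedback",
    "mechanism": "Increased awareness of the gap between practice and guideline",
    "outcomes": {
        "implementation": "Adoption, fidelity",
        "clinical": "Improved guideline-concordant prescribing",
    },
}

for column, content in irlm_row.items():
    print(f"{column}: {content}")
```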

Implementation Research Designs

Research designs outline the approach for addressing one’s research questions. IS is often considered a subfield or sister field to health services research (HSR). All research designs in HSR are applicable to IS, whether it is an experimental randomized control trial of different implementation strategies, or an observational “natural experiment” in which researchers examine the impact of a new policy before and after its implementation. Several articles have been published that provide overviews of IS research designs (Brown et al. 2017; Hwang et al. 2020; Miller, Smith, and Pugatch 2020), including IS designs specific to addressing health equity (McNulty et al. 2019). One aspect of IS research design that is particularly relevant to anthropologists is that the designs are often mixed methods, combining qualitative and quantitative methods to examine implementation processes and outcomes (Palinkas et al. 2011).

Hybrid designs involve a framing of research design unique to IS. Curran and colleagues published the seminal paper describing hybrid designs and gave the field common language for the three primary ways to conduct IS research (Curran et al. 2012). These options, Hybrid Types I, II, and III, exist on a spectrum, best visualized (see Figure 2) with Hybrid Type I on the left, Hybrid Type II in the middle, and Hybrid Type III on the right. Hybrid Type I represents a design in which the innovation or intervention is relatively new and untested, so the emphasis is on clinical outcomes; at the same time, attention is paid to implementation processes so that, should the intervention prove ineffective, failures of implementation can be distinguished from failures of the intervention itself. Hybrid Type II examines both clinical and implementation effectiveness equally. Hybrid Type III is used when the innovation or intervention effectiveness is firmly established, but more research is needed to understand what strategies are most effective in reaching successful implementation. Hybrid designs ensure implementation outcomes are considered from the beginning of the study.

Figure 2. Comparison of hybrid design types along the continuum of clinical effectiveness research and implementation research.

Many have argued that qualitative methods are essential for IS. As Palinkas and colleagues argued (Aarons et al. 2012), qualitative methods allow for greater depth of understanding of implementation successes and failures and for identifying strategies that facilitate implementation. The US National Cancer Institute’s Qualitative Research in Implementation Science working group (2015–2018) outlined seven ways qualitative methods bring value to IS:

1. eliciting stakeholder perspectives;
2. informing design and implementation;
3. understanding contexts across diverse settings;
4. providing documentation and encouraging reflection on the implementation process;
5. gaining insight into implementation effectiveness;
6. understanding mechanisms of change; and
7. contributing to theoretical development (Cohen et al. 2018).

Hamilton and Finley discuss why qualitative research is critical to IS through “discovering and documenting: the context(s) in which implementation occurs; the environment(s) where implementation occurs; the process that occurs during implementation; the effectiveness of implementation strategies; and the relationship(s) between theorized and actual changes” (Hamilton and Finley 2019, 2). In their work, they emphasize the questions of what, how, and why (Hamilton and Finley 2019).

Although there are many different ways to frame the integration of qualitative research into IS research design, we return to Palinkas and colleagues’ use of EPIS to highlight how qualitative methodologies span the spectrum of Exploration, Preparation, Implementation, and Sustainment (Moullin et al. 2019).

Exploration

Exploration, the first EPIS phase, is a primary stage in which qualitative methods can be used in IS. In this phase, the needs of the stakeholders are assessed and decisions regarding the implementation of an EBP are made. Rapid ethnographic assessment or ethnographic site visits can be an ideal methodology for making these assessments and beginning to explore next steps such as intervention adaptations and context-specific implementation strategies. Simultaneously, stakeholder engagement is initiated through open-ended interviews, focus groups, and review of policies or public-facing documents, along with a general exploration of the context surrounding the organization, patient population, or community.

Preparation

The Preparation phase begins when the decision is made to implement an innovation or EBP in the setting. Often the work of anthropologists and other qualitative researchers is utilized in this phase to adapt the EBP to fit the setting and to create an implementation plan that addresses likely facilitators and barriers. Qualitative researchers may conduct additional interviews and focus groups with stakeholders to understand their views of the planned EBP and the barriers and facilitators they perceive to implementation in their setting. These data are used to adapt the EBP and create the implementation plan. They can also lay the foundation for rapport building with the stakeholders by demonstrating that the implementation team has listened to and understood their perspectives and worked to address their concerns, which is critical for ensuring a supportive implementation climate.

Implementation

In the Implementation phase, the EBP is initiated and monitored. The engagement of anthropologists and other qualitative researchers depends heavily on the research plan. The designs employed are typically evaluation designs, including formative, process, and summative evaluations. Although each type has distinct objectives, qualitative researchers are often part of the data gathering and analysis processes. For formative evaluation, the research design is focused on learning how the implementation is going and what adjustments could be made to improve it. Ethnographic site visits, interviews, focus groups, and observations are ideal methods for this type of design. In particular, rapid ethnographic assessment can be beneficial with its focus on gathering a comprehensive understanding of the process in context within a short, well-defined time frame. It is important to note that formative evaluation can be used to inform adaptations to the EBP or to implementation strategies, depending on the predetermined research design. For example, if fidelity to the EBP is important, qualitative researchers can observe whether the EBP is being delivered as designed, while providing data and analysis to support adaptations to implementation strategies in an effort to improve outcomes.

Process and summative evaluation designs are used when fidelity to the EBP and implementation strategies is central to the research design. With process evaluation, researchers monitor and collect data on how the EBP is being delivered; they also focus on whether and how the implementation strategies are being utilized. Unlike in formative evaluation, researchers do not share the data and analysis with the team with the intent of making adaptations and changes. Summative evaluation has the same objectives; however, the research team conducts the data gathering at the “end” of the implementation period to take advantage of hindsight in examining what supported implementation and the challenges faced in the process.

A well-defined research plan is critical for ensuring the implementation research team meets its objectives. The implementation team and the research team are often made up of the same team members. Most, if not all, implementation scientists recognize the importance of understanding context to be able to adapt the EBP and/or implementation strategies to fit the context or address the related barriers. Thus, a formative evaluation design is often employed. The role of anthropologists and other qualitative researchers is to understand the context and barriers and offer feedback on the analysis to the team to improve adaptation. Similarly, they could be part of the team that creates the adaptations. However, if fidelity to the EBP and/or implementation strategies is central to the research design, it is important for anthropologists and other qualitative researchers to remain detached from the implementation team and keep some “objective” distance to observe and document the process during (process evaluation) or following (summative evaluation) the implementation.

Sustainment

In the Sustainment phase, the focus is on ensuring that supports are in place to help the EBP continue. Ideally, the earlier phases of EPIS lay the foundation for sustainment, with qualitative research contributing to this foundation. Research designs could also target a long-term plan for follow-up or a return visit to the setting to re-examine any changes. Qualitative methods that can be used include ethnographic site visits, interviews, focus groups, document review, and observations.

While EPIS is helpful in delineating individual phases of the implementation research process in which ethnographic and qualitative methods can be effectively applied, the China/US Women’s Health Project demonstrates the strength of using ethnographic methods through the whole process from exploration to preparation to implementation and finally sustainment. The project team conducted an intensive ethnographic study on the implementation of an intervention to promote female condom use among sex workers in four communities in China. The research was a partnership between US and Chinese researchers, the Centers for Disease Control and Prevention (CDC), and provincial-level and local healthcare and public health organizations. The research design included 6 months of formative work to observe and map the local sex worker establishments and the intersecting organizations. This work could be characterized as Exploration and Preparation in the EPIS framework. It laid the groundwork for tailoring implementation to the communities. The intervention was implemented over a 6-month period; pre-/post-intervention surveys were used to document implementation and health outcomes. Finally, the design included a 6-month sustainment phase. Of the several publications that document the study and its outcomes (Liao et al. 2011; Weeks et al. 2010), a 2013 publication by the team (Weeks et al. 2013) demonstrates how the ethnographic data from each stage of the study allowed them to map the multilevel systems impacting sustainment of the intervention and the variations across sites.

An additional research design that does not fit the EPIS phases is the study of a policy’s implementation as a natural experiment (Théodore et al. 2019) or of the use of implementation strategies in clinical settings outside a formally planned study (Goedken et al. 2019). Such research designs take advantage of the implementation of a policy, EBP, or implementation strategy as it unfolds and examine how the people in the setting implement it and the decisions they make. Ethnography works well in these situations. In addition, rapid approaches to ethnographic data collection and analysis are increasingly used in IS due to the demands of the research time frame or the needs of clinical partners. While many implementation scientists rely on rapid ethnographic assessment (Beebe 2014; Sangaramoorthy and Kroeger 2020), Palinkas and Zatzick (2019) developed the method of Rapid Assessment Procedure-Informed Clinical Ethnography specifically in the context of IS.

Finally, participatory action research and community-based participatory research designs are often combined with qualitative and ethnographic methods to ensure all perspectives are included in IS, often to counter unequal power structures and to ensure that those excluded from decision-making positions are integrated in the process. Morton Ninomiya and colleagues describe their work to decolonize approaches to alcohol prevention programs among the people of the Sheshatshiu Innu First Nation in Canada (Morton Ninomiya, Hurley, and Penashue 2020). Their work led to exploring interventions to prevent fetal alcohol spectrum disorder and to care for children with the condition, with interventions designed jointly by the Sheshatshiu Innu First Nation and provincial and national practitioners and researchers. Implementation then shifted to the Sheshatshiu Innu First Nation, which supported the sustainment of the program. In each of these research designs and approaches, ethnography is central to widening the lens on understanding what makes implementation successful.

Role of Anthropologists in Implementation Science

Why We Need More Anthropology in Implementation Science

It is apparent that IS is an important field for solving human problems, and one in which anthropologists should be highly involved given the congruences between anthropology and IS. Yet, the rationale for having more anthropology in IS goes beyond good fit. A more important reason is the vital contributions that anthropology can make to IS.

Generally, anthropology can be seen as the holistic study of humanity, past and present, in all of its aspects, including cultural, social, linguistic, and biological. IS, by contrast, emerges from a Western perspective (Boulton, Sandall, and Sevdalis 2020). Consequently, IS assumptions and values about sociocultural phenomena such as organizations and institutions, including family and social relationships, tend to reflect the dominant cultures of Europe and North America. Although there is a growing movement to apply IS globally (Bertram et al. 2021; Ridde, Perez, and Robert 2020; Shelton et al. 2020), there is generally limited attention to cultural differences, power, and structural barriers.

One of anthropology’s greatest attributes is its ability to raise awareness of the sociocultural world in which people live without “seeing” the cultural patterns that surround them. Anthropology can be especially important in helping IS understand the contexts in which implementation occurs and in identifying cultural blind spots in perceptions and understandings (Cohen et al. 2008). Indeed, anthropologists have long sought to understand context, remove cultural blinders, and make behaviors, beliefs, attitudes, and other aspects of human life and functioning apparent.

Agar’s concept of the “Professional Stranger” (Agar 1980) is relevant to IS: an ethnographer who encounters a culture or society as an intentional outsider, that is, an intentionally uninformed member of society (Granosik 2011), so as to learn about it from that society’s members. Implementation researchers rarely take this position. Rather, they usually assume the role of an expert seeking to characterize a context rather than to understand it. Anthropology’s use of this inexpert, knowledge-seeking role can contribute greatly to IS understanding of all aspects of context.

Beyond methods and approaches, anthropology attends to power structures and discrepancies in power among stakeholder groups, phenomena that, while sometimes acknowledged in IS, are not frequently critiqued. Anthropologists can provide a questioning, critical perspective of the object of study and how it is studied, a perspective that is lacking in much IS work. Anthropologists can foster awareness of needs for structural, organizational, or policy changes to improve health and healthcare, beyond the implementation of an EBP. They can explore specific power inequities and the organizational, political, or economic structures that normalize these discrepancies. Anthropologists can help the still-developing field of IS to do more than implement EBPs. They can help increase the power of evidence through critical implementation. That is, with a critical eye to power, researchers can bring unhelpful or unjust power discrepancies to light. Then, interventions could go beyond appropriateness to context and also lead to transformation of context through a critical approach to implementation (Peek et al. 2014).

Implementation researchers typically hold a position of power vis-à-vis the groups with whom they are working. Power differentials are established based on research knowledge and expertise, which is prioritized over local or practice-based knowledge during the implementation process. Anthropologists’ use of reflexivity to examine their perspective, bias, and role in the research process could apply more broadly to IS (Snell-Rood et al. 2021). Reflexivity provides a different lens to address power and inequity in the field.

Anthropology is positioned to answer the questions IS poses, since anthropological theory and methods allow an understanding and brokering of both emic and etic perspectives, as well as an in-depth exploration of the richness, fluidity, and complexity of context. Yet, anthropology is far from the dominant discipline working in IS, where psychologists, nurses, and physicians are found in abundance. IS is an interdisciplinary and public-oriented field: there is great interest at the NIH, the CDC, the Agency for Healthcare Research and Quality (AHRQ), the Patient-Centered Outcomes Research Institute (PCORI), and the Department of Veterans Affairs (VA) because IS is oriented towards making research useful for the public.

Examples of Anthropologists Doing Implementation Science

Anthropologists are contributing new methods to support IS both globally and in the United States. While anthropological data collection techniques may not seem innovative to anthropologists, their application in the IS context makes them novel. A few examples from the work of practicing anthropologists are instructive.

Periodic Reflections

Anthropologists Alison Hamilton and Erin Finley developed a qualitative data collection method that pairs the rigor of anthropological qualitative methodology with the research questions and study designs of IS. It is part of their work in the VA’s Quality Enhancement Research Initiative (QUERI) related to “Enhancing Mental and Physical Health of Women through Engagement and Retention” (EMPOWER). Periodic reflections are a structured process of gathering qualitative data from project stakeholders “to ensure consistent documentation of key activities and other phenomena (e.g., challenges, adaptations) occurring over the course of implementation” (Finley et al. 2018, 156). Using an adapted template analysis method, this approach addresses concerns that qualitative research takes too long to be useful in informing implementation. Periodic reflections bring the depth of understanding and close relationships with stakeholders offered by ethnography together with a pragmatic method for providing real-time data to inform implementation.
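A hedged Python sketch of what a structured periodic reflection note might look like follows, using the kinds of fields Finley et al. (2018) describe (key activities, challenges, adaptations). The exact fields here are illustrative, not the EMPOWER template itself.

```python
from dataclasses import dataclass

@dataclass
class PeriodicReflection:
    """One structured reflection note from an implementation check-in (illustrative fields)."""
    date: str
    site: str
    participants: list[str]
    key_activities: str
    challenges: str
    adaptations: str
    notes: str = ""

# Invented example entry.
reflection = PeriodicReflection(
    date="2021-03-15",
    site="Clinic A",
    participants=["site lead", "research coordinator"],
    key_activities="Launched screening workflow; trained two new nurses.",
    challenges="Staff turnover slowed training.",
    adaptations="Shifted training from in-person sessions to brief video huddles.",
)
print(reflection)
```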

Rapid Ethnographies

Rapid ethnography, or rapid ethnographic assessment, is another methodological innovation in IS that was developed and popularized in healthcare (Palinkas and Zatzick 2019; Sangaramoorthy and Kroeger 2020; Vindrola-Padros 2021; Vindrola-Padros and Johnson 2020; Vindrola-Padros and Vindrola-Padros 2018). Rapid ethnography is increasingly used in IS research because of its ability to capture the complexity of implementation contexts and the multilevel factors that influence uptake of an EBP. By providing a structure for collecting a large amount of data by a large team of potentially inexperienced qualitative researchers, rapid ethnographies are another way that anthropologists have sought to bring the richness of ethnographic data into time frames that are practical for the pace of implementation (Coleman-Phox et al. 2013). As Cecilia Vindrola-Padros, Thurka Sangaramoorthy, and Karen Kroeger have demonstrated, rapid ethnographies are widely used in global contexts, especially in low-resource environments. With their focus on triangulation of multiple data sources and reflexivity, rapid ethnographies are particularly useful for IS, with its iterative, immediate data needs and concern with capturing the perspectives of all of the stakeholders in the implementation process.

IS is primarily concerned with healthcare fields and the points of intersection between healthcare and related fields (e.g., psychology and education). Qualitative and mixed methods research is becoming the norm in IS. Consequently, there is a growing need (and subsequent opportunities) for researchers trained in anthropological methods to guide and participate in those projects. For example, a scoping review in 2021 identified 73 IS articles in healthcare academic journals that specifically described their approach as ethnographic (Gertner et al. 2021). It is very hard to discern, either from publications or publicly available project information, whether the person doing mixed methods, qualitative, and ethnographic work on a project is an anthropologist or a researcher trained in another field (e.g., public health, sociology, nursing). Perhaps this is an indication of anthropologists integrating themselves seamlessly into multidisciplinary healthcare teams at the project or even disciplinary level. Indeed, two of the authors of this article are faculty in a Division of Internal Medicine.

Anthropologists who work in healthcare-related IS hold varied types of positions and job titles that rarely indicate either anthropological or IS affiliations, from academic/university faculty positions (e.g., in medicine, public health) to various roles at the VA, researcher and project officer roles at the NIH, and healthcare-related nonprofits, among others. This integration, as well as the invisibility of anthropologists in IS publications, makes it difficult to find other anthropologists doing IS. For example, anthropologists in these roles do not include IS on their LinkedIn profiles. There are not (yet) listservs or professional groups specific to anthropologists in IS. Most anthropologists in IS would consider themselves “applied” anthropologists. Consequently, the Society for Applied Anthropology and its annual meeting are spaces where anthropologists explicitly showcase their work in IS and make connections with colleagues. As many applied anthropologists working in fields outside of their discipline have found, traditional forms of networking are key to finding homes in which to practice our craft.

Further Reading

Outcomes
  • Proctor, E., H. Silmere, R. Raghavan, P. Hovmand, G. Aarons, A. Bunger, et al. 2011. “Outcomes for Implementation Research: Conceptual Distinctions, Measurement Challenges, and Research Agenda.” Administration and Policy in Mental Health and Mental Health Services Research 38 (2): 65–76.
  • Weiner, B. J., C. C. Lewis, C. Stanick, B. J. Powell, C. N. Dorsey, A. S. Clary, et al. 2017. “Psychometric Assessment of Three Newly Developed Implementation Outcome Measures.” Implementation Science 12 (1): 108.
