Anthropology and Implementation Science
- Elissa Z. Faro, University of Iowa
- Suzanne Heurtin-Roberts, University of Maryland
- Heather Schacht Reisinger, University of Iowa
Basic science and health services research have produced a substantial body of knowledge with significant potential to improve human health and well-being. Much of that knowledge is published, yet read only by other researchers. Alternatively, this research becomes “evidence-based practice” (EBP), that is, knowledge obtained under specific controlled conditions. The world where humans live their everyday lives tends to be complex and “messy.” Thus, these EBPs are frequently ineffective when employed in the scientifically uncontrolled world.
Implementation Science (IS) is a fast-growing field intended to remedy this situation. IS was established to study the most effective strategies to integrate EBPs into public and community health and healthcare delivery. IS asks whether an intervention can be effective in a specific local context; that is, under what conditions and in what contexts can any change-oriented action be effective in the real world? IS will continue to grow in visibility and significance as governments and funding agencies seek to ensure their investments in research are reaching their intended audiences and having demonstrable impact.
Anthropology has contributed significantly to IS, yet it can contribute so much more. Well-equipped to answer many of the questions posed by IS, anthropology’s theory and methods allow researchers to understand and broker both emic and etic perspectives, as well as represent the richness, fluidity, and complexity of context. Both anthropology and IS recognize the importance of context and locality, are real-world-oriented (vs. controlled research environments), and embrace complexity and nonlinearity. They are both comfortable with the emergent nature of research-produced knowledge and use both qualitative and quantitative methods.
Beyond these congruencies in perspectives and approaches, the rationale for bringing more anthropology into IS goes beyond good fit. Anthropology emphasizes critical analysis of power and unequal power structures; while such phenomena are sometimes included in IS, they are not frequently critiqued. Anthropology can furnish a questioning, critical perspective on the object of study and how it is studied, a perspective that is lacking in much IS work. Indeed, this approach is something that anthropology does best, and it is integral to anthropology’s conceptual orientation. IS is one of the many spaces in which anthropologists are demonstrating relevance and having an impact.
What Is Implementation Science?
Implementation Science (IS) is a relatively new but rapidly growing field intended to put research findings into practice. IS was established to study the most effective strategies to integrate evidence-based interventions into public and community health and healthcare delivery. IS asks whether an intervention can be delivered effectively in a specific local context, that is, under what conditions and in what contexts can any change-oriented action be effective?
Why Do We Need Implementation Science?
Several definitions of IS are in use, with different fields adopting their own approaches, methods, and tools. One widely cited definition can be found in the introductory editorial to the first issue of the journal Implementation Science:
Implementation research is the scientific study of methods to promote the systematic uptake of research findings and other evidence-based practices into routine practice, and, hence, to improve the quality and effectiveness of health services and care. This relatively new field includes the study of influences on healthcare professional and organisational behavior.
This definition characterizes much of implementation research, yet it has a narrow focus on healthcare alone and does not include public health.
Other fields define IS broadly to include health research in general (Peters et al. 2013) and policy (Nilsen et al. 2013). This emphasis is reflected in the definition used by the National Cancer Institute, which includes public health and policy:
Implementation science (IS) is the study of methods to promote the adoption and integration of evidence-based practices, interventions, and policies into routine health care and public health settings to improve our impact on population health.
This definition, with slight variations in wording, is widely used throughout the US National Institutes of Health (NIH). These two definitions refer to methods employed to utilize research findings and put evidence into practice to improve human health.
Yet, not all implementation research occurs in health-related areas. Indeed, the field of education is engaged with IS, especially in school psychology (Durlak 2015; McHugh and Barlow 2010). Referring to psychology in education, Kelly and Perkins define IS as “the study of the processes and methods involved in the systematic transfer and uptake of evidence-based practices into routine, everyday practice” (Kelly and Perkins 2012, 4). Similarly, Nordstrum and coauthors view IS as a research approach used to enhance the reach, adoption, use, and maintenance of innovations and discoveries in diverse educational contexts (Nordstrum, LeMahieu, and Berrena 2017).
Another field that engages IS is social work, which employs a broad, inclusive definition:
Implementation science is the scientific study of methods that examines the factors, processes, and strategies at multiple levels (e.g., clients, providers, organizations, communities) of a system of care that influence the uptake, use, and ultimately the sustainability of empirically-supported interventions, services and policies into practice in community settings. (Cabassa 2016, 539)
Knowledge translation is a closely related field with a slightly different emphasis on translation. Developed and widely used in Canada, it is defined by the Canadian Institutes of Health Research as “a dynamic and iterative process that includes synthesis, dissemination, exchange and ethically-sound application of knowledge to improve the health of Canadians, provide more effective health services and products and strengthen the health care system” (Canadian Institutes of Health Research 2020).
All these definitions have several elements in common:
- the requirement of science-based methods or strategies, that is, some systematic approach to the task;
- putting research findings or evidence to use, into some action or practice, which may be service delivery, a program, or a policy; and
- routinizing and generalizing this use within many contexts, such that the implementation of some “evidence-based practice” (EBP) results in improvement, whether in physical health, mental health, service provision, education, or policy.
Development of the Field
Research has produced a substantial body of knowledge that can improve human health and well-being. Much of that knowledge is published, yet read only by other researchers. Alternatively, research findings may be thought of as EBPs that are interventions or innovations designed to improve health and healthcare universally. Most of this evidence, however, is knowledge obtained under specific, controlled research conditions designed to prove validity and replicability (e.g., randomized controlled trials). Yet, the world in which humans live their everyday lives, and where care is delivered and received, tends to be complex and “messy.” Thus, when these EBPs are employed in the scientifically uncontrolled world, they are frequently ineffective.
It was generally assumed by the scientific research community that basic clinical research, designed with a focus on clinical and health outcomes but without considering application in real-world settings, would eventually result in successful implementation of EBPs applied in practice (Douglas and Burshnic 2019; Goldstein and Olswang 2017). Known as the “traditional research pipeline” (Westfall, Mold, and Fagnan 2007) (see Figure 1), this process of translating basic clinical research findings into practice was assumed to happen and was not well studied. Establishing an innovation’s effectiveness (i.e., its performance in “real-world” rather than controlled research settings) (Singal, Higgins, and Waljee 2014) does not guarantee its uptake into routine usage. Further delaying uptake into everyday practice is the considerable time it takes for research findings to be published; the findings may no longer be relevant because situations and contexts may have changed by the time of publication. Classic studies indicate that it takes 17–20 years to get clinical innovations into practice; moreover, only 14% of clinical innovations make it into general usage (Balas and Boren 2000; Morris, Wooding, and Grant 2011; Mosteller 1981).
The seeds of understanding this gap between what is known to be best practice and what is done in practice in the real world began in the last century. Rogers’s Diffusion of Innovations and similar work in other fields sought to understand the social context of the spread of knowledge and innovations (Bauer and Kirchner 2020; Dearing and Cox 2018; Rogers 2003). As the field of IS has begun to formalize, developing and understanding the processes of uptake by which interventions become routine practice have become a primary concern, with a particular focus on diverse contexts across healthcare, policy, and public health (Bauer et al. 2015). Given the focus of IS on local contextual factors that affect uptake of EBPs, engaging stakeholders is at the core of implementation research. This emphasis also makes IS poised to innovate research that improves equity and reduces disparities; IS can leverage postcolonial, reflexivity, structural violence, policy and governance, and social justice theories “to achieve health equity . . . by a critical theoretical foundation that evaluates structural inequality, power, and reflexivity” (Snell-Rood et al. 2021, 1).
Numerous attributes distinguish IS from other types of research in healthcare:
- IS differs from clinical research because it explicitly considers context, rather than controlling it (efficacy, or research outcomes under ideal circumstances) or tolerating it (effectiveness, or outcomes in real-world settings);
- clinical research contrasts health effects of interventions with those of comparison or control groups, whereas IS seeks to evaluate strategies to improve uptake of an EBP;
- IS differs from quality improvement (QI) in that QI usually begins with a specific problem, rather than an EBP, and focuses on one site (e.g., hospital, clinic);
- dissemination research focuses on the spread of ideas and technologies using communication and education strategies, in contrast to IS’s focus on strategies to incorporate interventions specifically designed to bring about change in practice in a specified context (Bauer and Kirchner 2020; Dearing and Cox 2018).
Comparing Implementation Science and Anthropology
Why is IS a field in which anthropologists might work and to which they might contribute, given its origins and diverse definitions? Table 1 compares perspectives and terms central to each field. Both fields start from the perspective that studying a particular phenomenon—in this case, implementation of practices that improve health—requires a holistic or multi-level approach to examine a research question comprehensively. Cultural or social anthropology is often known for rich integration of qualitative methods in its ethnographies; from the beginning, however, ethnographies have also included quantitative representations of the social world to examine it holistically. IS also holds that qualitative and quantitative methodologies are needed to examine the settings in which the implementation is being studied and the processes associated with it (Hamilton and Finley 2019; Palinkas 2014; Palinkas et al. 2011).
Table 1. Comparison of Perspectives in Anthropology and Implementation Science
The remaining perspectives in Table 1 relate to each field’s orientation to how the methodologies are enacted in practice, including the focus on context and the perspectives on real-world (vs. controlled research context) experience of local stakeholders (i.e., “a representation and understanding of a researcher or research subject’s human experiences, choices, and options and how those factors influence one’s perception of knowledge”; Boylorn 2008, 490). Anthropology and IS have an orientation that any type of research in the field is going to require flexibility and openness to the emergent and ambiguous because the contexts in which the research is conducted are nonlinear and complex.
For anthropologists, none of these perspectives comes as a surprise, but IS also has one foot in biomedical science, where the goal is often to narrow, simplify, and control the setting and process to make definitive claims. Anthropologists’ understanding of perspectives about real-world contexts and methods allows them to work well with implementation researchers who are also attempting to push the boundaries of biomedical science to understand why their findings are not reaching the clinical office, bedside, or local communities.
Four Essential Questions that Structure Implementation Research
To assist in training anthropologists and others new to IS, we developed a set of questions to orient their learning process and provide a foundation for navigating the field. Four questions—along with their simplified forms—serve as guideposts for understanding the field of IS:
- What is the gap between EBP and practice? (Simplified: What needs to change?)
- What conceptual model describes how change is likely to occur? (Simplified: How/why will this change occur?)
- What implementation strategies will facilitate that change? (Simplified: What will create the change?)
- What outcomes should be measured to evaluate whether the change occurred in practice and in clinical outcomes?
The first question of “what needs to change” simply starts the orientation, since key to IS is that a change needs to occur and IS is needed to understand the most effective ways of creating that change. However, how and why the change will occur, what will create the change, and documenting the actual change demarcate three areas critical to defining the field: (a) theories, models, and frameworks; (b) implementation strategies; and (c) implementation outcomes.
Theories, Models, and Frameworks in Implementation Science
A theory is an abstract thought or system of ideas that intends to explain something about a topic of interest (Turner and Turner 1978). In the social sciences, theory can be defined broadly as “an ordered set of assertions about a generic behaviour or structure assumed to hold throughout a significantly broad range of specific instances” (Kislov et al. 2019, 104; Weick 1989). Anthropological theory is generally concerned with providing a frame by which to understand interactions which differ across contexts and generations (Ellen 2010). Similarly, in IS, theory can be understood as any proposed connections between meaningful relationships and constructs (variables) or how a mechanism or construct may change the behavior of another construct or outcome (Bauer et al. 2015; Davidoff et al. 2015; Foy et al. 2011). IS theories are generalized theories with broad applicability of commonly used principles (Damschroder 2020). Anthropology also represents a particular worldview or way of understanding that foregrounds cultural systems, a holistic perspective, and understanding beliefs, values, and behaviors from an insider’s perspective.
IS uses theories, models, and frameworks to structure and frame research; the three are invoked together so often that publications frequently abbreviate them as “TMF” (Nilsen 2015; Rycroft-Malone and Bucknall 2010). In IS, and more broadly in health services research, the terms theory, model, and framework are often used interchangeably. Table 2 represents one way to clarify these frequently garbled or misused ideas. It is adapted from the work of Kislov et al. (2019), who not only argue explicitly for how to connect and utilize theory in IS, but also suggest how diverse theories from diverse traditions can address issues like equity and diversity (Cornelissen 2017; Kislov et al. 2019; Patton 2014). TMF share a number of commonalities: they are ways of organizing complex systems, and they provide a structure for how the components of those systems interact.
Table 2. Grand-Theoretical Traditions and Their Potential Relevance to IS
Central questions relevant to implementation science include:
- What is the culture of a certain group of people (e.g., an organization) involved in implementation? How does it manifest in the process of implementation?
- What are plausible explanations for verifiable patterns of implementation? (disciplinary origins: philosophy, social sciences, and evaluation)
- What are the implementation actors’ reported perceptions, explanations, beliefs, and worldviews? What consequences do these have on implementation?
- What is the meaning, structure, and essence of the lived experience of implementation for a certain group of people?
- What common set of symbols and understandings has emerged to give meaning to people’s interactions in the process of implementation?
- How do signs (i.e., words and symbols) carry and convey meaning in particular implementation contexts?
- What do stories of implementation reveal about implementation actors and contexts? (disciplinary origins: social sciences, literary criticism)
- What is the underlying order of any disorderly implementation phenomena? (disciplinary origins: theoretical physics, natural sciences)
- How do the experiences of inequality, injustice, and subjugation shape implementation?
- How does the lens of gender shape and affect our understandings and actions in the process of implementation?
Source: From Kislov et al. (2019). Reproduced with permission of the publisher.
IS TMF help organize researchers’ and implementers’ thoughts about the intervention and the context in which the intervention is being implemented. They help researchers in formulating research questions to investigate relationships among different factors at different levels of the system that may impact implementation. They also help researchers choose methods to capture the appropriate components of those relationships.
In IS, a theory usually implies some predictive capacity and attempts to explain the causal mechanisms of implementation. A theory may be operationalized within a model (Bauer et al. 2015). Models in IS are most often used to describe and/or guide the process of translating research into practice (Nilsen 2015). Models are a way to organize a complex system simply in order to describe components of interest and identify variables that may mediate or moderate hypothesized changes to that system. Importantly, models encapsulate “theories of change” or “theories of explanation” (Damschroder 2020). Frameworks are often descriptive and provide a broad set of constructs that organize concepts and data without specifying causal relationships. They may also provide a prescriptive series of steps summarizing how implementation ideally should be planned and carried out (Meyers, Durlak, and Wandersman 2012).
Commonalities across Selected Theories, Models, and Frameworks
For all TMF in IS, context is critical; context is usually considered the most important factor in whether, how, or why an EBP is implemented successfully. Context is critical in understanding and accounting for variations in study outcomes (Barker 2014; Nilsen and Bernhardsson 2019). Stakeholder engagement is also a core component of TMF since the perspective of stakeholders is central to understanding context. Research has shown that stakeholder engagement is critical in ensuring successful implementation and sustainment by increasing acceptability, efficacy, cultural and contextual sensitivity, and capacity for wider-scale use (Kelly et al. 2000; McKay et al. 2020; Mellins et al. 2014). For example, McKay and team highlight the importance of engaging local communities and governments in their work to scale up and sustain an EBP for families of children with disruptive behaviors in Uganda (McKay et al. 2020). Time, effort, alignment of goals and efforts, and systematic approaches were required to develop and sustain strategies to ensure that youth behavioral health outcomes were improved (McKay et al. 2020).
IS TMF help researchers structure and incorporate stakeholder engagement at every stage of implementation, ranging from assessing and improving the acceptability of innovations to sustaining implemented interventions (Lobb and Colditz 2013; McKay and Paikoff 2012; Salloum et al. 2017). Another key commonality across implementation TMF is that they are structured to design research projects for implementation and dissemination from the outset, rather than waiting until the end of a project to consider what might happen with the results or how the intervention might assist stakeholders (Brownson 2017). Finally, implementation TMF are focused on helping researchers and other stakeholders achieve balance between fidelity to an EBP and adaptation to local settings (Cohen et al. 2008; Escoffery et al. 2018).
Categorization of Theories, Models, and Frameworks
As IS has matured into its own field, there has been an explosion of TMF, so many that they have been further categorized based on their intended use. Nilsen conducted a narrative review in 2015 that identified three overarching aims of the use of TMF in IS:
- describing and/or guiding the process of translating research into practice;
- understanding and/or explaining what influences implementation outcomes; and
- evaluating implementation.
Based on these aims, he proposed a categorization of TMF into what are now widely used as their three main categories: process, determinant, and evaluation (Damschroder 2020; Nilsen 2015). Researchers have developed other ways to categorize TMF, such as “time-based” and “component-based,” but Nilsen’s categorization continues to be the most widely used (Villalobos Dintrans et al. 2019).
Process models are used to describe and guide the process of translating research into practice, providing practical guidance for planning and executing implementation efforts from the development of innovations to dissemination of successful implementation results (Damschroder 2020; Nilsen 2015). These models, designed to structure planning, provide a way to think about the steps or phases of implementation linearly, although they could be used for iterative and ongoing relationships among the steps. Examples of process models include: dynamic sustainability (Chambers, Glasgow, and Stange 2013), the Exploration, Preparation, Implementation, Sustainment (EPIS) model (Moullin et al. 2019), dynamic adaptation (Fleurey et al. 2009), and the quality implementation framework (Meyers, Durlak, and Wandersman 2012). The Iowa Implementation for Sustainability Framework, an updated process framework, focuses on actionable, distinct strategies (e.g., action plan, interprofessional discussion, performance evaluation) organized into a structure that provides guidance on effective and sustainable implementation planning (Cullen et al. 2022).
Nilsen’s determinant frameworks are those that specify constructs that may influence processes or explain the outcomes of implementations, such as behavior changes in healthcare professionals (e.g., incorporation of clinical decision support [Chauhan et al. 2017], uptake of mHealth (mobile health) interventions [Virtanen et al. 2021], or professional adherence to clinical guidelines [Nilsen 2015]). In a 2020 article designed to help reduce confusion around TMF in IS, Damschroder explains that “determinant frameworks can help to define both dependent and independent variables as well as identify moderators that may affect or confound the relationships” influencing implementation outcomes (Damschroder 2020, 2).
Determinant frameworks often focus on identifying and characterizing the local context to identify barriers and facilitators to implementation (Nilsen and Bernhardsson 2019). For example, the Tailored Implementation for Chronic Diseases (TICD) checklist, which was produced by a scoping review of determinant frameworks, identifies seven domains: guideline factors, individual health professional factors, patient factors, professional interactions, incentives and resources, capacity for organization change, and social political and legal factors (Flottorp et al. 2013). Another determinant framework, the Consolidated Framework for Implementation Research (CFIR) (Damschroder et al. 2009), classifies 39 implementation constructs across five domains (i.e., Inner Setting, Outer Setting, Characteristics of the Intervention, Individuals’ Characteristics, and Process) considered to be influential moderators or mediators of implementation outcomes. For example, researchers in Mali used CFIR to guide their mixed methods data collection and analysis to identify leadership and management capacity factors important to the implementation of performance-based financing at ten district hospitals in the Koulikoro region (Zitti et al. 2019).
CFIR provides a structure for systematically assessing the context within which implementation occurs (Tabak et al. 2012). While CFIR is likely the most widely referenced framework, a systematic review identified seventeen unique determinant frameworks (Nilsen and Bernhardsson 2019). Other frameworks that are commonly used include the Theoretical Domains Framework (Michie et al. 2005), the Practical, Robust Implementation and Sustainability Model (PRISM) (Feldstein and Glasgow 2008), and the Promoting Action on Research Implementation in Health Services (PARIHS) framework (Rycroft-Malone 2004).
The third major category of TMF identified by Nilsen, and commonly used, is evaluation frameworks. These frameworks usually provide domains relevant to the implementation of an intervention that examine the process and outcomes of the implementation (Damschroder 2020; Nilsen 2015). One of the most commonly used evaluation frameworks is Reach, Effectiveness, Adoption, Implementation, Maintenance (RE-AIM; Glasgow et al. 1999), which in the two decades following its development was cited by more than 2,800 publications (Glasgow et al. 2019). Its popularity is due, in part, to its simplicity and the structure it provides for applying both qualitative and quantitative methods to understand implementation outcomes (Glasgow et al. 2020). Holtrop and colleagues provide detailed guidance on how qualitative methods can be used to “understand how and why results on various individual RE-AIM dimensions, or patterns of results across dimensions (e.g., high reach and low effectiveness) occur” (Holtrop et al. 2021, 177). Researchers have used RE-AIM to identify and inform evaluation metrics that ensure the implementation of an EBP (i.e., a specialized medical‐dental community clinic serving adults with autism and intellectual disabilities) is patient-centered, effective, and sustainable (Lai, Klag, and Shikako-Thomas 2019). RE-AIM, as well as other TMF including CFIR (Allen et al. 2021), the Theoretical Domains Framework (Etherington et al. 2020), and Proctor et al.’s measurement framework (Etherington et al. 2020), have been updated and extended to incorporate and promote health equity in the evaluation of implementation (Shelton, Chambers, and Glasgow 2020).
Other well-known and often-used evaluation frameworks include Predisposing, Reinforcing and Enabling Constructs in Educational Diagnosis and Evaluation–Policy, Regulatory, and Organizational Constructs in Educational and Environmental Development (PRECEDE–PROCEED; Green and Kreuter 2004) and the framework developed by Proctor and colleagues. Proctor’s framework proposes eight implementation outcomes for potential evaluation: acceptability, adoption (also referred to as uptake), appropriateness, costs, feasibility, fidelity, penetration (i.e., integration of a practice within a specific setting), and sustainability (also referred to as maintenance or institutionalization) (Proctor et al. 2011).
Nilsen’s work categorizing TMF highlighted the need for guidance for researchers on how to choose the TMF most appropriate for their implementation work. Online tools can help researchers and implementation teams choose a framework based on their specific context, research question, and intervention. For example, Birken and her team at the University of North Carolina at Chapel Hill developed an online tool, the Theory, Model, and Framework Comparison and Selection Tool (T-CaST), to assess the suitability of one or more TMF for a particular project (Birken et al. 2018). Another online tool was developed by the University of Washington. It is structured by a schema for organizing and selecting TMF based on three variables: (a) construct flexibility, (b) dissemination and/or implementation activities, and (c) socio-ecological framework (Tabak et al. 2012).
The theoretical traditions noted in Table 2 are represented in anthropological theory. In some ways, anthropology uses theory similarly to IS. Anthropology and the other social sciences employ theory to help frame perspectives, research questions, and key concepts, and to choose research methods, as does IS. However, anthropology uses mid-range theory or higher to organize thought and research. The complexity and specificity of concepts and processes that characterize IS TMF (e.g., CFIR) are simply not found in anthropology. Some models and frameworks found in IS might reasonably be considered mid-range or program-level theories, as Kislov et al. (2019) suggest.
The number and complexity of IS TMF pose a challenge to the implementation researcher in choosing and employing a research approach. As Kislov and colleagues note, the future use of empirical research to inform IS TMF may lead to a consolidation of IS research approaches (Kislov et al. 2019). Anthropologists might play this role, which could result in higher-level theories employed with greater ease and clarity in IS.
Implementation Strategies
What implementation strategies will facilitate change? In other words, what will create a change that will lead to a new practice being adopted? What might a field team do, based on their TMF, that will result in the implementation and health-related outcomes they are hoping to achieve? Implementation strategies are defined as “approaches or techniques used to enhance the adoption, implementation, sustainment, and scale-up (or spread) of an innovation” (Kirchner et al. 2020, 2; Powell et al. 2015, 2019; Proctor, Powell, and McMillen 2013).
Implementation strategies are similar to concepts of change in other fields (e.g., QI), but as IS develops as a distinct field, researchers strive to make these strategies standardized and generalizable. Since all implementation research is context-specific, there must be a balance between choosing an EBP implementation approach that is targeted to the specific context and one that remains relevant to other interventions in similar contexts. In an introduction to implementation strategies for psychiatry, Kirchner and colleagues explain these efforts: “Thus, as with other sciences, implementation science strives to characterize its variables with sufficient levels of abstraction to support aggregating knowledge obtained through multiple studies” (Kirchner et al. 2020, 1).
One attempt by IS researchers to make implementation strategies more generalizable was in the Expert Recommendations for Implementing Change (ERIC) study (Powell et al. 2015). ERIC cataloged 73 discrete implementation strategies (e.g., create new clinical teams; audit and provide feedback; identify and prepare champions; use capitated payments). IS experts then organized the strategies into nine broad categories using a concept mapping exercise (similar to pile sorting and ranking in cultural domain analysis) to facilitate an understanding of the type of change (Waltz et al. 2015). Proctor and colleagues proposed guidelines for naming, defining, and operationalizing implementation strategies in terms of seven dimensions: actor, the action, action targets, temporality, dose, implementation outcomes addressed, and theoretical justification to standardize descriptions so that they are precise enough to enable measurement and reproducibility (Proctor, Powell, and McMillen 2013).
Implementation research has sought to develop a structured approach to link implementation concepts and strategies to specific contextual domains, such as those included in CFIR (Waltz et al. 2015, 2019). Waltz and colleagues produced a scheme that maps barriers to strategies in a publicly available tool, the Implementation Strategy Matching Tool (CFIR-ERIC), based on a survey of implementation experts (Waltz et al. 2019). Qualitative approaches are often used to understand how implementation strategies are developed or chosen. Qualitative data also can identify the mechanisms of change that explain how the strategies affect specific contextual determinants. For example, Springer and colleagues conducted interviews with hospital providers and staff to understand one hospital’s experience selecting and implementing an “audit and feedback” intervention (Springer et al. 2021). Their qualitative approach allowed them to understand the choices implementers made in developing and using implementation strategies.
Implementation Processes and Outcomes
Distinguishing implementation successes and failures from clinical or health-related outcomes is one of the key features of IS. How an EBP is, or is not, implemented in healthcare and public health settings determines whether it can be effective in a particular context as much as the efficacy of the EBP itself does. Proctor and colleagues define implementation outcomes as “the effects of deliberate and purposive actions to implement new treatments, practices, and services” (Proctor et al. 2011, 65). Implementation outcomes track implementation progress and success, implementation processes, and intermediate outcomes in relation to service or clinical outcomes (Smith and Hasan 2020). They are designed to capture processes (e.g., uptake, acceptability of an EBP) that help explain the clinical or health outcomes of the EBP. If none of the doctors or nurses uses a new evidence-based workflow, then the workflow will not improve their patients’ health. While implementation outcomes can be viewed linearly, they also can be conceptualized as tracking different components of the implementation process: the sequence of adoption by a delivery agent, delivery of the innovation with fidelity, reach of the innovation to the intended population, and sustainability of the innovation over time (Etingen et al. 2020; Glasgow et al. 1999; Glasgow and Riley 2013). By looking at the proximal processes of implementation, researchers are positioned to understand context and complexity in addition to outcomes.
How Do We Measure Processes?
When thinking about understanding or “measuring” implementation processes, qualitative and ethnographic methods are particularly useful, including semi-structured interviews, focus groups, and observation. In interviews and focus groups, some questions that might be used to understand stakeholders’ perspectives are: “What are some of the barriers to implementing this EBP?” and “What are some of the facilitators of implementing this EBP?” An explosion of articles has examined facilitators and barriers related to implementing interventions, typically the first step in understanding the implementation context. For example, a PubMed search for implementation science facilitators or barriers in October 2021 produced 806 results between 2017 and 2021.
What Are Outcome Measures?
Incorporating standardized outcome measures contributes to the knowledge base of theoretically informed implementation research, serving to solidify IS as a distinct and methodologically rigorous field. Implementation outcomes have emerged as the field developed, partly out of the need to provide real-time, iterative data to implementation teams. Evaluation frameworks such as RE-AIM are often used when developing or choosing outcome measures because they provide structure for the analysis of implementation processes and outcomes (Holtrop et al. 2021). Other TMF have incorporated RE-AIM into their overall conceptualization of the implementation process (e.g., PRISM), because implementation outcomes are understood to be critical to all implementation research (Feldstein and Glasgow 2008). Proctor and colleagues provide a general taxonomy of the more “obvious” outcomes:
Acceptability: Perception among implementation stakeholders that a given EBP is agreeable or satisfactory.
Appropriateness: Perceived fit, relevance, or compatibility of the EBP for a given practice setting, provider, or consumer; perceived fit to address problem.
Adoption: Intention, initial decision, or action to try to employ an EBP.
Cost: Cost impact of an implementation effort.
Feasibility: Extent to which a new EBP can be successfully used or carried out within a given agency or setting.
Fidelity: Degree to which an EBP was implemented as it was prescribed in the original protocol or intended by the practice developers.
Penetration: Integration of a practice within a service setting and its subsystems.
Sustainability: Extent to which a newly implemented EBP is maintained or institutionalized within a service setting’s ongoing, stable operations (outside the context of a research study) (Proctor et al. 2011).
How Do We Measure Outcomes?
As with implementation processes, implementers looking to measure outcomes often utilize qualitative and mixed methods approaches. The whole spectrum of qualitative and quantitative data collection tools may be involved: questionnaires and surveys, semi-structured interviews, focus group discussions, observation, archival data, medical records, and administrative data. An ethnographic approach is well suited to understanding how groups of people interact (or implement) in real-world settings, and ethnographers are attentive to issues like context, stakeholder engagement, research participants’ views, and complex interactions (Gertner et al. 2021).
Mechanisms of Implementation and the Implementation Research Logic Model
Implementation researchers have begun to develop a logic model for implementation research as a way to conceptualize how implementation strategies, processes, and outcomes fit together and inform one another. The Implementation Research Logic Model (IRLM) is “a semi-structured, principle-guided tool designed to improve the specification, rigor, reproducibility, and testable causal pathways involved in implementation research projects” (Smith, Li, and Rafferty 2020, 85). The developers of the IRLM suggest that it can be used for multiple purposes, from the planning stages for how the project is to be carried out, to clarifying reporting of implementation processes, to understanding the connections between determinants, strategies, mechanisms, and outcomes for their project (Smith, Li, and Rafferty 2020).
Implementation Research Designs
Research designs outline the approach for addressing one’s research questions. IS is often considered a subfield or sister field to health services research (HSR). All research designs in HSR are applicable to IS, whether an experimental randomized controlled trial of different implementation strategies, or an observational “natural experiment” in which researchers examine the impact of a new policy before and after its implementation. Several articles have been published that provide overviews of IS research designs (Brown et al. 2017; Hwang et al. 2020; Miller, Smith, and Pugatch 2020), including IS designs specific to addressing health equity (McNulty et al. 2019). One aspect of IS research design that is particularly relevant to anthropologists is that the designs are often mixed methods, combining qualitative and quantitative methods to examine implementation processes and outcomes (Palinkas et al. 2011).
Hybrid designs involve a framing of research design unique to IS. Curran and colleagues published the seminal paper describing hybrid designs and gave the field common language for the three primary ways to conduct IS research (Curran et al. 2012). These designs exist on a spectrum (see Figure 2), with Hybrid Type I on the left, Hybrid Type II in the middle, and Hybrid Type III on the right. Hybrid Type I represents a design in which the innovation or intervention is relatively new and untested, so the emphasis is on clinical outcomes; at the same time, attention is paid to implementation processes, which can help explain the results if the intervention proves ineffective. Hybrid Type II examines clinical and implementation effectiveness equally. Hybrid Type III is used when the effectiveness of the innovation or intervention is firmly established, but more research is needed to understand which strategies are most effective in reaching successful implementation. Hybrid designs ensure implementation outcomes are considered from the beginning of the study.
Many have argued that qualitative methods are essential for IS. As Aarons and colleagues argued (Aarons et al. 2012), qualitative methods allow for greater depth of understanding of implementation successes and failures and for identifying strategies that facilitate implementation. The US National Cancer Institute’s Qualitative Research in Implementation Science working group (2015–2018) outlined seven ways qualitative methods bring value to IS:
eliciting stakeholder perspectives
informing design and implementation
understanding contexts across diverse settings
providing documentation and encouraging reflection on the implementation process
gaining insight into implementation effectiveness
understanding mechanisms of change
contributing to theoretical development (Cohen et al. 2018).
Hamilton and Finley discuss why qualitative research is critical to IS through “discovering and documenting: the context(s) in which implementation occurs; the environment(s) where implementation occurs; the process that occurs during implementation; the effectiveness of implementation strategies; and the relationship(s) between theorized and actual changes” (Hamilton and Finley 2019, 2). In their work, they emphasize the questions of what, how, and why (Hamilton and Finley 2019).
Although there are many different ways to frame the integration of qualitative research into IS research design, we return to Palinkas and colleagues’ use of EPIS to highlight how qualitative methodologies span the phases of Exploration, Preparation, Implementation, and Sustainment (Moullin et al. 2019).
Exploration is the first phase in which qualitative methods can be used in IS. In this phase, the needs of stakeholders are assessed and decisions regarding the implementation of an EBP are made. Rapid ethnographic assessment or ethnographic site visits can be ideal methodologies for making these assessments and beginning to explore next steps, such as intervention adaptations and context-specific implementation strategies. Simultaneously, stakeholder engagement is initiated through open-ended interviews, focus groups, and review of policies or public-facing documents, along with a general exploration of the context surrounding the organization, patient population, or community.
The Preparation phase begins when the decision is made to implement an innovation or EBP in the setting. Often the work of anthropologists and other qualitative researchers is utilized in this phase to adapt the EBP to fit the setting and create an implementation plan that addresses likely facilitators and barriers. Qualitative researchers may conduct additional interviews and focus groups with stakeholders to understand their views of the planned EBP and the barriers and facilitators they perceive to implementation in their setting. These data are used to adapt the EBP and create the implementation plan. They can also lay the foundation for rapport building with stakeholders by demonstrating that the implementation team has listened to and understood their perspectives and worked to address their concerns, which is critical for ensuring a supportive implementation climate.
In the Implementation phase, the EBP is initiated and monitored. The engagement of anthropologists and other qualitative researchers depends heavily on the research plan. The designs employed in this phase typically involve evaluation, including formative, process, and summative evaluations. Although each type has distinct objectives, qualitative researchers are often part of the data gathering and analysis processes. For formative evaluation, the research design is focused on learning how the implementation is going and what adjustments could be made to improve it. Ethnographic site visits, interviews, focus groups, and observations are ideal methods for this type of design. In particular, rapid ethnographic assessment can be beneficial with its focus on gathering a comprehensive understanding of the process in context within a short, well-defined time frame. It is important to note that formative evaluation can be used to inform adaptations to the EBP or to implementation strategies, depending on the predetermined research design. For example, if fidelity to the EBP is important, qualitative researchers can observe whether the EBP is being delivered as designed, while providing data and analysis to support adaptations to implementation strategies in an effort to improve outcomes.
Process and summative evaluation designs are used when fidelity to the EBP and implementation strategies is central to the research design. With process evaluation, researchers monitor and collect data on how the EBP is being delivered; they also focus on whether and how the implementation strategies are being utilized. Unlike in formative evaluation, researchers do not share the data and analysis with the team with the intent of making adaptations and changes. Summative evaluation has the same objectives; however, the research team conducts the data gathering at the “end” of the implementation period to take advantage of hindsight in examining what supported implementation and the challenges faced in the process.
A well-defined research plan is critical for ensuring the implementation research team meets its objectives. The implementation team and the research team often comprise the same members. Most, if not all, implementation scientists recognize the importance of understanding context in order to adapt the EBP and/or implementation strategies to fit that context or address related barriers. Thus, a formative evaluation design is often employed. The role of anthropologists and other qualitative researchers is to understand the context and barriers and feed their analysis back to the team to improve adaptation; they may also be part of the team that creates the adaptations. However, if fidelity to the EBP and/or implementation strategies is central to the research design, it is important for anthropologists and other qualitative researchers to remain detached from the implementation team and keep some “objective” distance to observe and document the process during (process evaluation) or following (summative evaluation) the implementation.
In the Sustainment phase, the focus is on ensuring that supports are in place to help the EBP continue. Ideally, the earlier phases of EPIS lay the foundation for sustainment, with qualitative research contributing to that foundation. Research designs could also target a long-term plan for follow-up or a return visit to the setting to re-examine any changes. Qualitative methods, including ethnographic site visits, interviews, focus groups, document review, and observations, can be used.
While EPIS is helpful in delineating individual phases of the implementation research process in which ethnographic and qualitative methods can be effectively applied, the China/US Women’s Health Project demonstrates the strength of using ethnographic methods through the whole process from exploration to preparation to implementation and finally sustainment. The project team conducted an intensive ethnographic study on the implementation of an intervention to promote female condom use among sex workers in four communities in China. The research was a partnership between US and Chinese researchers, the Centers for Disease Control and Prevention (CDC), and provincial-level and local healthcare and public health organizations. The research design included 6 months of formative work to observe and map the local sex worker establishments and the intersecting organizations. This work could be characterized as Exploration and Preparation in the EPIS framework. It laid the groundwork for tailoring implementation to the communities. The intervention was implemented over a 6-month period; pre-/post-intervention surveys were used to document implementation and health outcomes. Finally, the design included a 6-month sustainment phase. Of the several publications that document the study and its outcomes (Liao et al. 2011; Weeks et al. 2010), a 2013 publication by the team (Weeks et al. 2013) demonstrates how the ethnographic data from each stage of the study allowed them to map the multilevel systems impacting sustainment of the intervention and the variations across sites.
An additional set of research designs that does not fit the EPIS phases is the study of policy implementation as a natural experiment (Théodore et al. 2019) or of implementation strategies used in clinical settings outside a formally planned study (Goedken et al. 2019). Such designs take advantage of the implementation of a policy, EBP, or implementation strategy and examine how people in the setting carry out the implementation and why they make the decisions they do. Ethnography works well in these situations. In addition, rapid approaches to ethnographic data collection and analysis are increasingly used in IS due to the demands of the research time frame or the needs of clinical partners. While many implementation scientists rely on rapid ethnographic assessment (Beebe 2014; Sangaramoorthy and Kroeger 2020), Palinkas and Zatzick (2019) developed Rapid Assessment Procedure-Informed Clinical Ethnography specifically in the context of IS.
Finally, participatory action research and community-based participatory research designs often are combined with qualitative and ethnographic methods to ensure all perspectives are included in IS, often to counter unequal power structures and integrate into the process those excluded from decision-making positions. Morton Ninomiya and colleagues describe their work to decolonize approaches to alcohol prevention programs among the people of the Sheshatshiu Innu First Nation in Canada (Morton Ninomiya, Hurley, and Penashue 2020). Their work led to exploring interventions to prevent fetal alcohol spectrum disorder and care for children with the condition, with interventions designed jointly by the Sheshatshiu Innu First Nation and provincial and national practitioners and researchers. Implementation then shifted to the Sheshatshiu Innu First Nation, which supported the sustainment of the program. In each of these research designs and approaches, ethnography is central to widening the lens on understanding what makes implementation successful.
Role of Anthropologists in Implementation Science
Why We Need More Anthropology in Implementation Science
It is apparent that IS is an important field for solving human problems, and one in which anthropologists should be highly involved given the congruences between anthropology and IS. Yet the rationale for more anthropology in IS goes beyond good fit: the more important reason is the vital contributions that anthropology can make to the field.
Generally, anthropology can be seen as the holistic study of humanity, past and present in all of its aspects including cultural, social, linguistic, and biological. IS, by contrast, emerges from a Western perspective (Boulton, Sandall, and Sevdalis 2020). Consequently, IS assumptions and values about sociocultural phenomena such as organizations and institutions, including family and social relationships, tend to reflect the dominant cultures of Europe and North America. Although there is a growing movement to apply IS globally (Bertram et al. 2021; Ridde, Perez, and Robert 2020; Shelton et al. 2020), there is generally limited attention to cultural differences, power, and structural barriers.
One of anthropology’s greatest attributes is its ability to raise awareness of the sociocultural world in which people live, including the cultural patterns that surround them but that they do not “see.” Anthropology can be especially important in helping IS understand the contexts in which implementation occurs and in identifying cultural blind spots in perceptions and understandings (Cohen et al. 2008). Indeed, anthropologists have long sought to understand context, remove cultural blinders, and make behaviors, beliefs, attitudes, and other aspects of human life and functioning apparent.
Agar’s concept of the “Professional Stranger” (Agar 1980) is relevant to IS: an ethnographer who encounters a culture or society as an intentional outsider, that is, an intentionally uninformed member of society (Granosik 2011), so as to learn about it from that society’s members. Implementation researchers rarely take this position; rather, they usually assume the role of an expert seeking to characterize a context rather than to understand it. Anthropology’s use of this inexpert, knowledge-seeking role can contribute greatly to IS understanding of all aspects of context.
Beyond methods and approaches, anthropology attends to power structures and discrepancies in power among stakeholder groups, phenomena that, while sometimes acknowledged in IS, are not frequently critiqued. Anthropologists can provide a questioning, critical perspective on the object of study and on how it is studied, a perspective that is lacking in much IS work. Anthropologists can foster awareness of needs for structural, organizational, or policy changes to improve health and healthcare, beyond the implementation of an EBP. They can explore specific power inequities and the organizational, political, or economic structures that normalize these discrepancies. Anthropologists can help the still-developing field of IS do more than implement EBPs. They can help increase the power of evidence through critical implementation: with a critical eye to power, researchers can bring unhelpful or unjust power discrepancies to light. Interventions could then go beyond appropriateness to context and lead to the transformation of context itself (Peek et al. 2014).
Implementation researchers typically hold a position of power vis-à-vis the groups with whom they are working. Power differentials are established based on research knowledge and expertise, which is prioritized over local or practice-based knowledge during the implementation process. Anthropologists’ use of reflexivity to examine their perspective, bias, and role in the research process could apply more broadly to IS (Snell-Rood et al. 2021). Reflexivity provides a different lens to address power and inequity in the field.
Anthropology is positioned to answer the questions IS poses, since anthropological theory and methods allow an understanding and brokering of both emic and etic perspectives, as well as an in-depth exploration of the richness, fluidity, and complexity of context. Yet anthropology is far from the dominant discipline in IS, where psychologists, nurses, and physicians are found in abundance. IS is an interdisciplinary and public-oriented field: there is great interest at the NIH, the CDC, the Agency for Healthcare Research and Quality (AHRQ), the Patient-Centered Outcomes Research Institute (PCORI), and the Department of Veterans Affairs (VA) because IS is oriented toward making research useful for the public.
Examples of Anthropologists Doing Implementation Science
Anthropologists are contributing new methods to support IS both globally and in the United States. While anthropological data collection techniques may not seem innovative to anthropologists, their application in the IS context makes them novel. A few examples from the work of practicing anthropologists are instructive.
Anthropologists Alison Hamilton and Erin Finley developed a qualitative data collection method that pairs the rigor of anthropological qualitative methodology with the research questions and study designs of IS. It is part of their work in the VA’s Quality Enhancement Research Initiative (QUERI) related to “Enhancing Mental and Physical Health of Women through Engagement and Retention” (EMPOWER). Periodic reflections are a structured process of gathering qualitative data from project stakeholders “to ensure consistent documentation of key activities and other phenomena (e.g., challenges, adaptations) occurring over the course of implementation” (Finley et al. 2018, 156). Using an adapted template analysis method, this approach addresses concerns that qualitative research takes too long to usefully inform implementation. Periodic reflections bring the depth of understanding and close stakeholder relationships offered by ethnography together with a pragmatic method for providing real-time data to inform implementation.
Rapid ethnography, or rapid ethnographic assessment, is another methodological innovation in IS that was developed and popularized in healthcare (Palinkas and Zatzick 2019; Sangaramoorthy and Kroeger 2020; Vindrola-Padros 2021; Vindrola-Padros and Johnson 2020; Vindrola-Padros and Vindrola-Padros 2018). Rapid ethnography is increasingly used in IS research because of its ability to capture the complexity of implementation contexts and the multilevel factors that influence uptake of EBPs. By providing a structure for a sizable team, potentially including inexperienced qualitative researchers, to collect a large amount of data, rapid ethnographies are another way that anthropologists have sought to bring the richness of ethnographic data into time frames that are practical for the pace of implementation (Coleman-Phox et al. 2013). As Cecilia Vindrola-Padros, Thurka Sangaramoorthy, and Karen Kroeger have demonstrated, rapid ethnographies are widely used in global contexts, especially in low-resource environments. With their focus on triangulation of multiple data sources and reflexivity, rapid ethnographies are particularly useful for IS, with its iterative, immediate data needs and concern with capturing the perspectives of all stakeholders in the implementation process.
IS is primarily concerned with healthcare fields and the points of intersection between healthcare and related fields (e.g., psychology and education). Qualitative and mixed methods research is becoming the norm in IS. Consequently, there is a growing need (and subsequent opportunity) for researchers trained in anthropological methods to guide and participate in those projects. For example, a scoping review in 2021 identified 73 IS articles in healthcare academic journals that specifically described their approach as ethnographic (Gertner et al. 2021). It is very hard to discern, either from publications or publicly available project information, whether the person doing mixed methods, qualitative, and ethnographic work on a project is an anthropologist or a researcher trained in another field (e.g., public health, sociology, nursing). Perhaps this is an indication of anthropologists integrating themselves seamlessly into multidisciplinary healthcare teams at the project or even disciplinary level. Indeed, two of the authors of this article are faculty in a Division of Internal Medicine.
Anthropologists who work in healthcare-related IS hold varied positions and job titles that rarely indicate either anthropological or IS affiliations, from academic/university faculty positions (e.g., in medicine, public health) to various roles at the VA, researcher and project officer positions at the NIH, and healthcare-related nonprofits, among others. This integration, as well as the invisibility of anthropologists in IS publications, makes it difficult to find other anthropologists doing IS. For example, anthropologists in these roles often do not include IS on their LinkedIn profiles, and there are not (yet) listservs or professional groups specific to anthropologists in IS. Most anthropologists in IS would consider themselves “applied” anthropologists. Consequently, the Society for Applied Anthropology and its annual meeting are a space where anthropologists explicitly showcase their work in IS and make connections with colleagues. As many applied anthropologists working outside their discipline have found, traditional forms of networking are key to finding homes to practice our craft.
- Curran, G. M. 2020. “Implementation Science Made Too Simple: A Teaching Tool.” Implementation Science Communications 1 (27): 1–3.
- Kirchner, J. E., J. L. Smith, B. J. Powell, T. J. Waltz, and E. K. Proctor. 2020. “Getting a Clinical Innovation into Practice: An Introduction to Implementation Strategies.” Psychiatry Research 283 (January): 112467.
- Moullin, J. C., K. S. Dickson, N. A. Stadnick, B. Albers, P. Nilsen, S. Broder-Fingert, B. Mukasa, and G. A. Aarons. 2020. “Ten Recommendations for Using Implementation Frameworks in Research and Practice.” Implementation Science Communications 1 (42, April 30): 1–12.
- Smith J. D., D. H. Li, and M. R. Rafferty. 2020. “The Implementation Research Logic Model: A Method for Planning, Executing, Reporting, and Synthesizing Implementation Projects.” Implementation Science 15 (84, September 25): 1–12.
- Aarons, G. A., M. Hurlburt, and S. M. Horwitz. 2011. “Advancing a Conceptual Model of Evidence-Based Practice Implementation in Public Service Sectors.” Administration and Policy in Mental Health and Mental Health Services Research 38 (1): 4–23.
- Damschroder L. J., D. C. Aron, R. E. Keith, S. R. Kirsh, J. A. Alexander, and J. C. Lowery. 2009. “Fostering Implementation of Health Services Research Findings into Practice: A Consolidated Framework for Advancing Implementation Science.” Implementation Science 4 (50, August 7): 1–5.
- Forman, J., M. Heisler, L. J. Damschroder, E. Kaselitz, and E. A. Kerr. 2017. “Development and Application of the RE-AIM QuEST Mixed Methods Framework for Program Evaluation.” Preventive Medicine Reports 6 (June): 322–328.
- Glasgow R. E., T. M. Vogt, and S. M. Boles. 1999. “Evaluating the Public Health Impact of Health Promotion Interventions: The RE-AIM Framework.” American Journal of Public Health 89 (9, September 1): 1322–1327.
- Miller, C. J., M. L. Barnett, A. A. Baumann, C. A. Gutner, and S. Wiltsey-Stirman. 2021. “The FRAME-IS: A Framework for Documenting Modifications to Implementation Strategies in Healthcare.” Implementation Science 16 (1, April 7): 36.
- Powell, B. J., T. J. Waltz, M. J. Chinman, L. J. Damschroder, J. L. Smith, M. M. Matthieu, et al. 2015. “A Refined Compilation of Implementation Strategies: Results from the Expert Recommendations for Implementing Change (ERIC) Project.” Implementation Science 10 (21, February 12): 1–4.
- Waltz, T. J., B. J. Powell, M. J. Chinman, J. L. Smith, M. M. Matthieu, E. K. Proctor, et al. 2014. “Expert Recommendations for Implementing Change (ERIC): Protocol for a Mixed Methods Study.” Implementation Science 9: 39.
- Proctor, E., H. Silmere, R. Raghavan, P. Hovmand, G. Aarons, A. Bunger, et al. 2011. “Outcomes for Implementation Research: Conceptual Distinctions, Measurement Challenges, and Research Agenda.” Administration and Policy in Mental Health and Mental Health Services Research 38 (2): 65–76.
- Weiner B. J., C. C. Lewis, C. Stanick, B. J. Powell, C. N. Dorsey, A. S. Clary, et al. 2017. “Psychometric Assessment of Three Newly Developed Implementation Outcome Measures.” Implementation Science 12 (1): 108.
- Aarons, G. A., D. L. Fettes, D. H. Sommerfeld, and D. H. Palinkas. 2012. “Mixed Methods for Implementation Research: Application to Evidence-Based Practice Implementation and Staff Turnover in Community-Based Organizations Providing Child Welfare Services.” Child Maltreatment 17 (1): 67–79.
- Agar, M. H. 1980. The Professional Stranger: An Informal Introduction to Ethnography. Bingley, UK: Emerald.
- Allen, M., A. Wilhelm, L. E. Ortega, S. Pergament, N. Bates, and B. Cunningham. 2021. “Applying a Race(ism)-Conscious Adaptation of the CFIR Framework to Understand Implementation of a School-Based Equity-Oriented Intervention.” Ethnicity & Disease 31 (suppl.): 375–388.
- Balas, E. A., and S. A. Boren. 2000. “Managing Clinical Knowledge for Health Care Improvement.” Yearbook of Medical Informatics 9 (1): 65–70.
- Bauer, M. S., L. Damschroder, H. Hagedorn, J. Smith, and A. M. Kilbourne. 2015. “An Introduction to Implementation Science for the Non-Specialist.” BMC Psychology 3: 32.
- Bauer, M. S., and J. Kirchner. 2020. “Implementation Science: What Is It and Why Should I Care?” Psychiatry Research 283: 112376.
- Beebe, J. 2014. Rapid Qualitative Inquiry: A Field Guide to Team-Based Assessment. Lanham, MD: Rowman & Littlefield.
- Bertram, R., D. Edwards, T. Engell, S. E. U. Kerns, J. Øvretveit, R. Rojas-Andrade, et al. 2021. “Welcome to Global Implementation Research and Applications[https://doi.org/10.1007/s43477-021-00006-3].” Global Implementation Research and Applications 1 (1): 1–4.
- Birken, S. A., C. L. Rohweder, B. J. Powell, C. M. Shea, J. Scott, J. Leeman, et al. 2018. “T-CaST: An Implementation Theory Comparison and Selection Tool.” Implementation Science 13 (1): 143.
- Boulton, R., J. Sandall, and N. Sevdalis. 2020. “The Cultural Politics of ‘Implementation Science.’” Journal of Medical Humanities 41: 379–394.
- Boylorn, R. M. 2008. “Lived Experience.” In The SAGE Encyclopedia of Qualitative Research Methods. Edited by L. M. Given, 489–490. Thousand Oaks, CA: SAGE.
- Brown, C. H., G. Curran, L. A. Palinkas, G. A. Aarons, K. B. Wells, L. Jones, et al. 2017. “An Overview of Research and Evaluation Designs for Dissemination and Implementation.” Annual Review of Public Health 38: 1–22.
- Brownson, R. C. 2017. Dissemination and Implementation Research in Health: Translating Science to Practice. New York: Oxford University Press.
- Cabassa, L. J. 2016. “Implementation Science: Why It Matters for the Future of Social Work.” Journal of Social Work Education 52 (suppl. 1): S38–S50.
- Canadian Institutes of Health Research. 2020. Knowledge Translation.
- Chambers, D. A., R. E. Glasgow, and K. C. Stange. 2013. “The Dynamic Sustainability Framework: Addressing the Paradox of Sustainment amid Ongoing Change.” Implementation Science 8 (1): 117.
- Chauhan, B. F., M. Jeyaraman, A. S. Mann, J. Lys, B. Skidmore, K. M. Sibley, et al. 2017. “Behavior Change Interventions and Policies Influencing Primary Healthcare Professionals’ Practice—An Overview of Reviews.” Implementation Science 12 (1): 1–16.
- Cohen, D., J. Leeman, B. F. Crabtree, D. K. Padgett, L. Damschroder, L. Palinkas, et al. 2018. Qualitative Methods in Implementation Science. Bethesda, MD: National Cancer Institute.
- Cohen, D. J., B. F. Crabtree, R. S. Etz, B. A. Balasubramanian, K. E. Donahue, L. C. Leviton, et al. 2008. “Fidelity versus Flexibility: Translating Evidence-Based Research into Practice.” American Journal of Preventive Medicine 35 (5): S381–S389.
- Coleman-Phox, K., B. A. Laraia, N. Adler, C. Vieten, M. Thomas, and E. Epel. 2013. “Recruitment and Retention of Pregnant Women for a Behavioral Intervention: Lessons from the Maternal Adiposity, Metabolism, and Stress (MAMAS) Study.” Preventing Chronic Disease 10: 120096.
- Cornelissen, J. P. 2017. “Preserving Theoretical Divergence in Management Research: Why the Explanatory Potential of Qualitative Research Should Be Harnessed Rather than Suppressed.” Journal of Management Studies 54 (3): 368–383.
- Cullen, L., K. Hanrahan, S. W. Edmonds, H. S. Reisinger, and M. Wagner. 2022. “Iowa Implementation for Sustainability Framework[https://doi.org/10.1186/s13012-021-01157-5].” Implementation Science 17 (1): 1.
- Curran, G. M., M. Bauer, B. Mittman, J. M. Pyne, and C. Stetler. 2012. “Effectiveness-Implementation Hybrid Designs: Combining Elements of Clinical Effectiveness and Implementation Research to Enhance Public Health Impact.” Medical Care 50 (3): 217–226.
- Damschroder, L. J. 2020. “Clarity Out of Chaos: Use of Theory in Implementation Research.” Psychiatry Research 283: 112461.
- Damschroder, L. J., D. C. Aron, R. E. Keith, S. R. Kirsh, J. A. Alexander, and J. C. Lowery. 2009. “Fostering Implementation of Health Services Research Findings into Practice: A Consolidated Framework for Advancing Implementation Science.” Implementation Science 4: 50.
- Davidoff, F., M. Dixon-Woods, L. Leviton, and S. Michie. 2015. “Demystifying Theory and Its Use in Improvement.” BMJ Quality & Safety 24 (3): 228–238.
- Dearing, J. W., and J. G. Cox. 2018. “Diffusion of Innovations Theory, Principles, and Practice.” Health Affairs 37 (2): 183–190.
- Douglas, N. F., and V. L. Burshnic. 2019. “Implementation Science: Tackling the Research to Practice Gap in Communication Sciences and Disorders.” Perspectives of the ASHA Special Interest Groups 4 (1): 3–7.
- Durlak, J. A. 2015. “What Everyone Should Know about Implementation.” In Handbook of Social and Emotional Learning: Research and Practice. Edited by J. A. Durlak, C. E. Domitrovich, R. P. Weissberg, and T. P. Gullotta, 395–405. New York: Guilford Press.
- Eccles, M. P., and B. S. Mittman. 2006. “Welcome to Implementation Science.” Implementation Science 1 (1): 1.
- Edwards, N. E., and P. M. Barker. 2014. “The Importance of Context in Implementation Research.” Journal of Acquired Immune Deficiency Syndromes 67: S157–S162.
- Ellen, R. 2010. “Theories in Anthropology and ‘an Anthropological Theory.’” Journal of the Royal Anthropological Institute 16 (2): 387–404.
- Escoffery, C., E. Lebow-Skelley, R. Haardoerfer, E. Boing, H. Udelson, R. Wood, et al. 2018. “A Systematic Review of Adaptations of Evidence-Based Public Health Interventions Globally.” Implementation Science 13 (1): 1–21.
- Etherington, N., I. B. Rodrigues, L. Giangregorio, I. D. Graham, A. M. Hoens, D. Kasperavicius, et al. 2020. “Applying an Intersectionality Lens to the Theoretical Domains Framework: A Tool for Thinking about How Intersecting Social Identities and Structures of Power Influence Behaviour.” BMC Medical Research Methodology 20 (1): 1–13.
- Etingen, B., J. Patrianakos, M. Wirth, T. P. Hogan, B. M. Smith, E. Tarlov, et al. 2020. “TeleWound Practice within the Veterans Health Administration: Protocol for a Mixed Methods Program Evaluation[https://doi.org/10.2196/20139].” JMIR Research Protocols 9 (7): e20139.
- Feldstein, A. C., and R. E. Glasgow. 2008. “A Practical, Robust Implementation and Sustainability Model (PRISM) for Integrating Research Findings into Practice.” The Joint Commission Journal on Quality and Patient Safety 34 (4): 228–243.
- Finley, E. P., A. K. Huynh, M. M. Farmer, B. Bean-Mayberry, T. Moin, S. M. Oishi, et al. 2018. “Periodic Reflections: A Method of Guided Discussions for Documenting Implementation Phenomena.” BMC Medical Research Methodology 18 (1): 153.
- Fleurey, F., V. Dehlen, N. Bencomo, B. Morin, and J.-M. Jézéquel. 2009. “Modeling and Validating Dynamic Adaptation.” In Models in Software Engineering. Edited by M. R. V. Chaudron, 97–108. Berlin, Heidelberg: Springer.
- Flottorp, S. A., A. D. Oxman, J. Krause, N. R. Musila, M. Wensing, M. Godycki-Cwirko, et al. 2013. “A Checklist for Identifying Determinants of Practice: A Systematic Review and Synthesis of Frameworks and Taxonomies of Factors that Prevent or Enable Improvements in Healthcare Professional Practice.” Implementation Science 8 (1): 1–11.
- Foy, R., J. Ovretveit, P. G. Shekelle, P. J. Pronovost, S. L. Taylor, S. Dy, et al. 2011. “The Role of Theory in Research to Develop and Evaluate the Implementation of Patient Safety Practices.” BMJ Quality & Safety 20 (5): 453–459.
- Gertner, A. K., J. Franklin, I. Roth, G. H. Cruden, A. D. Haley, E. P. Finley, et al. 2021. “A Scoping Review of the Use of Ethnographic Approaches in Implementation Research and Recommendations for Reporting.” Implementation Research and Practice 2: 1–13.
- Glasgow, R. E., C. Battaglia, M. McCreight, R. A. Ayele, and B. A. Rabin. 2020. “Making Implementation Science More Rapid: Use of the RE-AIM Framework for Mid-Course Adaptations across Five Health Services Research Projects in the Veterans Health Administration.” Frontiers in Public Health 8: 194.
- Glasgow, R. E., S. M. Harden, B. Gaglio, B. Rabin, M. L. Smith, G. C. Porter, et al. 2019. “RE-AIM Planning and Evaluation Framework: Adapting to New Science and Practice with a 20-Year Review.” Frontiers in Public Health 7: 64.
- Glasgow, R. E., and W. T. Riley. 2013. “Pragmatic Measures: What They Are and Why We Need Them.” American Journal of Preventive Medicine 45 (2): 237–243.
- Glasgow, R. E., T. M. Vogt, and S. M. Boles. 1999. “Evaluating the Public Health Impact of Health Promotion Interventions: The RE-AIM Framework.” American Journal of Public Health 89 (9): 1322–1327.
- Goedken, C. C., D. J. Livorsi, M. Sauder, M. W. Vander Weg, E. E. Chasco, N.-C. Chang, et al. 2019. “‘The Role as a Champion Is to Not Only Monitor but to Speak Out and to Educate’: The Contradictory Roles of Hand Hygiene Champions.” Implementation Science 14 (1): 1–11.
- Goldstein, H., and L. Olswang. 2017. “Is There a Science to Facilitate Implementation of Evidence-Based Practices and Programs?[https://doi.org/10.1080/17489539.2017.1416768]” Evidence-Based Communication Assessment and Intervention 11 (3–4): 55–60.
- Granosik, M. 2011. “The Third Party and the (Professional) Stranger: Personal and Interactional Approach to Participative Research.” Pensée plurielle (3): 41–48.
- Green, L. W., and M. Kreuter. 2004. Health Program Planning: An Educational and Ecological Approach. New York: McGraw-Hill.
- Hamilton, A. B., and E. P. Finley. 2019. “Qualitative Methods in Implementation Research: An Introduction.” Psychiatry Research 280: 112516.
- Holtrop, J. S., P. A. Estabrooks, B. Gaglio, S. M. Harden, R. S. Kessler, D. K. King, et al. 2021. “Understanding and Applying the RE-AIM Framework: Clarifications and Resources.” Journal of Clinical and Translational Science 5 (1): E126.
- Hwang, S., S. A. Birken, C. L. Melvin, C. L. Rohweder, and J. D. Smith. 2020. “Designs and Methods for Implementation Research: Advancing the Mission of the CTSA Program.” Journal of Clinical and Translational Science 4 (3): 159–167.
- Kelly, B., and D. F. Perkins. 2012. Handbook of Implementation Science for Psychology in Education. Cambridge: Cambridge University Press.
- Kelly, J. A., T. G. Heckman, L. Y. Stevenson, and P. N. Williams. 2000. “Transfer of Research-Based HIV Prevention Interventions to Community Service Providers: Fidelity and Adaptation.” AIDS Education and Prevention 12: 87.
- Kirchner, J. E., J. L. Smith, B. J. Powell, T. J. Waltz, and E. K. Proctor. 2020. “Getting a Clinical Innovation into Practice: An Introduction to Implementation Strategies.” Psychiatry Research 283: 112467.
- Kislov, R., C. Pope, G. P. Martin, and P. M. Wilson. 2019. “Harnessing the Power of Theorising in Implementation Science.” Implementation Science 14 (1): 103.
- Lai, J., M. Klag, and K. Shikako-Thomas. 2019. “Designing a Program Evaluation for a Medical-Dental Service for Adults with Autism and Intellectual Disabilities Using the RE-AIM Framework.” Learning Health Systems 3 (3): e10192.
- Liao, S., M. R. Weeks, Y. Wang, F. Li, J. Jiang, J. Li, et al. 2011. “Female Condom Use in the Rural Sex Industry in China: Analysis of Users and Non-Users at Post-Intervention Surveys.” AIDS Care 23 (suppl. 1): 66–74.
- Lobb, R., and G. A. Colditz. 2013. “Implementation Science and Its Application to Population Health.” Annual Review of Public Health 34: 235–251.
- McHugh, R. K., and D. H. Barlow. 2010. “The Dissemination and Implementation of Evidence-Based Psychological Treatments: A Review of Current Efforts.” American Psychologist 65 (2): 73.
- McKay, M. M., and R. L. Paikoff. 2012. Community Collaborative Partnerships: The Foundation for HIV Prevention Research Efforts. New York: Routledge.
- McKay, M. M., O. Sensoy Bahar, and F. M. Ssewamala. 2020. “Implementation Science in Global Health Settings: Collaborating with Governmental & Community Partners in Uganda.” Psychiatry Research 283: 112585.
- McNulty, M., J. D. Smith, J. Villamar, I. Burnett-Zeigler, W. Vermeer, N. Benbow, et al. 2019. “Implementation Research Methodologies for Achieving Scientific Equity and Health Equity.” Ethnicity & Disease 29 (suppl. 1): 83–92.
- Mellins, C. A., D. Nestadt, A. Bhana, I. Petersen, E. J. Abrams, S. Alicea, et al. 2014. “Adapting Evidence-Based Interventions to Meet the Needs of Adolescents Growing Up with HIV in South Africa: The VUKA Case Example.” Global Social Welfare 1 (3): 97–110.
- Meyers, D. C., J. A. Durlak, and A. Wandersman. 2012. “The Quality Implementation Framework: A Synthesis of Critical Steps in the Implementation Process.” American Journal of Community Psychology 50 (3–4): 462–480.
- Michie, S., M. Johnston, C. Abraham, R. Lawton, D. Parker, and A. Walker. 2005. “Making Psychological Theory Useful for Implementing Evidence Based Practice: A Consensus Approach.” BMJ Quality & Safety 14 (1): 26–33.
- Miller, C. J., S. N. Smith, and M. Pugatch. 2020. “Experimental and Quasi-Experimental Designs in Implementation Research.” Psychiatry Research 283: 112452.
- Morris, Z. S., S. Wooding, and J. Grant. 2011. “The Answer Is 17 Years, What Is the Question: Understanding Time Lags in Translational Research.” Journal of the Royal Society of Medicine 104 (12): 510–520.
- Morton Ninomiya, M. E., N. Hurley, and J. Penashue. 2020. “A Decolonizing Method of Inquiry: Using Institutional Ethnography to Facilitate Community-Based Research and Knowledge Translation.” Critical Public Health 30 (2): 220–231.
- Mosteller, F. 1981. “Innovation and Evaluation.” Science 211 (4485): 881–886.
- Moullin, J. C., K. S. Dickson, N. A. Stadnick, B. Rabin, and G. A. Aarons. 2019. “Systematic Review of the Exploration, Preparation, Implementation, Sustainment (EPIS) Framework.” Implementation Science 14 (1): 1–16.
- National Cancer Institute, Division of Cancer Control and Population Sciences. 2020. About Implementation Science.
- Nilsen, P. 2015. “Making Sense of Implementation Theories, Models and Frameworks.” Implementation Science 10: 53.
- Nilsen, P., and S. Bernhardsson. 2019. “Context Matters in Implementation Science: A Scoping Review of Determinant Frameworks that Describe Contextual Determinants for Implementation Outcomes.” BMC Health Services Research 19 (1): 189.
- Nilsen, P., C. Ståhl, K. Roback, and P. Cairney. 2013. “Never the Twain Shall Meet?—A Comparison of Implementation Science and Policy Implementation Research.” Implementation Science 8 (1): 1–12.
- Nordstrum, L. E., P. G. LeMahieu, and E. Berrena. 2017. “Implementation Science: Understanding and Finding Solutions to Variation in Program Implementation.” Quality Assurance in Education 25 (1): 58–73.
- Palinkas, L. A. 2014. “Qualitative and Mixed Methods in Mental Health Services and Implementation Research.” Journal of Clinical Child and Adolescent Psychology 43 (6): 851–861.
- Palinkas, L. A., G. A. Aarons, S. Horwitz, P. Chamberlain, M. Hurlburt, and J. Landsverk. 2011. “Mixed Method Designs in Implementation Research.” Administration and Policy in Mental Health and Mental Health Services Research 38 (1): 44–53.
- Palinkas, L. A., and D. Zatzick. 2019. “Rapid Assessment Procedure Informed Clinical Ethnography (RAPICE) in Pragmatic Clinical Trials of Mental Health Services Implementation: Methods and Applied Case Study.” Administration and Policy in Mental Health and Mental Health Services Research 46 (2): 255–270.
- Patton, M. Q. 2014. Qualitative Research and Evaluation Methods: Integrating Theory and Practice. 4th ed. Thousand Oaks, CA: SAGE.
- Peek, C., R. E. Glasgow, K. C. Stange, L. M. Klesges, E. P. Purcell, and R. S. Kessler. 2014. “The 5 R’s: An Emerging Bold Standard for Conducting Relevant Research in a Changing World.” The Annals of Family Medicine 12 (5): 447–455.
- Peters, D. H., T. Adam, O. Alonge, I. A. Agyepong, and N. Tran. 2013. “Implementation Research: What It Is and How to Do It[https://doi.org/10.1136/bmj.f6753].” BMJ 347: 1–7.
- Powell, B. J., M. E. Fernandez, N. J. Williams, G. A. Aarons, R. S. Beidas, C. C. Lewis, et al. 2019. “Enhancing the Impact of Implementation Strategies in Healthcare: A Research Agenda.” Frontiers in Public Health 3: 1–7.
- Powell, B. J., T. J. Waltz, M. J. Chinman, L. J. Damschroder, J. L. Smith, M. M. Matthieu, et al. 2015. “A Refined Compilation of Implementation Strategies: Results from the Expert Recommendations for Implementing Change (ERIC) Project.” Implementation Science 10: 21.
- Proctor, E., H. Silmere, R. Raghavan, P. Hovmand, G. Aarons, A. Bunger, et al. 2011. “Outcomes for Implementation Research: Conceptual Distinctions, Measurement Challenges, and Research Agenda.” Administration and Policy in Mental Health and Mental Health Services Research 38 (2): 65–76.
- Proctor, E. K., B. J. Powell, and J. C. McMillen. 2013. “Implementation Strategies: Recommendations for Specifying and Reporting.” Implementation Science 8 (1): 1–11.
- Ridde, V., D. Perez, and E. Robert. 2020. “Using Implementation Science Theories and Frameworks in Global Health.” BMJ Global Health 5 (4): e002269.
- Rogers, E. M. 2003. Diffusion of Innovations. 5th ed. New York: Free Press.
- Rycroft-Malone, J. 2004. “The PARIHS Framework—A Framework for Guiding the Implementation of Evidence-Based Practice.” Journal of Nursing Care Quality 19 (4): 297–304.
- Rycroft-Malone, J., and T. Bucknall. 2010. “Theory, Frameworks, and Models.” In Models and Frameworks for Implementing Evidence-Based Practice: Linking Evidence to Action. Edited by J. Rycroft-Malone and T. Bucknall, 23. Oxford, UK: John Wiley & Sons.
- Salloum, R. G., E. A. Shenkman, J. J. Louviere, and D. A. Chambers. 2017. “Application of Discrete Choice Experiments to Enhance Stakeholder Engagement as a Strategy for Advancing Implementation: A Systematic Review.” Implementation Science 12 (1): 140.
- Sangaramoorthy, T., and K. A. Kroeger. 2020. Rapid Ethnographic Assessments: A Practical Approach and Toolkit for Collaborative Community Research. London: Routledge.
- Shelton, R. C., D. A. Chambers, and R. E. Glasgow. 2020. “An Extension of RE-AIM to Enhance Sustainability: Addressing Dynamic Context and Promoting Health Equity over Time.” Frontiers in Public Health 8 (134): 1–8.
- Shelton, R. C., M. Lee, L. E. Brotzman, L. Wolfenden, N. Nathan, and M. L. Wainberg. 2020. “What Is Dissemination and Implementation Science?: An Introduction and Opportunities to Advance Behavioral Medicine and Public Health Globally.” International Journal of Behavioral Medicine 27 (1): 3–20.
- Singal, A. G., P. D. R. Higgins, and A. K. Waljee. 2014. “A Primer on Effectiveness and Efficacy Trials.” Clinical and Translational Gastroenterology 5 (1): e45.
- Smith, J. D., and M. Hasan. 2020. “Quantitative Approaches for the Evaluation of Implementation Research Studies.” Psychiatry Research 283: 112521.
- Smith, J. D., D. H. Li, and M. R. Rafferty. 2020. “The Implementation Research Logic Model: A Method for Planning, Executing, Reporting, and Synthesizing Implementation Projects.” Implementation Science 15 (1): 1–12.
- Snell-Rood, C., E. T. Jaramillo, A. B. Hamilton, S. E. Raskin, F. M. Nicosia, and C. Willging. 2021. “Advancing Health Equity through a Theoretically Critical Implementation Science.” Translational Behavioral Medicine 11 (8): 1617–1625.
- Springer, M. V., A. E. Sales, N. Islam, A. C. McBride, Z. Landis-Lewis, M. Tupper, et al. 2021. “A Step toward Understanding the Mechanism of Action of Audit and Feedback: A Qualitative Study of Implementation Strategies.” Implementation Science 16 (1): 35.
- Tabak, R. G., E. C. Khoong, D. A. Chambers, and R. C. Brownson. 2012. “Bridging Research and Practice: Models for Dissemination and Implementation Research.” American Journal of Preventive Medicine 43 (3): 337–350.
- Théodore, F. L., A. Bonvecchio Arenas, A. García-Guerra, I. B. García, R. Alvarado, C. J. Rawlinson, et al. 2019. “Sociocultural Influences on Poor Nutrition and Program Utilization of Mexico’s Conditional Cash Transfer Program.” The Journal of Nutrition 149 (suppl. 1): 2290S–2301S.
- Turner, J. H., and P. R. Turner. 1978. The Structure of Sociological Theory. Homewood, IL: Dorsey Press.
- Villalobos Dintrans, P., T. J. Bossert, J. Sherry, and M. E. Kruk. 2019. “A Synthesis of Implementation Science Frameworks and Application to Global Health Gaps.” Global Health Research and Policy 4: 25.
- Vindrola-Padros, C. 2021. Rapid Ethnographies: A Practical Guide. Cambridge: Cambridge University Press.
- Vindrola-Padros, C., and G. A. Johnson. 2020. “Rapid Techniques in Qualitative Research: A Critical Review of the Literature.” Qualitative Health Research 30 (10): 1596–1604.
- Vindrola-Padros, C., and B. Vindrola-Padros. 2018. “Quick and Dirty? A Systematic Review of the Use of Rapid Ethnographies in Healthcare Organisation and Delivery.” BMJ Quality & Safety 27 (4): 321–330.
- Virtanen, L., A.-M. Kaihlanen, E. Laukka, K. Gluschkoff, and T. Heponiemi. 2021. “Behavior Change Techniques to Promote Healthcare Professionals’ eHealth Competency: A Systematic Review of Interventions.” International Journal of Medical Informatics 149: 104432.
- Waltz, T. J., B. J. Powell, M. E. Fernandez, B. Abadie, and L. J. Damschroder. 2019. “Choosing Implementation Strategies to Address Contextual Barriers: Diversity in Recommendations and Future Directions.” Implementation Science 14 (1): 42.
- Waltz, T. J., B. J. Powell, M. M. Matthieu, L. J. Damschroder, M. J. Chinman, J. L. Smith, et al. 2015. “Use of Concept Mapping to Characterize Relationships among Implementation Strategies and Assess their Feasibility and Importance: Results from the Expert Recommendations for Implementing Change (ERIC) Study.” Implementation Science 10: 109.
- Weeks, M. R., E. Coman, H. Hilario, J. Li, and M. Abbott. 2013. “Initial and Sustained Female Condom Use among Low-Income Urban US Women.” Journal of Women’s Health 22 (1): 26–36.
- Weeks, M. R., S. Liao, F. Li, J. Li, J. Dunn, B. He, et al. 2010. “Challenges, Strategies, and Lessons Learned from a Participatory Community Intervention Study to Promote Female Condoms among Rural Sex Workers in Southern China.” AIDS Education and Prevention 22 (3): 252–271.
- Weick, K. E. 1989. “Theory Construction as Disciplined Imagination.” Academy of Management Review 14 (4): 516–531.
- Westfall, J. M., J. Mold, and L. Fagnan. 2007. “Practice-Based Research—‘Blue Highways’ on the NIH Roadmap.” Journal of the American Medical Association 297 (4): 403–406.
- Zitti, T., L. Gautier, A. Coulibaly, and V. Ridde. 2019. “Stakeholder Perceptions and Context of the Implementation of Performance-Based Financing in District Hospitals in Mali.” International Journal of Health Policy and Management 8 (10): 583.