
Printed from Encyclopedia of Social Work. Under the terms of the licence agreement, an individual user may print out a single article for personal use (for details see Privacy Policy and Legal Notice).

date: 29 November 2022

Dissemination and Implementation Research

  • Enola Proctor, Frank J. Bruno Professor of Social Work Research, Director of the Center for Mental Health Services Research, Washington University in St. Louis


Implementation research seeks to inform how to deliver evidence-based interventions, programs, and policies in real-world settings so their benefits can be realized and sustained. Its ultimate aim is to build a base of evidence about the most effective processes and strategies for improving service delivery. Implementation research builds upon effectiveness research and then seeks to discover how to use implementation strategies to move those interventions into particular settings, extending their availability, reach, and benefits to clients and communities. This entry provides an overview of implementation research as a component of research translation and defines key terms, including implementation outcomes and implementation strategies. It also surveys guiding theories and models and such methodological issues as variable measurement, research design, and stakeholder engagement.


  • Administration and Management
  • Populations and Practice Settings
  • Research and Evidence-Based Practice

The delivery of services that are known to be effective reflects social work’s value base and its historic commitment to scientific knowledge. Many schools of social work identify themselves as teaching the principles of evidence-based practices as well as interventions—micro and macro—that have empirical support. Moreover, agencies—the site of field training in social-work education and the source of services for clients—often align themselves with the delivery of evidence-based interventions and endorse staff training in empirically supported treatments.

Yet the actual delivery of evidence-based services to social-work clients remains a challenge. This entry focuses on implementation research, research that seeks to inform how to deliver evidence-based interventions in real-world settings so their benefits can be sustained. It provides an overview of implementation research, including definitions, the need for implementation research to inform and help close the “quality gap” in social work, guiding theories and models, and methodological issues.

What Is Implementation Research? Key Definitions

Implementation research is the scientific study of how to integrate evidence-based interventions into practice and policy (National Institutes of Health [NIH], 2013). Implementation researchers pursue a variety of questions, including how research evidence is translated to practice and how new interventions are adopted, delivered, and sustained in real-world service systems. The ultimate aim of implementation research is to build a base of evidence about the most effective processes and strategies for improving the quality of care. Implementation research must be preceded by and build upon effectiveness research, or research that tests evidence-based interventions. Implementation research then seeks to discover how to move those interventions into specific settings, extending their availability, reach, and benefits to clients and communities. Although implementation science has grown in both quality and quantity since the beginning of the 21st century (Chambers, 2012), the field continues to evolve and the most challenging questions remain unanswered. Much remains to be learned about the effectiveness of implementation strategies, the most effective strategies for various service contexts, and the scale-up and spread of effective practices for community and population benefit. Another priority topic is understanding how service systems can deliver multiple evidence-based interventions to meet chronic and co-occurring client needs.

The National Institutes of Health (NIH) link implementation research and dissemination research within the umbrella term “D&I science.” Dissemination is the “targeted distribution of information and intervention materials to a specific public health or clinical practice audience,” whereas dissemination research investigates how, when, by whom, and under what circumstances information—specifically research evidence—is spread throughout agencies, organizations, and public-health and clinical providers (Glasgow et al., 2012; NIH, 2010).

A growing, multidisciplinary literature has developed on the science and practice of implementation. As in many emergent fields, language and terminology often vary, and implementation research is no exception. What the NIH calls implementation, the Canadian Institutes of Health Research describes as knowledge translation (Brownson, Colditz, & Proctor, 2012). Brekke, Ell, and Palinkas (2007) and Mullen, Bledsoe, and Bellamy (2008) used the term “translational science” for research whose purpose is to accelerate the use of findings from our best science in usual-care settings. Creating “a common lexicon . . . of implementation . . . terminology” is important both for the science of implementation and for grounding new researchers in crucial conceptual distinctions. Rabin et al. (2012) provide a comprehensive overview of terminology for the fields of dissemination and implementation science. Implementation processes are informed by fields as diverse as psychology, health-services research, decision-making research, organizational psychology, public health, business, economics, and engineering.

Social Work’s Implementation Gap

Despite social work’s traditional role in program design and management and its strong tradition of providing skilled supervision of direct service, social-work journals have published few empirical studies of implementation. However, they increasingly publish original studies testing the effectiveness of interventions and programs, as well as systematic reviews of such research. Those sources, coupled with the intervention research published in allied fields relevant to social-work practice, yield an increasingly rich “research pipeline” of evidence that can inform social-work services. However, research moves into practice at a slow pace; according to one estimate, it can take 17 years to move 14% of research evidence into real-world services (Balas & Boren, 2000). Thus, most fields have a backlogged research pipeline (Diner et al., 2007; Kleinman & Mold, 2009; Westfall, Mold, & Fagnan, 2007), with many interventions shown to be effective but not routinely or well implemented. The “implementation gap” in a given field can be measured by the “distance” between available evidence-based interventions and interventions delivered in typical care. A landmark study showed that Americans receive health care that is congruent with evidence-based, guideline standards only 56% of the time (McGlynn et al., 2003). Data on the implementation gap in social work are limited, largely because our field lacks a strong tradition of quality-of-care research (McMillen et al., 2005; Megivern et al., 2007; Proctor, Powell, & McGinnis, 2012).

Kohl and colleagues have examined the implementation gap in one area of social-work services: training for parents at risk for child maltreatment. The team examined parent-training services and their level of empirical support in community agencies staffed largely by master’s-level social workers. Of 35 identified treatment programs offered to families, only 11% were well-established empirically supported interventions (ESIs), and another 20% contained some hallmarks of empirically supported interventions. The evidence-based programs included parent–child interaction therapy (Capage, McNeil, Foote, & Eyberg, 1998), multisystemic therapy (Borduin et al., 1995), and the Incredible Years (Webster-Stratton, 1984, 1998). The study by Kohl, Schurer, and Bellamy (2009) reveals a sizable implementation gap, with most of the programs delivered lacking scientific validation.

Similar quality or implementation gaps are apparent in other fields where social workers deliver services. Evans, Koch, Brady, Meszaros, and Sadler (2013) found that only 19.3% of school mental-health professionals and 36.8% of community mental-health professionals working in Virginia’s schools and community mental-health centers reported using any evidence-based substance-use prevention programs. In mental health, where social workers have long delivered the bulk of services, only 40 to 50% of people with mental disorders receive any treatment (Kessler, Chiu, Demler, Merikangas, & Walters, 2005; Merikangas et al., 2011; Substance Abuse and Mental Health Services Administration, 2010), and of those receiving treatment, only a fraction receive what could be considered “quality” treatment (Wang et al., 2005).

The gap between what we know from scientific research and what is used in social-work policy, administration, and direct practice is a long-standing concern of social work (Mullen et al., 2008; Rosen, Proctor, & Staudt, 1999). Service-systems research in the social services should assess the delivery of evidence-based programs, policies, and services and identify sources of variation in their use. The provision of evidence-based services must be measured and documented at the population, organization, and provider levels (Kitson & Straus, 2009).

Theories and Frameworks for Implementation Research

Implementation research is informed by a number of conceptual models and frameworks. They serve to distinguish key components in implementation processes, identify key variables for measurement, and indicate critical relationships for testing. Tabak, Khoong, Chambers, and Brownson (2012) identify 62 different models, classify them as pertaining to dissemination or implementation, rate their “construct flexibility,” and categorize their focus as individual, organizational, community, or system.

Frameworks are particularly useful for identifying the “moving parts” or key elements in implementation processes. Proctor et al. (2009) proposed a model that distinguishes between evidence-based practices (the “what” to be implemented) and implementation strategies (the “how” for bringing those practices into new service settings). Damschroder and colleagues’ (2009) Consolidated Framework for Implementation Research (CFIR), derived from 19 different conceptual/theoretical models of dissemination or implementation, identifies five domains that are critical to successful implementation:


  • Intervention characteristics (evidentiary support, relative advantage, adaptability, trialability, and complexity);
  • The outer setting (patient needs and resources, organizational connectedness, peer pressure, external policy and incentives);
  • The inner setting (structural characteristics, networks and communications, culture, climate, readiness for implementation);
  • The characteristics of the individuals involved (knowledge, self-efficacy, stage of change, identification with organization, etc.); and
  • The process of implementation (planning, engaging, executing, reflecting, evaluating).

Similar to the CFIR but focused on outer and inner settings, Aarons, Hurlburt, and Horwitz (2011) provide a model specific to public social services. Their Exploration/Preparation/Implementation/Sustainment model emphasizes that implementation of evidence-based services in public service settings is influenced by factors within outer (policy, funding, and interorganizational environments) and inner (agency characteristics) contexts. The emphasis on interorganizational networks may be particularly important for social-work research because evidence-based interventions are often delivered within schools, prisons, and social-service agencies, such as divisions of aging, child-welfare agencies, homeless services, and rape crisis centers. Agencies may have formal or informal relationships with organizations offering complementary services, thus comprising a network context for implementing evidence-based interventions.

Some implementation models and frameworks inform evaluation of implementation processes. The Proctor et al. model (2009, 2011) distinguishes among three types of outcomes that should be evaluated in implementation science: implementation outcomes, service-system outcomes, and client outcomes. Implementation outcomes reflect the system’s response to an intervention to be implemented, including its acceptability, adoption, penetration, cost, fidelity of its use, and sustainability over time. The RE-AIM model by Glasgow, Vogt, and Boles (1999) is widely used to inform the evaluation of an intervention’s reach, effectiveness, adoption, implementation, and maintenance; a full description of the model is available on the RE-AIM website.

Stage or phase models posit that dissemination and implementation occur in a series of distinct phases, such as dissemination of information, consideration (for example, of acceptability, feasibility, appropriateness, cost), initial adoption, implementation, and sustainment or maintenance. The Department of Veterans Affairs’ QUERI initiative (Stetler, Mittman, & Francis, 2008) specifies a four-phase model spanning pilot projects, small clinical trials, regional implementation, and implementation on the national scale; Aarons et al. (2011) developed a four-phase model of exploration, adoption/preparation, active implementation, and sustainment; and Magnabosco (2006) distinguishes among preimplementation, initial implementation, and sustainability planning phases. An ethnographic study of clinicians (Palinkas et al., 2008), the bulk of whom were social workers, yielded a heuristic model emphasizing preimplementation determinants, short-term implementation, and long-term implementation.

The most widely cited theory within dissemination and implementation science is the diffusion of innovations theory (Dearing, 2009; Rogers, 2003). This theory is applicable to the adoption of any innovation—as which new practices are usually perceived—and identifies the factors that contribute to the rate of innovation spread. Bond, Drake, and Becker (2010) use diffusion theory in their identification of intervention characteristics that affect the likelihood of their use in routine services. McDonald, Graham, and Grimshaw (2004) present a hierarchy of theories and models that organizes theories by level and specifies their usefulness for implementation research. For example, they differentiate between grand or macro theories [for example, Rogers’ (2003) diffusion of innovations theory], midrange theories [for example, the transtheoretical model of change (Prochaska & Velicer, 1997)], and microtheories [for example, feedback intervention theory (Kluger & DeNisi, 1996)]. Although models, frameworks, and systems are generally at a higher level of abstraction than theories, the level of abstraction varies both between and within the categories of the hierarchy. McDonald et al. (2004) further differentiate between classical models of change that emphasize natural or passive change processes, such as Rogers’ (2003) diffusion of innovations theory, and planned models of change that specify central elements of active implementation efforts.

Conceptual models are useful in framing a study theoretically and providing a “big picture” of the hypothesized relationships among variables, whereas midrange theories can be more helpful in justifying the selection of specific implementation strategies and in specifying the mechanisms by which they may exert their effects.

Theory has been underutilized and underspecified in implementation research (Davies, Walker, & Grimshaw, 2010; The Improved Clinical Effectiveness through Behavioural Research Group [ICEBeRG], 2006; Michie, Fixsen, Grimshaw, & Eccles, 2009). For example, in a review of 235 implementation studies, fewer than 25% of the studies employed theory in any way, and only 6% were explicitly theory based (Davies et al., 2010). The absence of theory in implementation research has limited our ability to specify key contextual variables and to identify the precise mechanisms by which implementation strategies exert their effects. Proctor, Powell, Baumann, Hamilton, and Santens (2012) address the value and use of theories and models for writing grant proposals in implementation science and urge implementation researchers to use selected theories and models to shape their research designs and details of the approach. Davies et al. argue that theory is particularly important in selecting implementation strategies. All research striving for generalizable knowledge should be guided by and propose to test conceptual frameworks, models, and theories (ICEBeRG, 2006). Even studies that address only a subset of variables within a conceptual model must be framed conceptually so reviewers perceive the larger context (and body of literature) that a particular study proposes to inform.

Evidence-Based Interventions for Implementation

The purpose of implementation research is to guide the delivery of effective programs, policies, and services. Thus, central to implementation research are evidence-based interventions that remain underused or inaccessible while untested or less effective interventions are delivered to clients. As discussed previously, the size of the implementation gap—the gap between the services that “could be” (if evidence-based services were delivered) and the services that currently “are”—often influences momentum toward implementation, as does explication of the potential benefits from evidence-based services.

The strength, relevance, and fit of evidence.

The strength of the empirical evidence for a given intervention, program, or policy (Ebell et al., 2004; Oxman, 2004), a key part of “readiness,” can be assessed in several ways. Some fields of practice require programs to meet specific thresholds if they are to be deemed “evidence based” or “empirically supported” (Chambless et al., 1998; Roth & Fonagy, 2005; Weissman et al., 2006). For example, Chambless and colleagues suggest that interventions deemed efficacious should be shown to be superior to placebos or to another treatment in at least two between-group design experiments or in a large series of single-case-design experiments. Further, they state that the experiments must have been conducted with treatment manuals, the characteristics of the samples must have been clearly specified, and the effects must have been demonstrated by at least two different investigators or investigative teams. The strength of evidence for a given treatment can also be classified using the Cochrane Effective Practice and Organisation of Care criteria for levels of evidence, which consider randomized controlled trials, controlled clinical trials, time-series designs, and controlled before-and-after studies appropriate sources of evidence (Cochrane Effective Practice and Organisation of Care Group, 2002). Social-work researchers often weigh external generalizability as well, considering the relevance of community, client, provider, and setting context in determining effectiveness.

Implementation researchers who come to implementation research as program or treatment developers often draw on their own prior work for evidence. Other researchers can assess an intervention’s readiness for implementation by reviewing published literature reviews, preferably well-conducted systematic reviews and meta-analyses of randomized controlled trials (if available). At a minimum, “evaluability assessment” (Leviton, Khan, Rog, Dawkins, & Cotton, 2010) can help identify what changes or improvements are needed to optimize effectiveness given the context of the implementation effort.

Characteristics of interventions and their fit with the practice context influence their likelihood of adoption (Cain & Mittman, 2002; Isett et al., 2007; Soydan, 2008). For example, Rogers (2003) posited that a new program’s perceived relative advantage, its “trialability,” and its compatibility with current practice determine the ease of its uptake. Bond et al. (2010) identified “nine ideal features” of mental-health interventions: being well defined, reflecting client goals, consistency with societal goals, effectiveness, minimal side effects, positive long-term outcomes, reasonable cost, ease of implementation, and adaptability to diverse communities and client subgroups.

How adequate is the “supply” of evidence-based programs, policies, and services? Since the beginning of the 21st century, social-work researchers have increasingly engaged in developing and testing interventions. Another encyclopedia entry addresses evidence-based practice (Proctor, Powell, & McGinnis, 2012). Yet the supply, effectiveness, accessibility, and properties of interventions all bear on their dissemination and implementation, and thus are addressed in this entry as well. Indeed, interventions cannot be implemented in practice and delivered to clients who could benefit from them if they are not known to providers and other key decision makers who influence service delivery.

Making evidence easy to find.

The social-work field must facilitate dissemination of information about evidence-based social-work practices, including through published articles that consolidate and systematically review evidence of intervention effectiveness. Social-work journals have recently published systematic reviews of the effectiveness of interventions for such problems as truancy (Maynard, Tyson McCrea, Pigott, & Kelly, 2013; Sutphen, Ford, & Flaherty, 2010), adolescent cannabis use (Bender, Tripodi, Sarteschi, & Vaughn, 2011), cyberabuse of youth (Mishna, Cook, Saini, Wu, & MacFadden, 2011), and older adult depression (Ell, Aranda, Xie, Lee, & Chou, 2010), as well as for parent–child interaction therapy (Lanier et al., 2011), community practice interventions (Ohmer & Korr, 2006), and group care for youth (Lee, Bright, Svoboda, Fakunmoju, & Barth, 2011). Social-work professional societies and the social-work research community could further facilitate implementation by developing and disseminating consensus statements, practice guidelines (Proctor, 2007; Rosen & Proctor, 2003), and Cochrane or Campbell Collaboration reviews. Clearinghouses are another important source in making evidence available and accessible to professionals, decision makers, and other “end users” (Soydan, Mullen, Alexandra, Rehnman, & Li, 2010). For example, the California Evidence-Based Clearinghouse for Child Welfare provides ratings of the research evidence for child welfare–related programs, as well as information about implementation for programs shown to be supported by research evidence; a representative from each program rated as well supported or supported by research evidence was asked to provide information about implementation resources for the program. Treatment manuals that specify provider behaviors, the dose and timing of intervention components, and infrastructure requirements for empirically supported interventions also will facilitate implementation.
Treatment developers can design for dissemination and implementation at the front end, considering these factors, conducting research to document them, and highlighting them in dissemination messages.

Stakeholder Perspectives and Engagement in Implementation Research

Successful implementation of evidence-based interventions requires the engagement of a variety of stakeholders, whose preferences and priorities vary but must be addressed in implementation plans and processes. Stakeholders with an investment in implementation include treatment or program developers, researchers, administrators, providers, funders, community-based organizations, consumers, families, and perhaps legislators who shape reimbursement policies. Stakeholders are likely to vary in their knowledge, perceptions, and preferences for service delivery. For example, treatment or program developers typically prioritize fidelity to the model, as well as sustainability over time. Some treatments and programs are disseminated by purveyor organizations, which value scale-up and spread of the intervention and favor training and technical assistance as implementation strategies. Those who pay for services, whether private grantmakers or public funders, are likely to value efficiency, cost-effectiveness, and rapid return on investment. Those who lead or govern service organizations, such as boards of directors and agency directors, may want their organizations to implement programs that have identifiable sources of external funding; they also may value implementation of new programs for their potential to enhance “market niche” or their agency’s distinctiveness (Proctor et al., 2007). On the other hand, when agency budgets are constrained, agency leaders may be disinclined to adopt new programs and treatments that carry high training costs. Providers have been shown to value training in evidence-based practices that fit their clients’ needs, provide continuing-education opportunities, and are advanced beyond the “beginning level” (Powell, McMillen, Hawley, & Proctor, 2013).
Unlike pharmacology, where direct consumer marketing strives to increase patient demand for specific medications, the social services have paid too little attention to the potential role of clients to increase demand for new and effective programs and treatments (Megivern et al., 2007).

A key function of stakeholder engagement is ensuring or enhancing the appropriateness of the evidence-based interventions being implemented. Few culturally adapted evidence-based treatments attend explicitly to strategies for their implementation, and few implementation trials have adequately documented processes to ensure cultural adaptation, leading Cabassa and Baumann (2013) to call for a “two-way street” between the fields of implementation science and cultural adaptations. As Yancy, Glenn, Bell-Lewis, and Ford (2012) put it, the message of evidence-based interventions must be appropriate for the community. Such setting factors as agency and community capacity, willingness, and culture further affect the success with which evidence-based interventions are implemented in a culturally congruent way (Zayas, Bellamy, & Proctor, 2012).

Lindamer and colleagues (2009) describe three different approaches to stakeholder engagement in research, with varying levels of stakeholder participation in decision making. In “community-targeted” research, for example, stakeholders are involved in recruitment and in the dissemination of the results. Stakeholder analysis can be carried out to evaluate and understand stakeholders’ preferences and priorities (Shumway et al., 2003). The information gathered from stakeholder analysis can then be used to develop strategies for collaborating with stakeholders, to facilitate the implementation of decisions or organizational objectives, or to understand future policy directions (Brugha & Varvasovszky, 2000; Varvasovszky & Brugha, 2000).

Contextual Factors in Implementation

The 2011 NIH Training in Dissemination and Implementation Research in Health (NIH, 2011) used the working theme “it’s all about context” to emphasize the critical impact of context on implementation success. Indeed, contextual factors may be among the strongest influences on the success of implementation and therefore must be carefully addressed in implementation research (Dearing, 2008). Contextual factors are implicated in the extensive research on barriers to implementation, and the frameworks, conceptual models, and theories discussed above point to multiple levels or components of context.

Most conceptual models of implementation reflect the inherently multilevel nature of practice change. For example, Shortell’s (2004) classic multilevel model identifies the top, or policy level, an organizational level, a team level, and an individual level of activity in practice improvement change; each level reflects contextual factors that must be specified and addressed in implementation research.

The top level or policy context is included in Damschroder et al.’s (2009) “outer” level factors within the CFIR and is elaborated as the “policy ecology of implementation” by Raghavan, Bright, and Shadoin (2008), who highlight the ways in which the organizational, policy, and funding environment must be engaged for long-term sustainability of the implementation. Social policies impact quality of care, and a host of legal, reimbursement, and regulatory factors affect the adoption and sustainability of evidence-based interventions (Proctor et al., 2007). Implementation of a new intervention typically incurs a variety of additional costs, such as those for training and consultation, supervision required for fidelity of intervention delivery, and infrastructure changes associated with embedding standardized assessments into routine forms and databases. Some such costs can be met through external grant funding, which often serves as a positive “lever” for implementing new programs. Others must be borne by the agency, and their assessment may influence the likelihood, timing, or scale of the implementation undertaken. Most community-based settings operate under reimbursement mechanisms that rarely cover the costs of implementing new interventions (Raghavan, 2012). Researchers are advised to assess the extent to which the implementation efforts they propose to launch and study align with policy trends (Proctor, Powell, Baumann, et al., 2012). Economic analyses are extremely valuable in systematically examining the costs and cost returns associated with initial adoption, sustained implementation over time, and scale-up of empirically supported intervention delivery to meet community need (Raghavan, 2012).

The “middle” two levels of context in Shortell’s (2004) model comprise the service-providing organization and provider groups or teams within those organizations. A host of organizational constructs and processes are associated with successful program or intervention implementation, including organizational culture, organizational climate, leadership, organizational readiness for change, managerial relations, and absorptive capacity (Aarons, Horowitz, Dlugosz, & Ehrhart, 2012; Emmons, Weiner, Fernandez, & Tu, 2012). A number of scales capture a key aspect of context, the setting’s readiness or capacity for change (Funk, Champagne, Wiese, & Tornquist, 1991; Larson, 2004; Wensing & Grol, 2005). Implementation researchers should address the alignment of the implementation effort with setting or organizational priorities or with current or emergent policies. Implementation of evidence-based practices in social-service agencies is challenged by high staff turnover rates, as discussed by social-work researchers Glisson et al. (2008b) and Woltmann et al. (2008).

The team level of context is particularly germane to social work. Most social and human services are supervised or monitored by teams, such as units, programs, or supervisor–provider teams. Implementation of evidence-based interventions is facilitated by the commitment of practice leaders—perhaps the agency director, the board of directors, and senior managers—to providing services and treatments that are evidence based (Klein & Sorra, 1996). Supervisor commitment, knowledge, and skill are also important, given supervisors’ visibility, accessibility, and influence. Supervisors are key to ensuring that frontline providers acquire and receive support in using the intervention skills associated with evidence-based practices (Proctor, 2004). A growing body of research focuses on the kind of leadership required to implement evidence-based practices. Aarons’s full-range leadership model focuses on leader–member exchange and transformational leadership (Ragins & Kram, 2007; Redman, 2006), the degree to which a leader can inspire and motivate others to follow an ideal or a particular course of action (Laske, 2004).

The actual adoption and delivery of evidence-based interventions depends on the front-line practitioner. An organization’s plan to deliver new and more effective services may be limited or facilitated by provider attitudes toward adoption of new treatments, interventions, and practices. And professionals can only deliver programs and services that they know about, are trained in, and are skilled in delivering. Research demonstrates low rates of awareness and training among substance-abuse, school, and community mental-health professionals, including social workers. Several studies have examined awareness of evidence-based practice and practice guidelines among mental-health and social-service professionals. Mullen and Bacon (2003, 2004) found marked professional differences in awareness of practice guidelines. In contrast to psychiatrists, nearly all of whom had heard of and were aware of at least one specific practice guideline, few social workers were aware of guidelines to inform their practice. Although 18% of social workers said they had used a guideline, their understanding of guidelines was vague, and only 3 of 81 respondents could comment on their usefulness. Mullen and Bacon (2004) also found that social workers read and used research literature for practice decisions less frequently than psychiatrists and psychologists. In a study of public-sector mental-health clinicians and program managers serving children and families, Aarons (2004) also found “low awareness” of evidence-based practice. Evans et al. (2013) report that 45% of school mental-health professionals are unfamiliar with any evidence-based substance-use prevention programs. A major factor in social workers’ unfamiliarity with evidence-based practices, and thus their inability to deliver them, seems to be professional training.
According to national surveys of psychotherapy training in psychiatry, psychology, and social work, about two-thirds of social-work and professional clinical psychology programs do not require didactic training or clinical supervision in any evidence-based therapy (Bledsoe et al., 2007; Weissman et al., 2006).

In summary, policy-, organizational-, team-, and provider-level factors affect the success of implementation. Each of these contextual factors must be carefully assessed and leveraged or targeted through the use of implementation strategies (Proctor, 2004; Proctor et al., 2009).

Implementation Strategies

The successful implementation of evidence-based interventions requires two types of interventions or technologies: the evidence-based practices, programs, or treatments being implemented and a distinct technology for moving those practices into service system settings of care (Proctor et al., 2009). These latter technologies are the “implementation strategies.” Described as specified activities designed to put into practice an activity or program of known dimensions (Fixsen, Naoom, Blasé, Friedman, & Wallace, 2005), implementation strategies are defined as “deliberate and purposeful efforts to improve the uptake and sustainability of treatment interventions” (Proctor et al., 2009, p. 5). Although the assessment of implementation barriers is important in implementation research, the “rising bar” in the field of implementation science demands that investigators move beyond the study of barriers to research that generates knowledge about the implementation processes and specific strategies that can lead to sustained adoption and delivery of evidence-based interventions. Accordingly, the NIH has prioritized efforts to “identify, develop, and refine effective and efficient methods, structures, and strategies to disseminate and implement” innovations in health care (NIH, 2010).

The implementation of evidence-based interventions is a dynamic, complex process that occurs in the context of other, ongoing change. Accordingly, implementation strategies must address the challenges of the service system (for example, specialty mental health, schools, criminal justice system, health settings) and practice settings (community agency, national EAPs, office-based practice), as well as the human capital challenge of staff training and support. Moreover, properties of the evidence-based interventions themselves, such as their complexity and infrastructure demands, vary in ways that affect implementation and dictate features of the implementation strategies.

Sources of implementation strategies.

Although the number of identifiable evidence-based treatments clearly outstrips the number of evidence-based implementation strategies, the evidence base about strategies is growing. Over 40 reviews, lists, and “compilations” of implementation strategies have been published. Powell et al. (2012) compiled a “menu” of strategies for implementing clinical innovations in health and mental health. Based on a review of 205 sources, they list and define 68 implementation strategies, grouped by six key processes: planning, educating, financing, restructuring, managing quality, and attending to policy context (Powell et al., 2012). Strategies include methods for provider training and decision support; intervention-specific tool kits, checklists, and algorithms; formal practice protocols and guidelines; learning collaboratives; business strategies (for example, the Deming/Shewhart Plan–Do–Check–Act cycle) and organizational interventions from management science; and economic, fiscal, and regulatory contingencies. Strategies have been characterized broadly as “top down/bottom up,” “push/pull,” or “carrot/stick” tactics—typically combined in “package” approaches (Proctor et al., 2009).

Specification of implementation strategies.

To guide implementation practice, the literature on implementation strategies must provide more specific descriptions of the strategies, their “dosage,” and variation over time (Proctor, Powell, & McMillen, 2013). Moreover, the theory underlying their use and their precise components and mechanism of change must be specified. Training, coaching, and tools such as manuals and logic models are needed for implementation strategies, in much the same way as they are required for evidence-based interventions. Mittman (2010) adds that implementation strategies should be multifaceted or multilevel (if appropriate); robust or readily adaptable; feasible and acceptable to stakeholders; compelling, saleable, trialable, and observable; sustainable; and scalable (Implementation Research Institute, 2013). Because implementers are interested in budget impact, studies of the incremental costs and cost-effectiveness of implementation strategies are needed as well (Mauskopf et al., 2007; Raghavan, 2012). Although budget impact is a key concern to administrators and some funding agencies require budget impact analysis, implementation science to date suffers a dearth of economic evaluations from which to draw (Eccles et al., 2009; Glasgow, 2009).

The ARC (availability, responsiveness, and continuity) model is a manualized, multicomponent, team-based organizational implementation strategy developed and tested by social-work researcher Charles Glisson (Glisson & Schoenwald, 2005; Glisson et al., 2010). Designed to improve the social context in which services are provided, the ARC model has been used and found effective in social-service, child welfare, health, and mental-health settings. Also highly relevant to social work is an external facilitation implementation strategy used to increase the use of cognitive–behavioral therapy within Veterans Affairs clinics (Kauth et al., 2010). This strategy proved to be low cost, feasible, and scalable.

Effectiveness of implementation strategies.

The empirical evidence for the effectiveness of multifaceted strategies has been mixed. Early reviews reported multifaceted strategies as most effective (Solberg et al., 2000; Wensing, Weijden, & Grol, 1998), but a systematic review of 235 implementation trials by Grimshaw et al. (2004) found no relationship between the number of component interventions and the effects of multifaceted interventions. However, Wensing, Bosch, and Grol (2009) note that although multifaceted interventions were assumed to address multiple barriers to change, many focus on only one barrier. For instance, providing training and consultation is a multifaceted implementation strategy; however, it primarily serves to increase provider knowledge and does not address other implementation barriers. Given the multilevel change required for most implementation efforts, and consistent with conceptual models such as the CFIR, multifaceted strategies may prove more effective when they target different components of context and their associated barriers or challenges. For example, multifaceted strategies might address both provider knowledge and organizational context, rather than targeting only provider-level factors (Wensing et al., 2009). Magnabosco (2006), Rapp et al. (2008), and Bond, Drake, McHugo, Rapp, and Whitley (2009) describe implementation strategies that emerged during the course of a large-scale implementation effort involving five different empirically supported treatments. Although the effectiveness of these strategies has not been rigorously researched, their anecdotal evidence suggests the need for multifaceted implementation strategies that flexibly address barriers to change at different levels and stages of implementation.

Implementation strategies should be chosen for their ability to leverage facilitative aspects of the practice context and their potential for mitigating identified barriers. The actual implementation effort is likely to require careful evaluation and continual adaptation and adjustment based on give and take among intervention developers, service system researchers, organizations, providers, and consumers (Aarons & Palinkas, 2007).

The establishment of implementation technical assistance centers is a promising development, offering potential adopters of evidence-based interventions support, guidance, and sometimes resources. The Veterans Administration’s (VA) Center for Implementation Practice and Research Support offers education and technical assistance for VA implementation practice and to VA implementation researchers. The center also facilitates better linkages and partnerships between VA implementation researchers and VA clinical practice and policy leaders. The Children’s Bureau, within the Department of Health and Human Services, funds five regional implementation centers within its Training and Technical Assistance Network to help states and tribes improve the quality of child welfare services, including, in some cases, the implementation of evidence-based programs. Some states offer technical assistance for implementing evidence-based treatment, as does the Substance Abuse and Mental Health Services Administration through its National Registry of Evidence-Based Programs and Practices “learning center” and Tribal Training and Technical Assistance Center. Not all the programs addressed within implementation resource centers are evidence based, and evaluating the effectiveness of their various approaches to technical assistance, training, and support remains an important priority for implementation research.

Measurement and Design in Implementation Research

Implementation science encompasses a broad range of constructs from a variety of disciplines. Given its relatively early stage of development, the field has yet to achieve agreement on definitions of constructs across different studies, fields, authors, or research groups. Yet methods are rapidly developing, many of them highly innovative.

Implementation outcomes.

Outcomes for dissemination and implementation science have been distinguished and conceptualized (Brownson et al., 2012; Proctor et al., 2009). Two current initiatives seek to advance the harmonization, standardization, and rigor of measurement in implementation science: the U.S. National Cancer Institute’s Grid-Enabled Measures portal (Grid-Enabled Measures Database, 2013) and the Comprehensive Review of Dissemination and Implementation Science Instruments effort supported by the Seattle Implementation Research Conference at the University of Washington (Instrument Review Project, 2013). Both initiatives engage the implementation science research community to enhance the quality and harmonization of measures. Their respective websites are being populated with measures and ratings, affording grant writers an invaluable resource in addressing a key methodological challenge.

Consistent with the emphasis on training as an implementation strategy, one of the most frequently assessed implementation outcomes is the fidelity with which an evidence-based intervention is delivered. Fidelity measures are often specific to a particular intervention. Social-work research has emphasized the implementation outcome of provider attitudes toward evidence-based interventions, or the acceptability of evidence-based services. Aarons’ (2004) Evidence-Based Practice Attitude Scale is a widely used, standardized measure for this construct. Although primarily a measure of acceptability, certain questions also reflect provider views of the appropriateness of, and intention to adopt, evidence-based interventions.

Measures of implementation context include several constructs linked to the CFIR, as well as several organizational factors. Summaries of organizational factors and associated measurement issues are provided by Aarons et al. (2012) and Kimberly and Cook (2008). Glisson et al.’s (2008a) Organizational Social Context instrument is a psychometrically sound measure of organizational culture, climate, and work attitudes that has been used primarily in children’s mental-health and child welfare settings.

Penetration, or the integration of a treatment within a service setting and its programs, therapists, and clients, is one of the most important and straightforward implementation outcomes to measure. The numerator is simply the number of providers who deliver a given evidence-based intervention, or the number of clients who receive it, whereas the denominator can be the number of providers who were trained in or expected to deliver the intervention or the number of clients who were eligible to receive it. From a service-system perspective, the construct is also similar to “reach” in the RE-AIM framework (Glasgow et al., 1999).
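The numerator-over-denominator calculation described here can be sketched in a few lines of code. This is a hypothetical illustration only; the function name and the agency figures below are invented for the example, not drawn from the source.

```python
def penetration_rate(n_actual, n_potential):
    """Implementation outcome 'penetration'.

    n_actual: providers delivering (or clients receiving) the intervention.
    n_potential: providers trained in or expected to deliver it,
    or clients eligible to receive it.
    """
    if n_potential <= 0:
        raise ValueError("n_potential must be positive")
    return n_actual / n_potential

# Hypothetical agency: 12 of 30 trained clinicians deliver the intervention,
# and 45 of 180 eligible clients receive it.
print(penetration_rate(12, 30))   # provider-level penetration: 0.4
print(penetration_rate(45, 180))  # client-level penetration: 0.25
```

Note that the same construct yields different rates depending on whether the provider or the client is chosen as the unit of analysis, which is why studies should report which denominator they used.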

Research designs.

Apropos the complexity of its research questions, methods for implementation science have advanced dramatically since 2009. Recent publications (Landsverk et al., 2012) highlight design innovations, including creative alternatives or adaptations to randomization. These include practical, pragmatic, and preference trials; interrupted time series; dynamic wait-list and stepped-wedge designs; need-based allocation; restricted cohort designs; and the use of instrumental variables to achieve needed controls. Although randomized controlled trials have been the gold standard for clinical research (see Cochrane Effective Practice and Organisation of Care Group, 2002) because these designs have high internal validity, they often have little external validity (generalizability) to real-world settings. The need for generalizability of evidence-based interventions has been the driving force in examining alternative research designs for implementation studies. Concato, Shah, and Horwitz (2000) challenge the traditional hierarchy of research designs and provide evidence that well-designed observational studies produce results similar to those of randomized controlled trials of the same intervention.

“Hybrid” designs that combine testing an intervention with testing implementation processes (Curran, Bauer, Mittman, Pyne, & Stetler, 2012) have several advantages. Type-one hybrid designs enable gathering (even preliminary) information about implementation while still testing the evidence-based treatment’s effectiveness, thereby speeding scientific progress on the translational science continuum. Type-two hybrid designs balance the emphasis on understanding an intervention’s effectiveness and the success of its implementation. Type-three hybrid designs focus primarily on implementation but support the continued acquisition of information about an intervention’s effectiveness, sometimes in a new service delivery setting that differs from the one in which its effectiveness was originally tested.

Applying these designs in real-world research and negotiating the recruitment, allocation, and analysis implications remain extremely challenging. Implementation processes operate at multiple levels of health-providing systems and have proven to be both complex and nonlinear. Whereas conceptual models of implementation processes include multilevel, ecological variables, many researchers are still trained in “methods and analytic techniques that ignore context, focus on single snapshots in time, assume processes only operate at a single level of analysis, and are not dynamic” (Luke, 2012, p. 169). Many of the conceptual and measurement challenges can best be addressed through network analysis, decision science, and microsimulation modeling (Holmes, Finegood, Riley, & Best, 2012; Luke, 2012; Valente, 1995, 2012). Full understanding of complex practice-change processes usually demands mixing qualitative and quantitative methods and data (Palinkas et al., 2011a, 2011b; Saldana, Chamberlain, Wang, & Brown, 2012).

Partnered research.

Implementation research is necessarily multidisciplinary, requiring a convergence of perspectives and collaboration among treatment developers with intervention expertise, service-system researchers with setting expertise, and quality-improvement researchers who bring conceptual frameworks and methodological expertise for the multilevel strategies required to change systems, organizations, and providers. Because implementation research occurs in the “real world” of community-based settings of care, implementation researchers also must partner with community stakeholders, particularly agencies (Chambers & Azrin, 2013; Proctor & Rosen, 2008). Although knowledge of partnered research is evolving, much of that literature remains anecdotal, case-study, or theoretical, with collaboration and partnership broadly defined ideals. Recent notable advances in the mental-health field include Sullivan et al.’s (2005) innovative mental-health clinical partnership program within the Veterans Health Administration, designed to enhance the research capacity of clinicians and the clinical collaborative skills of researchers, and Jones and Wells’ (2007) community-participatory partnered research to extend depression care to diverse communities. Social-work researchers Hasche et al. (2013) illustrate the process of adapting an evidence-based treatment in partnership with a social-service setting to facilitate the treatment’s implementation in a new service setting. Borkovec (2004) argues for developing practice research networks, providing an infrastructure for practice-based research and more efficient integration of research and practice. Powell et al. (2013) leverage a practice-based research network of Medicaid mental-health providers to learn more about their interests and willingness to participate in evidence-based practice training.
In “community-based” research, stakeholders participate in selecting research topics, but researchers make final decisions on study design and methods. “Community-driven” or community-based participatory research engages stakeholders in all aspects of the research, as illustrated by Jones and Wells (2007). Community-based participatory research models are advocated for their potential to decrease the gap between research and practice by reducing implementation and dissemination barriers (Chen, Diaz, Lucas, & Rosenthal, 2010; Kerner, Rimer, & Emmons, 2005; Wallerstein & Duran, 2010), enhancing external validity, and increasing sustainability of the programs and services being implemented (Kerner et al., 2005). Lobb and Colditz (2013) give specific examples of improving implementation of evidence-based interventions through participatory research. Partnerships among intervention and services researchers, policy makers, administrators, providers, and consumers hold great promise for bridging the oft-cited gap between research and practice. Implementation research requires a unique understanding and use of such partnerships.


Analytic approaches.

Key challenges in crafting the analysis plan for implementation studies include: (a) determining the unit of analysis, given the “action” occurring at the individual, team, organizational, and policy levels; (b) shaping mediational analyses given the role of contextual variables; and (c) developing and using appropriate methods for characterizing the speed, quality, and degree of implementation. Any implementation study’s design, assessment tools, analytic strategies, and analytic tools must address these challenges in some manner (Landsverk et al., 2012). The testing of implementation strategies or processes often begins with preliminary data from small-scale pilot studies to examine feasibility and assess sources of variation. However, the magnitude of effects in small pilots should be judged by clinical relevance (Landsverk et al., 2012), given the uncertainty of power calculations from small-scale studies (Kraemer, Mintz, Noda, Tinklenberg, & Yesavage, 2006).

Future Directions

As recently as 2005, implementation science still needed a better understanding of the barriers and facilitators to adopting evidence-based care. Now the most pressing research questions include how to scale up evidence-based interventions for wider reach and impact, how to address multiple levels of change in service systems, and how large systems of care can adopt and sustain multiple treatment innovations, given the multiple problems faced by nearly all social-service clients. In his foreword to Dissemination and Implementation Research in Health, David Chambers wrote, “Scientifically, we have yet to progress to a long-term view of dissemination and implementation. The next generation of studies to get us there will address the sustainable integration of interventions within dynamic health care delivery systems and the implementation of evidence-based systems of care rather than the individual intervention” (Chambers, 2012, p. ix). Implementation research will focus less on adoption of individual treatments and more on how systems can adopt, adapt, and sustain multiple new treatments—many of which are yet undeveloped. Research will seek to identify, test (comparatively, and with cost data), and understand implementation strategies. Studies will discover how implementers can select and deploy strategies that best fit their system’s context.

Such questions require more robust theory, including new theories of sustainability, scale-up, and strategy effectiveness that include moderator and mediator effects of service delivery context. Studies are needed that capture stages of implementation processes and their completion, implementation costs, and how implementers wrestle with sustaining, adopting, and deadopting evidence-based interventions in the face of new evidence (Glasgow et al., 2012; Wiltsey Stirman et al., 2012). Given the many players in practice change, researchers will need better training in engaging and capturing the preferences and priorities of stakeholders (Meissner et al., 2013; Pangaea Global AIDS Foundation, 2009). The field will no doubt explore how to better use technology to support and track change. In short, tomorrow’s implementation researchers will confront different priority questions, draw on a more robust knowledge base, and use methods that are only emerging today.

Social workers, who Brekke et al. (2007) argue are “ideally positioned” to influence a national translational agenda, have many important roles in conducting and advancing the state of implementation research. Social work needs better documentation of the quality of care received by communities and clients, a stronger repertoire of evidence-based interventions, stronger and more durable partnerships with stakeholders, and more effectiveness, cost, and comparative effectiveness research on implementation strategies. A return on the investment of the billions of dollars spent on the development and testing of programs and interventions requires commensurate investment focused on advancing the science of implementation. Moreover, training support is needed for implementation researchers as well as for implementation practice.

If implementation science is to impact practice and help resolve the large gaps in the social services, new investigators must be trained to pose and pursue more demanding, more relevant, and more challenging research questions. Training for implementation research is available through three NIH-funded initiatives. The Implementation Research Institute, led by a social-work researcher and based in a school of social work, provides two years of mentored training in implementation research for mental health (Proctor, Landsverk, et al., 2013). The Training in Dissemination and Implementation Research in Health conference provides one-week immersion training in implementation research across multiple fields of health (Meissner et al., 2013). Social workers are among the conference alumni. A program to be launched in 2014, also based at a school of social work, will provide NIH-funded training for dissemination and implementation research in cancer.


  • Aarons, G. A. (2004). Mental health provider attitudes toward adoption of evidence-based practice: The Evidence-Based Practice Attitude Scale (EBPAS). Mental Health Services Research, 6(2), 61–74. doi:10.1023/B:MHSR.0000024351.12294.65
  • Aarons, G. A., Horowitz, J. D., Dlugosz, L. R., & Ehrhart, M. G. (2012). The role of organizational processes in dissemination and implementation research. In: R. C. Brownson, G. A. Colditz, & E. K. Proctor (Eds.), Dissemination and implementation research in health: Translating science to practice (pp. 128–153). New York, NY: Oxford University Press.
  • Aarons, G. A., Hurlburt, M., & Horwitz, S. (2011). Advancing a conceptual model of evidence-based practice implementation in public service sectors. Administration and Policy in Mental Health and Mental Health Services Research, 38(1), 4–23. doi:10.1007/s10488-010-0327-7
  • Aarons, G. A., & Palinkas, L. A. (2007). Implementation of evidence-based practice in child welfare: Service provider perspectives. Administration and Policy in Mental Health and Mental Health Services Research, 34(4), 411–419. doi:10.1007/s10488-007-0121-3
  • Balas, E. A., & Boren, S. A. (2000). Managing clinical knowledge for health care improvement. In Yearbook of medical informatics 2000: Patient-centered systems (pp. 65–70). Stuttgart, Germany: Schattauer.
  • Bender, K., Tripodi, S. J., Sarteschi, C., & Vaughn, M. G. (2011). A meta-analysis of interventions to reduce adolescent cannabis use. Research on Social Work Practice, 21(2), 153–164. doi:10.1177/1049731510380226
  • Bledsoe, S. E., Weissman, M. M., Mullen, E. J., Ponniah, K., Gameroff, M. J., Verdeli, H., et al. (2007). Empirically supported psychotherapy in social work training programs. Research on Social Work Practice, 17, 449–455.
  • Bond, G. R., Drake, R., & Becker, D. (2010). Beyond evidence-based practice: Nine ideal features of a mental health intervention. Research on Social Work Practice, 20(5), 493–501. doi:10.1177/1049731509358085
  • Bond, G. R., Drake, R. E., McHugo, G. J., Rapp, C. A., & Whitley, R. (2009). Strategies for improving fidelity in the National Evidence-based Practices Project. Research on Social Work Practice, 19(5), 569–581. doi:10.1177/1049731509335531
  • Borkovec, T. D. (2004). Research in training clinics and practice research networks: A route to the integration of science and practice. Clinical Psychology: Science and Practice, 11, 211–215. doi:10.1093/clipsy.bph073
  • Borduin, C. M., Mann, B. J., Cone, L. T., Henggeler, S. W., Fucci, B. R., Blaske, D. M., et al. (1995). Multisystemic treatment of serious juvenile offenders: Long-term prevention of criminality and violence. Journal of Consulting and Clinical Psychology, 63, 569–578. doi:10.1037/0022-006X.63.4.569
  • Brekke, J. S., Ell, K., & Palinkas, L. A. (2007). Translational science at the National Institute of Mental Health: Can social work take its rightful place? Research on Social Work Practice, 17(1), 123–133. doi:10.1177/1049731506293693
  • Brownson, R. C., Colditz, G. A., & Proctor, E. K. (Eds.). (2012). Dissemination and implementation research in health: Translating science to practice. New York, NY: Oxford University Press.
  • Brugha, R., & Varvasovszky, Z. (2000). Stakeholder analysis: A review. Health Policy Plan, 15, 239–246.
  • Cabassa, L. J., & Baumann, A. B. (2013). A two-way street: Bridging implementation science and cultural adaptations of mental health treatments. Implementation Science, 8, 90.
  • Cain, M., & Mittman, R. (2002). Diffusion of innovation in health care. California HealthCare Foundation. Retrieved June 7, 2010, from
  • Capage, L. C., McNeil, C. B., Foote, R., & Eyberg, S. M. (1998). Parent–child interaction therapy: An effective treatment for young children with conduct problems. The Behavior Therapist, 21, 137–138.
  • Chambers, D. (2012). Foreword. In R. C. Brownson, G. A. Colditz, & E. K. Proctor (Eds.), Dissemination and implementation research in health: Translating science to practice (pp. vii–ix). New York, NY: Oxford University Press.
  • Chambers, D. A., & Azrin, S. T. (2013). Research and services partnerships: Partnership: A fundamental component of dissemination and implementation research. Psychiatric Services, 64(6), 509–511. doi:10.1176/
  • Chambless, D. L., Baker, M. J., Baucom, D. H., Beutler, L. E., Calhoun, K. S., Crits-Christoph, P., et al. (1998). Update on empirically validated therapies, II. The Clinical Psychologist, 51, 3–16.
  • Chen, P. G., Diaz, N., Lucas, G., & Rosenthal, M. S. (2010). Dissemination of results in community-based participatory research. American Journal of Preventative Medicine, 39(4), 372–378.
  • Cochrane Effective Practice and Organisation of Care Group. (2002). Data collection checklist. EPOC measures for review authors. Retrieved from
  • Concato, J., Shah, N., & Horwitz, R. I. (2000). Randomized, controlled trials, observational studies, and the hierarchy of research designs. New England Journal of Medicine, 342(25), 1887–1892. doi:10.1056/NEJM200006223422507
  • Curran, G. M., Bauer, M., Mittman, B., Pyne, J. M., & Stetler, C. (2012). Effectiveness–implementation hybrid designs: Combining elements of clinical effectiveness and implementation research to enhance public health impact. Medical Care, 50(3), 217. doi:10.1097/MLR.0b013e3182408812
  • Damschroder, L. J., Aron, D. C., Keith, R. E., Kirsh, S. R., Alexander, J. A., & Lowery, J. C. (2009). Fostering implementation of health services research findings into practice: A consolidated framework for advancing implementation science. Implementation Science, 4(50), 1–15.
  • Davies, P., Walker, A. E., & Grimshaw, J. M. (2010). A systematic review of the use of theory in the design of guideline dissemination and implementation strategies and interpretation of the results of rigorous evaluations. Implementation Science, 5(14), 1–6.
  • Dearing, J. W. (2008). Evolution of diffusion and dissemination theory. Journal of Public Health Management and Practice, 14, 99–108.
  • Dearing, J. W. (2009). Applying diffusion of innovation theory to intervention development. Research on Social Work Practice, 19(5), 503–518. doi:10.1177/1049731509335569
  • Diner, B. M., Carpenter, C. R., O’Connell, T., Pang, P., Brown, M. D., Seupaul, R. A., et al. (2007). Graduate medical education and knowledge translation: Role models, information pipelines, and practice change thresholds. Academic Emergency Medicine, 14(11), 1008–1014. doi:10.1111/j.1553-2712.2007.tb02381.x
  • Ebell, M. H., Siwek, J., Weiss, B. D., Woolf, S. H., Susman, J., Ewigman, B., et al. (2004). Strength of recommendation taxonomy (SORT): A patient-centered approach to grading evidence in the medical literature. Journal of the American Board of Family Medicine, 17(1), 59–67. doi:10.3122/jabfm.17.1.59
  • Eccles, M. P., Armstrong, D., Baker, R., Cleary, K., Davies, H., Davies, S., et al. (2009). An implementation research agenda. Implementation Science, 4, 1–7.
  • Ell, K., Aranda, M. P., Xie, B., Lee, P. J., & Chou, C. P. (2010). Collaborative depression treatment in older and younger adults with physical illness: Pooled comparative analysis of three randomized clinical trials. American Journal of Geriatric Psychiatry, 18(6), 520–530. doi:10.1097/JGP.0b013e3181cc0350
  • Emmons, K., Weiner, B., Fernandez, M., & Tu, S. P. (2012). Systems antecedents for dissemination and implementation: A review and analysis of measures. Health Education and Behavior, 39(1), 87–105. doi:10.1177/1090198111409748
  • Evans, S. W., Koch, R., Brady, C., Meszaros, P., & Sadler, J. (2013). Community and school mental health professionals’ knowledge and use of evidence based substance use prevention programs. Administration and Policy in Mental Health and Mental Health Services Research, 40(4), 319–330.
  • Fixsen, D. L., Naoom, S. F., Blase, K. A., Friedman, R. M., & Wallace, F. (2005). Implementation research: A synthesis of the literature. FMHI Publication No. 231. Tampa: University of South Florida, Louis de la Parte Florida Mental Health Institute, National Implementation Research Network. Retrieved from
  • Funk, S. G., Champagne, M. T., Wiese, R. A., & Tornquist, E. M. (1991). BARRIERS: The barriers to research utilization scale. Applied Nursing Research, 4(1), 39–45.
  • Glasgow, R. E. (2009). Critical measurement issues in translational research. Research on Social Work Practice, 19(5), 560–568. doi:10.1177/1049731509335497
  • Glasgow, R. E., Vinson, C., Chambers, D., Khoury, M. J., Kaplan, R. M., & Hunter, C. (2012). National Institutes of Health approaches to dissemination and implementation science: Current and future directions. American Journal of Public Health, 102(7), 1274–1281. doi:10.2105/AJPH.2012.300755
  • Glasgow, R. E., Vogt, T. M., & Boles, S. M. (1999). Evaluating the public health impact of health promotion interventions: The RE-AIM framework. American Journal of Public Health, 89(9), 1322–1327. doi:10.2105/AJPH.89.9.1322
  • Glisson, C., Landsverk, J., Schoenwald, S., Kelleher, K., Hoagwood, K. E., Mayberg, S., et al. (2008a). Assessing the organizational social context (OSC) of mental health services: Implications for research and practice. Administration and Policy in Mental Health and Mental Health Services Research, 35(1–2), 98–113. doi:10.1007/s10488-007-0148-5
  • Glisson, C., & Schoenwald, S. K. (2005). The ARC organizational and community intervention strategy for implementing evidence-based children’s mental health treatments. Mental Health Services Research, 7(4), 243–259. doi:10.1007/s11020-005-7456-1
  • Glisson, C., Schoenwald, S. K., Hemmelgarn, A. L., Green, P. D., Dukes, D., Armstrong, K. S., et al. (2010). Randomized trial of MST and ARC in a two-level evidence-based treatment implementation strategy. Journal of Consulting and Clinical Psychology, 78(4), 537–550. doi:10.1037/a0019160
  • Glisson, C., Schoenwald, S. K., Kelleher, K., Landsverk, J., Hoagwood, K. E., Mayberg, S., et al. (2008b). Therapist turnover and new program sustainability in mental health clinics as a function of organizational culture, climate, and service structure. Administration and Policy in Mental Health and Mental Health Services Research, 35(1–2), 124–133. doi:10.1007/s10488-007-0152-9
  • Grid-enabled measures database. (2013). Retrieved October 21, 2013, from
  • Grimshaw, J. M., Thomas, R. E., MacLennan, G., Fraser, C., Ramsay, C. R., Vale, L., et al. (2004). Effectiveness and efficiency of guideline dissemination and implementation strategies. Health Technology Assessment, 8(6), 1–72.
  • Hasche, L., Lenze, S., Brown, T., Lawrence, L., Nickel, M., Morrow-Howell, N., et al. (2013). Adapting collaborative depression care for public community long-term care: Using research–practice partnerships. Administration and Policy in Mental Health and Mental Health Services Research. Advance online publication. doi:10.1007/s10488-013-0519-z
  • Holmes, B. J., Finegood, D. T., Riley, B. L., & Best, A. (2012). Systems thinking in dissemination and implementation research. In R. C. Brownson, G. Colditz, & E. K. Proctor (Eds.), Dissemination and implementation research in health: Translating science to practice (pp. 175–191). New York, NY: Oxford University Press.
  • Implementation Research Institute. (2013). Retrieved October 31, 2013, from
  • The Improved Clinical Effectiveness through Behavioural Research Group (ICEBeRG). (2006). Designing theoretically-informed implementation interventions. Implementation Science, 1(4), 1–8.
  • Instrument review project: A comprehensive review of dissemination and implementation science instruments (SIRC). (2013). Retrieved May 31, 2013, from
  • Isett, K. R., Burnam, M. A., Coleman-Beattie, B., Hyde, P. S., Morrissey, J. P., Magnabosco, J., et al. (2007). The state policy context of implementation issues for evidence-based practices in mental health. Psychiatric Services, 58(7), 914–921.
  • Jones, L., & Wells, K. (2007). Strategies for academic and clinician engagement in community-participatory partnered research. Journal of the American Medical Association, 297(4), 407–410.
  • Kauth, M. R., Sullivan, G., Blevins, D., Cully, J. A., Landes, R. D., Said, Q., et al. (2010). Employing external facilitation to implement cognitive behavioral therapy in VA clinics: A pilot study. Implementation Science, 5(75), 1–11. doi:10.1186/1748-5908-5-75
  • Kerner, J., Rimer, B., & Emmons, K. (2005). Dissemination research and research dissemination: How can we close the gap? Health Psychology, 24(5), 443–446. doi:10.1037/0278-6133.24.5.443
  • Kessler, R. C., Chiu, W. T., Demler, O., Merikangas, K. R., & Walters, E. E. (2005). Prevalence, severity, and comorbidity of twelve-month DSM-IV disorders in the National Comorbidity Survey Replication (NCS-R). Archives of General Psychiatry, 62(6), 617–627.
  • Kimberly, J. R., & Cook, J. M. (2008). Organizational measurement and the implementation of innovations in mental health services. Administration and Policy in Mental Health and Mental Health Services Research, 35(1–2), 11–20. doi:10.1007/s10488-007-0143-x
  • Kitson, A., & Straus, S. E. (2009). Identifying the knowledge-to-action gaps. In S. Straus, J. Tetroe, & I. D. Graham (Eds.), Knowledge translation in health care: Moving from evidence to practice (pp. 60–72). Hoboken, NJ: Wiley–Blackwell.
  • Klein, K. J., & Sorra, J. S. (1996). The challenge of innovation implementation. Academy of Management Review, 21(4), 1055–1080. doi:10.5465/AMR.1996.9704071863
  • Kleinman, M. S., & Mold, J. W. (2009). Defining the components of the research pipeline. Clinical and Translational Science, 2(4), 312–314. doi:10.1111/j.1752-8062.2009.00119.x
  • Kluger, A. N., & DeNisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119, 254–284.
  • Kohl, P. L., Schurer, J., & Bellamy, J. L. (2009). The state of parent training: Program offerings and empirical support. Families in Society, 90(3), 247–254. doi:10.1606/1044-3894.3894
  • Kraemer, H. C., Mintz, J., Noda, A., Tinklenberg, J., & Yesavage, J. A. (2006). Caution regarding the use of pilot studies to guide power calculations for study proposals. Archives of General Psychiatry, 63(5), 484–489. doi:10.1001/archpsyc.63.5.484
  • Landsverk, J., Hendricks Brown, C., Chamberlain, P., Palinkas, L., Oghihara, M., Czaja, S., et al. (2012). Design and analysis in dissemination and implementation research. In R. C. Brownson, G. A. Colditz, & E. K. Proctor (Eds.), Dissemination and implementation research in health: Translating science to practice (pp. 225–260). New York, NY: Oxford University Press.
  • Lanier, P., Kohl, P. L., Benz, J., Swinger, D., Moussette, P., & Drake, B. (2011). Parent–child interaction therapy in a community setting: Examining outcomes, attrition and treatment setting. Research on Social Work Practice, 21(6), 689–698. doi:10.1177/1049731511406551
  • Larson, E. (2004). A tool to assess barriers to adherence to hand hygiene guideline. American Journal of Infection Control, 32, 48–51.
  • Laske, O. (2004). Can evidence based coaching increase ROI? International Journal of Evidence Based Coaching and Mentoring, 2(2), 41–53.
  • Lee, B. R., Bright, C. L., Svoboda, D. V., Fakunmoju, S., & Barth, R. P. (2011). Outcomes of group care for youth: A review of comparative studies. Research on Social Work Practice, 21(2), 177–189. doi:10.1177/1049731510386243
  • Leviton, L. C., Khan, L. K., Rog, D., Dawkins, N., & Cotton, D. (2010). Evaluability assessment to improve public health policies, programs, and practices. Annual Review of Public Health, 31, 213–233. doi:10.1146/annurev.publhealth.012809.103625
  • Lindamer, L. A., Lebowitz, B., Hough, R. L., Garcia, P., Aguirre, A., Halpain, M. C., et al. (2009). Establishing an implementation network: Lessons learned from community-based participatory research. Implementation Science, 4(17), 1–7. doi:10.1186/1748-5908-4-17
  • Lobb, R., & Colditz, G. A. (2013). Implementation science and its application to population health. Annual Review of Public Health, 34, 235–251. doi:10.1146/annurev-publhealth-031912-114444
  • Luke, D. A. (2012). Viewing dissemination and implementation research through a network lens. In R. C. Brownson, G. A. Colditz, & E. K. Proctor (Eds.), Dissemination and implementation research in health: Translating science to practice (pp. 154–174). New York, NY: Oxford University Press.
  • Magnabosco, J. L. (2006). Innovations in mental health services implementation: A report on state-level data from the U.S. evidence-based practices project. Implementation Science, 1(13), 1–11. doi:10.1186/1748-5908-1-13
  • Mauskopf, J. A., Sullivan, S. D., Annemans, L., Caro, J., Mullins, C. D., Nuijten, M., et al. (2007). Principles of good practice for budget impact analysis: Report of the ISPOR task force on good research practices: Budget impact analysis. Value in Health, 10(5), 336–347.
  • Maynard, B. R., Tyson McCrea, K., Pigott, T. D., & Kelly, M. S. (2013). Indicated truancy interventions for chronic truant students: A Campbell systematic review. Research on Social Work Practice, 23(1), 5–21. doi:10.1177/1049731512457207
  • McDonald, K. M., Graham, I. D., & Grimshaw, J. (2004). Toward a theoretical basis for quality improvement interventions. In K. G. Shojania, K. M. McDonald, R. M. Watcher, & D. K. Owens (Eds.), Closing the quality gap: A critical analysis of quality improvement strategies (pp. 27–40). Rockville, MD: Agency for Healthcare Research and Quality.
  • McGlynn, E. A., Asch, S. M., Adams, J., Keesey, J., Hicks, J., DeCristofaro, A., et al. (2003). The quality of health care delivered to adults in the United States. New England Journal of Medicine, 348, 2635–2645. doi:10.1056/NEJMsa022615
  • McMillen, J. C., Proctor, E. K., Megivern, D., Striley, C., Cabassa, L., Munson, M., et al. (2005). Quality of care in the social services: Research agenda and methods. Social Work Research, 29(3), 181–191. doi:10.1093/swr/29.3.181
  • Megivern, D. A., McMillen, J. C., Proctor, E. K., Striley, C. W., Cabassa, L. J., & Munson, M. R. (2007). Quality of care: Expanding the social work dialogue. Social Work, 52(2), 115–124.
  • Meissner, H. I., Glasgow, R. E., Vinson, C. A., Chambers, D., Brownson, R. C., Green, L. W., et al. (2013). The US training institute for dissemination and implementation research in health. Implementation Science, 8(1), 12. doi:10.1186/1748-5908-8-12
  • Merikangas, K. R., He, J. P., Burstein, M., Swendsen, J., Avenevoli, S., Case, B., et al. (2011). Service utilization for lifetime mental disorders in U.S. adolescents: Results of the National Comorbidity Survey–Adolescent Supplement (NCS-A). Journal of the American Academy of Child and Adolescent Psychiatry, 50(1), 32–45. doi:10.1016/j.jaac.2010.10.006
  • Michie, S., Fixsen, D., Grimshaw, J. M., & Eccles, M. P. (2009). Specifying and reporting complex behaviour change interventions: The need for a scientific method. Implementation Science, 4(40), 1–6. doi:10.1186/1748-5908-4-40
  • Mishna, F., Cook, C., Saini, M., Wu, M. J., & MacFadden, R. (2011). Interventions to prevent and reduce cyber abuse of youth: A systematic review. Research on Social Work Practice, 21(1), 5–14. doi:10.1177/1049731509351988
  • Mittman, B. S. (2010). Criteria for peer review of D/I funding applications. St. Louis, MO: Implementation Research Institute.
  • Mullen, E. J., & Bacon, W. F. (2003). Practitioner adoption and implementation of practice guidelines and issues of quality control. In A. Rosen & E. K. Proctor (Eds.), Developing practice guidelines for social work intervention: Issues, methods, and research agenda (pp. 223–235). New York, NY: Columbia University Press.
  • Mullen, E. J., & Bacon, W. F. (2004). Implementation of practice guidelines and evidence-based treatment: A survey of psychiatrists, psychologists, and social workers. In A. R. Roberts & K. R. Yeager (Eds.), Evidence-based practice manual: Research and outcome measures in health and human services. New York, NY: Oxford University Press.
  • Mullen, E. J., Bledsoe, S. E., & Bellamy, J. L. (2008). Implementing evidence-based social work practice. Research on Social Work Practice, 18(4), 325–338. doi:10.1177/1049731506297827
  • National Institutes of Health (NIH). (2010). Dissemination and implementation research in health (R01). Retrieved from
  • National Institutes of Health (NIH). (2011, August 1–5). Training in Dissemination and Implementation Research in Health conference, Chapel Hill, NC.
  • National Institutes of Health (NIH). (2013). Dissemination and implementation research in health (R01). Retrieved January 30, 2013, from
  • Ohmer, M. L., & Korr, W. S. (2006). The effectiveness of community practice interventions: A review of the literature. Research on Social Work Practice, 16(2), 132–145. doi:10.1177/1049731505282204
  • Oxman, A. D. (2004). Grading quality of evidence and strength of recommendations. BMJ, 328, 1490–1494.
  • Palinkas, L. A., Aarons, G. A., Horwitz, S. M., Chamberlain, P., Hurlburt, M., & Landsverk, J. (2011a). Mixed method designs in implementation research. Administration and Policy in Mental Health and Mental Health Services Research, 38(1), 44–53. doi:10.1007/s10488-010-0314-z
  • Palinkas, L., Aarons, G., Horwitz, S. M., Chamberlain, P., Hurlburt, M. S., & Landsverk, J. (2011b). Mixed methods designs in mental health services research: A review. Psychiatric Services, 62(3), 255–263. doi:10.1176/
  • Palinkas, L. A., Schoenwald, S. K., Hoagwood, K., Landsverk, J., Chorpita, B. F., Weisz, J. R., & the Research Network on Youth Mental Health. (2008). An ethnographic study of implementation of evidence-based treatment in child mental health: First steps. Psychiatric Services, 59(7), 738–746. doi:10.1176/
  • Pangaea Global AIDS Foundation. (2009, July 23–24). Report from the expert consultation on implementation science research: A requirement for effective HIV/AIDS prevention and treatment scale-up. Cape Town, South Africa. Retrieved from
  • Powell, B. J., McMillen, J. C., Hawley, K. M., & Proctor, E. K. (2013). Mental health clinicians’ motivation to invest in training: Results from a practice-based research network survey. Psychiatric Services, 64(8), 816–818. doi:10.1176/
  • Powell, B. J., McMillen, J. C., Proctor, E. K., Carpenter, C. R., Griffey, R. T., Bunger, A. C., et al. (2012). A compilation of strategies for implementing clinical innovations in health and mental health. Medical Care Research and Review, 69(2), 123–157. doi:10.1177/1077558711430690
  • Prochaska, J. O., & Velicer, W. F. (1997). The transtheoretical model of health behavior change. American Journal of Health Promotion, 12, 38–48.
  • Proctor, E. K. (2004). Leverage points for the implementation of evidence-based practice. Brief Treatment and Crisis Intervention, 4(3), 227–242.
  • Proctor, E. K. (2007). Implementing evidence-based practice in social work education: Principles, strategies, and partnerships. Research on Social Work Practice, 17(5), 583–591. doi:10.1177/1049731507301523
  • Proctor, E. K., Knudsen, K. J., Fedoravicius, N., Hovmand, P., Rosen, A., & Perron, B. (2007). Implementation of evidence based practice in behavioral health: Agency director perspectives. Administration and Policy in Mental Health and Mental Health Services, 34, 479–488. doi:10.1007/s10488-007-0129-8
  • Proctor, E. K., Landsverk, J., Aarons, G., Chambers, D., Glisson, C., & Mittman, B. (2009). Implementation research in mental health services: An emerging science with conceptual, methodological, and training challenges. Administration and Policy in Mental Health and Mental Health Services Research, 36(1), 24–34. doi:10.1007/s10488-008-0197-4
  • Proctor, E. K., Landsverk, J., Baumann, A. A., Mittman, B. S., Aarons, G. A., Brownson, R. C., et al. (2013). The implementation research institute: Training mental health implementation researchers in the United States. Implementation Science, 8, 105.
  • Proctor, E. K., Powell, B. J., Baumann, A. A., Hamilton, A. M., & Santens, R. L. (2012). Writing implementation research grant proposals: Ten key ingredients. Implementation Science, 7, 96. doi:10.1186/1748-5908-7-96
  • Proctor, E. K., Powell, B. J., & McGinnis, H. A. (2012). Implementation science and practice. New York, NY: Oxford University Press.
  • Proctor, E. K., Powell, B., & McMillen, J. C. (2013). Implementation strategies: Recommendations for specifying and reporting. Implementation Science, 8, 139.
  • Proctor, E. K., & Rosen, A. (2008). From knowledge production to implementation: Research challenges and imperatives. Research on Social Work Practice, 18(4), 285–291. doi:10.1177/1049731507302263
  • Proctor, E., Silmere, H., Raghavan, R., Hovmand, P., Aarons, G., Bunger, A., et al. (2011). Outcomes for implementation research: Conceptual distinctions, measurement challenges, and research agenda. Administration and Policy in Mental Health and Mental Health Services Research, 38(2), 65–76. doi:10.1007/s10488-010-0319-7
  • Rabin, B. A., Purcell, P., Naveed, S., Moser, R. P., Henton, M. D., Proctor, E. K., et al. (2012). Advancing the application, quality and harmonization of implementation science measures. Implementation Science, 7, 119. doi:10.1186/1748-5908-7-119
  • Raghavan, R. (2012). The role of economic evaluation in dissemination and implementation research. In R. Brownson, G. Colditz, & E. Proctor (Eds.), Dissemination and implementation research in health: Translating science to practice (pp. 94–113). New York, NY: Oxford University Press.
  • Raghavan, R., Bright, C. L., & Shadoin, A. L. (2008). Toward a policy ecology of implementation of evidence-based practices in public mental health settings. Implementation Science, 3(26), 1–9. doi:10.1186/1748-5908-3-26
  • Ragins, B. R., & Kram, K. E. (Eds.). (2007). The handbook of mentoring at work: Theory, research, and practice. Thousand Oaks, CA: Sage.
  • Rapp, C. A., Etzel-Wise, D., Marty, D., Coffman, M., Carlson, L., Asher, D., et al. (2008). Evidence-based practice implementation strategies: Results of a qualitative study. Community Mental Health Journal, 44(3), 213–224. doi:10.1007/s10597-007-9109-4
  • Redman, R. W. (2006). Leadership strategies for uncertain times. Research and Theory for Nursing Practice, 20(4), 273–275.
  • Rogers, E. M. (2003). Diffusion of innovations (5th ed.). New York, NY: Free Press.
  • Rosen, A., & Proctor, E. K. (Eds.). (2003). Developing practice guidelines for social work intervention. New York, NY: Columbia University Press.
  • Rosen, A., Proctor, E. K., & Staudt, M. (1999). Social work research and the quest for effective practice. Social Work Research, 23(1), 4–14. doi:10.1093/swr/23.1.4
  • Roth, A., & Fonagy, P. (2005). What works for whom? A critical review of psychotherapy research. New York, NY: Guilford.
  • Saldana, L., Chamberlain, P., Bradford, W. D., Campbell, M., & Landsverk, J. (2013). The cost of implementing new strategies (COINS): A method for mapping implementation resources using the stages of implementation completion. Children and Youth Services Review. Advance online publication. doi:10.1016/j.childyouth.2013.10.006
  • Saldana, L., Chamberlain, P., Wang, W., & Brown, C. H. (2012). Predicting program start-up using the stages of implementation measure. Administration and Policy in Mental Health and Mental Health Services, 39(6), 419–425. doi:10.1007/s10488-011-0363-y
  • Shortell, S. M. (2004). Increasing value: A research agenda for addressing the managerial and organizational challenges facing health care delivery in the United States. Medical Care Research and Review, 61(3), 12S–30S. doi:10.1177/1077558704266768
  • Shumway, M., Saunders, T., Shern, D., Pines, E., Downs, A., Burbine, T., et al. (2003). Preferences for schizophrenia treatment outcomes among public policy makers, consumers, families, and providers. Psychiatric Services, 54, 1124–1128.
  • Solberg, L. I., Brekke, M. L., Fazio, C. J., Fowles, J., Jacobsen, D. N., Kottke, T. E., et al. (2000). Lessons from experienced guideline implementers: Attend to many factors and use multiple strategies. Joint Commission Journal on Quality Improvement, 26(4), 171–188.
  • Soydan, H. (2008). Applying randomized controlled trials and systematic reviews in social work research. Research on Social Work Practice, 18(4), 311–318. doi:10.1177/1049731507307788
  • Soydan, H., Mullen, E. J., Alexandra, L., Rehnman, J., & Li, Y-P. (2010). Evidence-based clearinghouses in social work. Research on Social Work Practice, 20(6), 690–700. doi:10.1177/1049731510367436
  • Stetler, C. B., Mittman, B. S., & Francis, J. (2008). Overview of the VA quality enhancement research initiative (QUERI) and QUERI theme articles: QUERI series. Implementation Science, 3(8), 1–9. doi:10.1186/1748-5908-3-8
  • Substance Abuse and Mental Health Services Administration. (2010). Results from the 2009 National Survey on Drug Use and Health: Volume I. Summary of national findings (Office of Applied Studies, NSDUH Series H-38A, HHS Publication No. SMA 10-4856 Findings). Rockville, MD: Author.
  • Sullivan, G., Duan, N., Mukherjee, S., Kirchner, J., Perry, D., & Henderson, K. (2005). The role of services researchers in facilitating intervention research. Psychiatric Services, 56, 537–542.
  • Sutphen, R. D., Ford, J. P., & Flaherty, C. (2010). Truancy interventions: A review of the research literature. Research on Social Work Practice, 20(2), 161–171. doi:10.1177/1049731509347861
  • Tabak, R. G., Khoong, E. C., Chambers, D. A., & Brownson, R. C. (2012). Bridging research and practice: Models for dissemination and implementation research. American Journal of Preventive Medicine, 43(3), 337–350. doi:10.1016/j.amepre.2012.05.024
  • Valente, T. W. (1995). Network models of the diffusion of innovations. Cresskill, NJ: Hampton Press.
  • Valente, T. W. (2012). Network interventions. Science, 337(6090), 49–53. doi:10.1126/science.1217330
  • Varvasovszky, Z., & Brugha, R. (2000). How to do (or not to do) a stakeholder analysis. Health Policy and Planning, 15(3), 338–345. doi:10.1093/heapol/15.3.338
  • Wallerstein, N., & Duran, B. (2010). Community-based participatory research contributions to intervention research: The intersection of science and practice to improve health equity. American Journal of Public Health, 100(S1), S40–S46. doi:10.2105/AJPH.2009.184036
  • Wang, P. S., Lane, M., Olfson, M., Pincus, H. A., Wells, K. B., & Kessler, R. C. (2005). Twelve-month use of mental health services in the United States: Results from the National Comorbidity Survey Replication. Archives of General Psychiatry, 62(6), 629–640. doi:10.1001/archpsyc.62.6.629
  • Webster-Stratton, C. (1984). A randomized trial of two parent-training programs for families with conduct-disordered children. Journal of Consulting and Clinical Psychology, 52(4), 666–678. doi: 10.1037/0022-006X.52.4.666
  • Webster-Stratton, C. (1998). Preventing conduct problems in Head Start children: Strengthening parenting competencies. Journal of Consulting and Clinical Psychology, 66(5), 715–730. doi:10.1037/0022-006X.66.5.715
  • Weissman, M. M., Verdeli, H., Gameroff, M. J., Bledsoe, S. E., Betts, K., Mufson, L., et al. (2006). National survey of psychotherapy training in psychiatry, psychology, and social work. Archives of General Psychiatry, 63, 925–934.
  • Wensing, M., Bosch, M., & Grol, R. (2009). Selecting, tailoring, and implementing knowledge translation interventions. In S. Straus, J. Tetroe, & I. D. Graham (Eds.), Knowledge translation in health care: Moving from evidence to practice (pp. 94–113). Oxford, UK: Wiley–Blackwell.
  • Wensing, M., & Grol, R. (2005). Methods to identify implementation problems. In R. Grol, M. Wensing, & M. Eccles (Eds.), Improving patient care: The implementation of change in clinical practice (pp. 109–120). Edinburgh: Elsevier.
  • Wensing, M., Weijden, T. V. D., & Grol, R. (1998). Implementing guidelines and innovations in general practice: Which interventions are effective? British Journal of General Practice, 48, 991–997.
  • Westfall, J. M., Mold, J., & Fagnan, L. (2007). Practice-based research: “Blue Highways” on the NIH roadmap. Journal of the American Medical Association, 297, 403–406. doi:10.1001/jama.297.4.403
  • Wiltsey Stirman, S., Kimberly, J., Cook, N., Calloway, A., Castro, F., & Charns, M. (2012). The sustainability of new programs and innovations: A review of the empirical literature and recommendations for future research. Implementation Science, 7(17). doi:10.1186/1748-5908-7-17
  • Woltmann, E. M., Whitley, R., McHugo, G. J., Brunette, M., Torrey, W. C., Coots, L., et al. (2008). The role of staff turnover in the implementation of evidence-based practices in mental health care. Psychiatric Services, 59(7), 732–737.
  • Yancey, A. K., Glenn, B. A., Bell-Lewis, L., & Ford, C. L. (2012). Dissemination and implementation research in populations with health disparities. In R. C. Brownson, G. A. Colditz, & E. K. Proctor (Eds.), Dissemination and implementation research in health: Translating science to practice (pp. 459–482). New York, NY: Oxford University Press.
  • Zayas, L. H., Bellamy, J. L., & Proctor, E. K. (2012). Considering the multiple service contexts in cultural adaptations of evidence-based practice. In R. C. Brownson, G. A. Colditz, & E. K. Proctor (Eds.), Dissemination and implementation research in health: Translating science to practice (pp. 483–497). New York, NY: Oxford University Press.