Program Implementation Frameworks
Abstract and Keywords
This entry presents frameworks for implementing effective services. When service organizations understand and work through implementation frameworks, programs can achieve targeted fidelity and client outcomes in a sustainable manner while enhancing practitioner competence and confidence, and improving organizational culture and climate. These frameworks should be but are not yet infused throughout social work curricula. They provide a practical and conceptual bridge for supporting effective delivery of evidence-based or empirically informed practices.
Evidence-based practice emerged from the field of medicine in the 1990s (Sackett et al., 1996; Straus, Glasziou, Richardson, & Haynes, 2011), sparking debates between its proponents and opponents in social work (Howard, McMillen, & Pollio, 2003; Rubin & Parrish, 2007). As the debate extended across disciplines, a key concern was how to ensure the accurate uptake of research into practice settings. Encompassing the examination of professional and organizational behavior, Eccles and Mittman (2006) defined implementation science as the study of means to promote the systematic uptake of research findings to ensure the quality and effectiveness of service.
Prior to 2005, there was little consensus about the organizational infrastructure necessary to achieve and sustain program fidelity and client outcomes. However, in that year, the National Implementation Research Network (NIRN) published a seminal review of over three decades of empirical studies that identified what contributes to improved products and services in corporate business, agribusiness, hospital administration, medical and nursing services, social services, education, and other disciplines. From these sources, they identified three overarching and integrated frameworks: intervention components, implementation components, and stages of implementation (Fixsen, Naoom, Blase, Friedman, & Wallace, 2005). NIRN’s monograph offered a means to synthesize discussions about model fidelity and outcomes, as well as program development and implementation across federal, state, and local initiatives (Bertram, Blase, Shern, Shea, & Fixsen, 2011). These frameworks provided a common focus and emerged in near concurrence with a respected new journal (Implementation Science). This synergy of efforts was well represented at the initial biennial Global Implementation Conference, which attracted over 800 participants to Washington, DC, from every continent but Antarctica in August 2011 (Bertram, Blase, & Fixsen, 2013). This entry suggests how program administrators can think through and apply implementation frameworks. These frameworks have been taught in specific graduate social work courses but can and should be infused throughout social work curricula (Bertram, King, Geary, & Nutt, in press).
Intervention Components, Program Implementation, and Social Work Curricula
Before providing service, a program should carefully attend to the characteristics of its target population and to the selection of its practice model. The necessary participants, elements, activities, and phases of a practice model should be carefully considered: how will they improve client context and outcomes or address client needs? Matching the service model to population needs is an emphasis in social work, especially in discussions about evidence-based practice (Howard et al., 2003; Mullen, Bledsoe, & Bellamy, 2008).
NIRN’s intervention component framework includes these two concerns and three more (Fixsen et al., 2005). An administrative focus through the five NIRN intervention components provides a sound foundation for exploration, purposeful selection, clarification, improvement, and systematic implementation of a program’s practice model (Bertram, Blase, & Fixsen, 2013; Fixsen et al., 2005). This framework includes (a) model definition (who should be engaged, how, and in what elements, activities, and phases of service delivery); (b) theory base(s) supporting those elements and activities; (c) target population characteristics (behavioral, cultural, socioeconomic, and other factors that suggest a good match with the practice model); (d) theory of change (how those elements and activities create improved outcomes for the target population); and (e) alternative models (a rationale for why the service organization rejects other practice models).
However, this exploration and the subsequent decision to adopt a practice model will not ensure an organization’s ability to achieve and sustain improved outcomes for its target population. All too often, clinicians are allowed to independently select and apply elements of a promising or evidence-based practice for each client. This will not ensure improved client outcomes, nor will such outcomes be sustained if that practitioner leaves the organization. At the heart of implementation science is the explicit understanding that selecting an evidence-based practice, an empirically supported intervention, or other practice model is but the first step toward effective program implementation. Organizations must adjust specific component activities and infrastructure to support any practice model (Bertram, Blase, Shern, et al., 2011; Bertram, King, et al., in press; Bertram, Suter, Bruns, & O’Rourke, 2011; Fixsen et al., 2005; Mullen et al., 2008). Without these adjustments, implementation of even tested practice models may lack fidelity and prove ineffective, inefficient, and unsustainable (Henggeler, Pickrel, & Brondino, 1999; see Figure 1).
Unfortunately, knowledge of NIRN frameworks and the ability to critically examine, choose, and implement an evidence-based practice model may not yet be a consistent product of graduate academic programs (Barwick, 2011; Bertram, King, et al., in press). Evidence-based or evidence-informed practice requires social workers to be able to find literature about practice models to guide their selection and implementation (Aarons & Sawitzky, 2006; Bellamy, Bledsoe, Mullen, Fang, & Manuel, 2008; Howard et al., 2003; Manuel, Mullen, Fang, Bellamy, & Bledsoe, 2009). However, students often enter graduate studies with a primary desire to provide counseling in private practice, and they are not interested in seeking data to inform such practice or to evaluate programs in which they are employed (D’Aprix, Dunlap, Abel, & Edwards, 2004; Green, Bretzin, Leininger, & Stauffer, 2001).
Social work curricula may not sufficiently challenge these perspectives. Often research courses in master of social work (MSW) programs primarily teach single-subject case studies (Hardcastle & Bisman, 2003; Rubin, Robinson, & Valutis, 2010) rather than conceptual frameworks bridging theory, practice, policy, research, and program evaluation. Instead of seeking data and literature about a target population to compare with the key elements, activities, and outcomes of a promising, empirically informed, or evidence-based practice model, social workers often rely upon overview texts from their practice or human behavior courses, or they seek guidance from peers (Bertram, King, et al., in press; Howard et al., 2003; Smith, Cohen-Callow, Hall, & Hayward, 2007).
This phenomenon is not limited to graduates of social work programs. A recent survey asked supervisors and administrators (n = 589) of agencies serving youth and families in Canada and the United States how their masters-level clinicians with degrees in social work, psychology, or counseling learn evidence-based practice (Barwick, 2011). Most of the responding supervisors and administrators (73%) believed that evidence-based practice was important but noted that the necessary research and analytical skills, including the appraisal and use of empirical literature, were abilities developed in the work setting rather than in graduate studies.
Finally, some practitioners believe that specifying the key elements, activities, and phases of a service model, and even measuring fidelity, constrains the creativity needed to provide effective, individualized service (Addis, Wade, & Hatgis, 1999). Roberts and Yeager (2004) suggest that evidence-based practices are viewed with skepticism by academics who perceive the emphasis on empirical testing and use of specifically defined models as reductionist.
Model Definition and Theory Base(s)
The previous examples highlight factors contributing to the frequently discussed gap between research and practice (Institute of Medicine, 2006) that was recently addressed through a two-day symposium engaging prominent published voices from social work academic and service programs (University of Houston Graduate College of Social Work, 2013). However, as discussed in that symposium, implementation frameworks can help bridge this gap, and programs and practitioners can immediately benefit from thinking through the NIRN intervention component framework (Bertram, Blase, & Fixsen, 2013). There should be a clear rationale for who is engaged in the elements, activities, and phases of a practice model. A service organization can clarify and describe these elements, activities, and phases in a program manual that reinforces training and guides practitioners, their supervisors, and coaches (Fixsen et al., 2005).
This rationale for engaging participants in the specified elements, activities, or phases of a program model may have multiple theory bases. When this is so, it is important that the theory bases complement or are congruent with each other. Who is engaged, the focus and process of assessment, and how interventions are selected and designed should all focus through and be congruent with a practice model’s theory base(s) and should be a good match to client context or characteristics (Bertram, Blase, & Fixsen, 2013). For example, multisystemic therapy (MST) clearly and explicitly embraces ecological systems theory (Bronfenbrenner, 1979) as a unifying theory base. This theory base supports a search for multiple contributing factors to change in the youth, family, peers, school, and community that shape youth behavior. This clarity enables MST purveyors to train clinicians efficiently and effectively, as well as to pursue specific case data to evaluate model fidelity and inform staff coaching and development (Henggeler, Schoenwald, Borduin, Rowland, & Cunningham, 2009).
But all too often programs or practitioners assert that they use an eclectic approach based upon each client’s needs. For example, a review of evaluations of program implementation at 34 MSW student field placement sites in Kansas City identified that staff often suggested that a systems theory construct such as “person-in-environment” shaped assessments and interventions. However, these respondents also asserted that constructs from individual psychodynamic theory such as projection and transference were additional bases for assessment and intervention activities. Invariably, when there were incongruent or unclear theory bases, the practice model was not well defined or supported (Bertram, King, et al., in press).
Target Population Characteristics and Alternative Models
Careful consideration of target population characteristics demands more than identification of service demographics. Age, gender, ethnicity and race, socioeconomic and cultural factors, behaviors of concern, and multisystem engagement and organizational factors should all shape selection or rejection of a practice model. In considering the match (or lack thereof) between these factors and the key elements, activities, and phases of a practice model, the social work practitioner or service organization addresses another NIRN intervention component, alternative models (and why they were rejected). Given a set of characteristics in the target population, there should be consciously chosen reasons for attempting to provide a particular practice, and just as conscious a rationale for choosing not to provide a different model (Bertram, Blase, Shern, et al., 2011; Fixsen et al., 2005). For example, individual psychotherapy is not a productive practice with gang-affiliated youth, whose aggressive or substance-using behaviors are shaped by interactions within and between the community, family, school, and peer group. Removing such youth from prosocial peer interactions and placing them in restrictive program settings with similar antisocial peers tends to reinforce antisocial behavior (Henggeler et al., 2009).
Model Definition, Target Population Characteristics, and Theory of Change
Target population characteristics should also be considered in light of a program’s theory of change. How do key program activities and elements align with or contribute to desired improvements in the client’s context or to diminishing the behaviors of concern of the target population? How will engaging essential participants in assessment, as well as in the design and delivery of interventions, diminish or eliminate factors contributing to those behaviors of concern? If implemented with fidelity, how will these participants, elements, and activities produce improved client outcomes?
Careful consideration of model definition and target population characteristics in thinking through the theory of change should comprise initial steps taken by administrators and communities before they seek to secure or commit funds and resources to making adaptations to a current practice model or to installation of a new program (Bertram, Blase, Shern, et al., 2011). This process of thinking through NIRN’s intervention component framework should also serve as an integrative focus across administrative, theory, research, policy, and practice courses in social work curricula. As will be seen in the next section describing NIRN implementation components, the framework of intervention components informs administrative and organizational decisions regarding selection, training, and coaching of staff, and ultimately performance assessments of fidelity and attention to target population outcomes.
Implementation Drivers: Organizational Change and Social Work Curricula
Program intervention components should always be fully considered and supported. Too often evidence-based or evidence-informed practices suffer when practitioners or programs apply only certain elements or activities, thus ignoring that the research informing the practice and establishing its effectiveness focused not on techniques or single constructs, but rather on delivery with fidelity of the entire practice model (Bertram, King, et al., in press). Service delivery can improve if all of the elements, activities, and phases that define the practice model and their supporting theory base(s) inform organizational adjustments focused through the second NIRN framework, implementation drivers (Bertram, Blase, & Fixsen, 2013; Bertram, Blase, Shern, et al., 2011; Fixsen, Blase, Naoom, & Wallace, 2009). Active consideration of this framework can also support horizontal and vertical integration of theory, practice, administration, policy, and research courses in social work curricula.
Implementation drivers (see Figure 2) are essential components of organizational infrastructure that support high-fidelity, effective, sustainable programs (Bertram, Blase, & Fixsen, 2013; Bertram, Blase, Shern, et al., 2011; Blase, Van Dyke, Fixsen, & Bailey, 2012). Competency drivers develop the competence and confidence of practitioners through model-pertinent staff selection, training, coaching, and performance assessment. Organization drivers establish the administrative, funding, policy, and procedural environments that ensure the competency drivers are consistent, integrated, accessible, and effective. They also establish and support continuous quality monitoring and improvement feedback loops while attending to client outcomes (Bertram, Blase, & Fixsen, 2013; Bertram, Blase, Shern, et al., 2011; Blase et al., 2012; Fixsen, Blase, Naoom, & Wallace, 2009). Depending upon circumstances, adjusting or developing competency and organization drivers requires leadership able to discriminate between adaptive and technical challenges and to apply the appropriate strategies and expertise. In NIRN’s implementation framework, these are called leadership drivers (Bertram, Blase, & Fixsen, 2013; Bertram, Blase, Shern, et al., 2011; Heifetz & Laurie, 1997).
While many components of each of these drivers usually exist in organizations, they must be thoughtfully repurposed and integrated to promote effective implementation of a practice model. This begins with thinking through the intervention components. Then implementation drivers should be adjusted so they function in an integrated and compensatory manner. Organized in this way, weaknesses or limitations in one driver can be addressed by other drivers (Bertram, Blase, & Fixsen, 2013; Bertram, Blase, Shern, et al., 2011). For example, to foster staff competence and confidence, model-pertinent, data-informed coaching may compensate for limited training funds or opportunities. Finally, when implementation drivers are carefully considered and adjusted to support a program’s practice model, organizational culture and climate will gradually be reshaped (Bertram, Schaffer, & Charnin, in press; Fixsen, Blase, Naoom, & Wallace, 2009; Kimber, Barwick, & Fearing, 2012).
Competency drivers (see Figure 2) promote model-pertinent competence and confidence so that high-fidelity and improved client outcomes occur and are sustainable. By considering staff selection, training, coaching, and performance assessment in light of the practice model’s intervention components, competency drivers can function in an integrated and compensatory manner with each other and with other implementation drivers (Bertram, Blase, & Fixsen, 2013; Bertram, Blase, Shern, et al., 2011).
For example, administrators can think through, and social work educators can teach students to consider, model definition, theory base(s), target population characteristics, and theory of change as an integrative basis for identifying the knowledge, skills, and aptitude needed to deliver a practice model. But even with the best staff selection criteria, most candidates for a position will not have a fully developed set of model-pertinent knowledge and skills. Preservice training can begin to enhance staff abilities, and each participant will develop different degrees of proficiency. If the focus of subsequent coaching is integrated with that training and is informed by model-pertinent case data, then posttraining limitations in knowledge and skill can be addressed (Barwick, 2011; Bertram, Suter, et al., 2011; Kimber et al., 2012). All competency drivers should target the selection and enhancement of the knowledge, skills, and aptitude needed to implement the program’s practice model effectively with fidelity (performance assessment). Thus, the performance assessment driver also measures how well the other implementation drivers are functioning to promote competence and confidence; to the extent that they do, organizational climate and culture improve (Bertram, Schaffer, et al., in press; Bertram, Blase, Shern, et al., 2011; Kimber et al., 2012).
Staff selection is often not discussed and is seldom evaluated in the literature (Fixsen et al., 2005). A review of over two decades of literature on the wraparound model, focused through the 2005 NIRN implementation frameworks, found no publications regarding staff selection criteria or processes (Bertram, Suter, et al., 2011). In a review of evaluations of program implementation at 34 service sites in or near Kansas City, the most common criterion used by programs in selecting staff was educational background and/or licensure. Only a few sites sought staff with experience, knowledge, or aptitude for engaging the target population, and most of these were psychiatric settings (Bertram, King, et al., in press). While it may be necessary to select licensed staff to meet insurance or funding requirements, it is critically important that staff selection criteria also seek model-pertinent or target-population-specific knowledge, skills, and aptitude (Bertram, Blase, & Fixsen, 2013; Fixsen, Blase, Naoom, & Wallace, 2009).
Some model-pertinent attributes may not be easily developed through training or coaching and therefore must be part of predetermined hiring or selection criteria. For example, compassion, patience, and comfort with cognitive, verbal, or behavioral challenges of a developmentally disabled client might be a prerequisite for staff selection at an independent living placement or for a work setting serving this population. Comfort with diverse and conflicting professional perspectives might be a criterion for a team facilitation role in multidisciplinary responses to domestic violence or child abuse (Bertram, Blase, Shern, et al., 2011; Fixsen, Blase, Naoom, & Wallace, 2009).
Effective and sustainable implementation of any program requires behavior change in service providers, their supervisors, and administrators. When practitioners, supervisors, and other staff are carefully selected, training and coaching can more efficiently drive and enhance this behavior change. Training should develop shared knowledge of population characteristics and context, as well as the rationale for applying a specific program model. Program participants, elements, activities, phases, theory base(s), and theory of change should be understood throughout the organization. Finally, effective training should provide opportunities to practice model-pertinent skills and activities while receiving constructive feedback in a safe environment (Bertram, Blase, & Fixsen, 2013; Blase et al., 2012). Organizational outcomes (also called implementation outcomes) related to this competency driver are measurable. Through careful consideration of intervention components, an organization can evaluate pre- and posttraining changes in staff knowledge and skills. These data provide baseline information for subsequent individualized coaching toward further staff development. Integrated examination of such data with fidelity performance assessments can then guide administrative evaluation of the training and coaching drivers (Bertram, Blase, Shern, et al., 2011; Bertram, Schaffer, et al., in press; Bertram, Suter, et al., 2011; Kimber et al., 2012).
In addition to promoting knowledge and skill development, training supports improved staff investment in and understanding of the program model. However, increasingly competent and confident use of any service model requires skillful on-the-job coaching (Ager & O’May, 2001; Denton, Vaughn, & Fletcher, 2003; Schoenwald, Sheidow, & Letourneau, 2004). Coaching is most effective when it uses multiple forms of information in an improvement cycle loop (for example, observe, coach, data feedback, plan, reobserve). Coaching should always include some form of direct observation (for example, in-person, audio, video) to accurately assess and develop staff skills and judgment. Best practices in coaching include developing and adhering to the formats, frequency, and focus delineated in a coaching plan, as well as ensuring that supervisors and coaches are well selected, trained, coached, and held accountable for enhancing staff development (Bertram, Blase, Shern, et al., 2011; Henggeler et al., 2009; Schoenwald, Brown, & Henggeler, 2000).
Unfortunately, many organizations confuse supervision with coaching. For example, in a review of evaluations of program implementation at 34 service sites in or near Kansas City, most respondents indicated that coaching or supervision activities were ad hoc rather than systematic, and were neither data informed nor focused upon enhancing model-pertinent knowledge and skills. Instead, supervision focused upon risk containment or harm reduction in the most problematic cases, while also addressing bureaucratic administrative concerns. Further diminishing staff development, and often in lieu of a well-considered coaching plan, many respondents in these organizations proudly noted that they offered staff the opportunity to take leave for additional training to earn continuing education credits toward licensure (Bertram, King, et al., in press). Such approaches to staff development ignore the fact that training alone is insufficient to develop model-specific staff confidence and competence (Fixsen, Blase, et al., 2009; Schoenwald et al., 2004).
Creating competent, model-pertinent practitioner performance is the responsibility of the service organization. As a driver of effective, sustainable program implementation, performance assessment should examine two forms of model fidelity. One is related to practitioner enactment of key elements, activities, and phases of the program model. Case-specific measures of model fidelity are necessary to evaluate how well the competency drivers of staff selection, training, and coaching are operating (Schoenwald et al., 2004).
The second type of fidelity that should be routinely examined is organizational performance in each of the implementation drivers. For example, is training provided as planned and intended? Are pre- and posttraining tests of model-pertinent and population-specific knowledge and skills informing individualized coaching plans? Does that coaching occur as scheduled? How does it reinforce training content? How well and how frequently is coaching informed by model-pertinent case data and by observations of practice? With such data the fidelity and effectiveness of staff selection, training, and coaching can be assessed. These data may suggest specific adjustments to policy or procedure, as well as systems-level factors requiring attention because they constrain achieving model fidelity or desired client outcomes. Performance assessment informs continuous quality improvement of both organization drivers and competency drivers, as purveyors, administrators, supervisors, and practitioners use implementation data to guide staff and program development (Bertram, Blase, & Fixsen, 2013; Bertram, Blase, Shern, et al., 2011; Bertram, Schaffer, et al., in press; Schoenwald et al., 2004; Kimber et al., 2012).
When performance assessment measures demonstrate that staff selection, training, or coaching are integrated and functioning with fidelity, then the organization drivers of facilitative administration, data support, and systems-level intervention are probably model focused and well integrated (Bertram, Blase, & Fixsen, 2013). Improved organizational culture and climate (Aarons & Sawitzky, 2006; Kimber et al., 2012) emerge when model-pertinent, integrated organization drivers support and sustain effective use of competency drivers that systematically review performance assessment and outcomes data for continuous quality improvement (see Figure 3). Finally, consideration and adjustments of these organization drivers prior to service delivery provide administrators and the community with a practical assessment of agency- and system-level readiness to deliver and sustain a new program or a better defined practice model (Bertram, Blase, Shern, et al., 2011; Bertram, Schaffer, et al., in press; Fixsen, Blase, et al., 2009). The integrated and compensatory nature of organization and competency drivers offers a clear framework for social work curricula, particularly in courses focused upon program administration and evaluation and staff supervision.
To provide clients with high-quality, effective service through a carefully selected practice model, administrators must be proactive. They should begin with identification of desired outcomes and work backward through NIRN frameworks to facilitate organizational change. Working within and through intervention and implementation frameworks, the goal of facilitative administration should be to adjust work conditions to accommodate and support the new functions needed to implement the practice model effectively with fidelity. This begins with exploration and assessment of target population or community needs and of organizational capacity to implement the program. Activities related to this organization driver specifically focus on what is required to implement the chosen model effectively with fidelity and to sustain implementation over time and through turnover in practitioners and administrations (Fixsen, Blase, et al., 2009). During program installation, existing policies, procedures, and data support systems must receive close scrutiny. Are they appropriate for the practice model? Are there adequate human and technical resources, and how might they be repurposed or reorganized to best effect (Bertram, Blase, Shern, et al., 2011)?
However, administrators can also think through implementation frameworks to correct and improve current services, as occurred at the SAMHSA Children’s Mental Health Initiative grant site in Houston, Texas. A participatory evaluation of program implementation conducted by a team of family members, supervisors, administrators, and an implementation consultant identified multiple factors compromising wraparound model fidelity. Job descriptions, caseload size, training content, coaching, and decision support data systems required model-pertinent repurposing and integration. Given wraparound’s elements and activities, caseloads were too large and were administratively reduced from twenty to a more manageable eight cases per wraparound care coordinator. Two key position descriptions were nearly alike and resembled the case management descriptions used in the organization’s other programs. These position responsibilities were clarified and rewritten. Supervisor responsibilities were reorganized so that all staff working with the same family would receive coaching from the same supervisor, rather than through the organization’s accustomed structure of providing a different supervisor for each type of position. Revised training clarified and operationalized the theory bases supporting wraparound elements and activities. Case data forms were revised to reinforce new training content while informing a systematic approach to staff development through regularly scheduled coaching rather than ad hoc, risk-containment supervision. Biweekly Skype review of these data by the consultant, supervisors, and administrators identified subsequent implementation patterns and guided further adjustments to the focus, frequency, and formats of coaching. After 18 months of these integrated organizational changes, both Wraparound Fidelity Index (WFI-4) scores and target population outcomes improved to above the national mean (Bertram, Schaffer, et al., in press).
The Houston experience also provides a good example of the integrated and compensatory nature of competency and organization drivers. When organized in this manner, practice can inform policy, and policy can then be adjusted to enhance practice. In implementation frameworks, these are called practice-informed policy (PIP) and policy-enabled practice (PEP) cycles of information and change. In these informational cycles, administrators initially track fidelity and outcome data to identify and correct model drift. They seek and respond to feedback provided directly from the practice level regarding factors that constrain or facilitate both implementation outcomes and target population outcomes (Bertram, Blase, Shern, et al., 2011). Transparent, responsive PIP and PEP feedback loops manifest continuous quality improvement through repeated cycles of planning, doing, evaluating, adjusting, reevaluating, and engaging in new plans to make further improvements. In so doing, facilitative administration reshapes organizational culture and climate to focus upon and actively support the achievement and sustainability of improved implementation fidelity and target population outcomes (Kimber et al., 2012). Later, when benchmarks for fidelity and outcomes are consistently achieved, the PIP and PEP cycles help administrators identify and facilitate development and testing of useful adaptations to the program’s practice model (Blase et al., 2012; Schoenwald et al., 2004).
Program implementation unfolds in a changing context of federal, state, community, and organizational factors, each of which may be shaped by changing cultural, socioeconomic, or political concerns. These factors unfold unevenly with differing effect and may constrain a program’s ability to achieve desired fidelity or client outcomes. When these factors align in a constraining manner that compromises program fidelity, outcomes, or sustainability, administrators must engage decision makers at a systems level to build consensus on the nature of the challenge and how to address it (Bertram, Blase, Shern, et al., 2011; Fixsen, Blase, et al., 2009; Fixsen, et al., 2005).
An excellent example of the systems-level intervention driver in action was reported in Kansas City, where administrators and supervisors from the multiple systems engaged in Missouri’s Children’s Division family support team model jointly examined factors contributing to its diminished fidelity (Bertram, King, et al., in press). A similar process had previously occurred in Kansas City’s multisystem investigation of and response to reports of child sexual abuse (Bertram, 2008). Analysis of constraining or supporting factors influencing program fidelity and target population outcomes is the responsibility of a vigilant facilitative administration that identifies and intervenes at the systems level. Influential persons from each system must be engaged to create consensus on the nature of the challenge, and then to facilitate and sustain adjustments to policies, practices, or funding mechanisms so that a program model can be implemented with fidelity and achieve desired outcomes (Bertram, Blase, & Fixsen, 2013; Bertram, Blase, Shern, et al., 2011).
Decision Support Data System
Model-pertinent data to guide decisions about organizational and staff performance, as well as about client outcomes, are essential for continuous quality improvement through PIP and PEP information cycles. These data help sustain the program model. As an organizational driver of implementation, the decision support data system should provide timely and valid information related to model fidelity for correlation with outcomes data. Data should be easily accessible and understandable for use by purveyors, administrators, supervisors, and staff to support evaluation and development of staff competencies as well as continual organizational quality improvement. Data systems truly become decision support data systems when model-pertinent information is understandable and readily available to guide decisions by staff at every level of the organization to improve implementation and client outcomes (Bertram, Blase, Shern, et al., 2011; Fixsen, Blase, et al., 2009).
Ideally, decision support data systems should be established or repurposed during program installation and initial implementation. However, these adjustments can occur at any time. For example, in year four of a six-year SAMHSA Children’s Mental Health Initiative grant in Houston, Texas, administrators, supervisors, family members, and an implementation consultant determined that the existing data system did not support or inform wraparound implementation. Organized to support legally defined policy and practice requirements in child protective services, the site’s data system provided no model-pertinent information about wraparound team composition and structure; about the depth, breadth, or utility of wraparound team assessments; or about the design, efficiency, or effectiveness of wraparound team interventions. Without timely, model-pertinent case data to review, a risk containment supervisory focus shaped staff to seek guidance on an ad hoc basis during case crises. Thus, administrators or supervisors might not discover factors constraining effective, sustainable program implementation until grant-required aggregate fidelity and outcome data were generated each year. This was a disservice to clients and an ineffective and inefficient means to develop staff competence in wraparound. Therefore, model-pertinent data forms that reinforced revised training content from wraparound’s theory bases were developed and used in biweekly reviews by the consultant, administrator, and supervisors. These implementation reviews generated the focus and formats for regularly scheduled, systematic staff coaching. After 18 months, these adjustments improved staff confidence and proficiency as wraparound fidelity and client outcomes improved in diverse community settings to well above the national mean (Bertram, Schaffer, et al., in press).
Leadership
NIRN’s seminal monograph discussed the critical role of purveyors working with organization leadership to deliver a practice model effectively and with fidelity (Fixsen et al., 2005). These roles and responsibilities are now integrated and differentiated as types of leadership needed for different circumstances that require either technical or adaptive strategies (Bertram, Blase, Shern, et al., 2011; Fixsen, Blase, et al., 2009; Heifetz & Laurie, 1997; Heifetz & Linsky, 2002). Heifetz and Laurie (1997) stated that a common leadership error is applying technical leadership strategies under conditions that call for adaptive leadership strategies. Not all leaders are willing or able to recognize which is needed or to transition smoothly between technical and adaptive leadership strategies and styles. However, both are required for successful implementation and sustainability of outcomes (Bertram, Blase, & Fixsen, 2013; Bertram, Blase, Shern, et al., 2011).
Technical leadership is appropriate in circumstances characterized by greater certainty and agreement about both the nature of the challenge and about the correct course of action. Challenges under these conditions respond well to more traditional management approaches that focus on a single point of accountability with clearly understood and well-accepted methods and processes that are known to produce fairly reliable outcomes (Daly & Chrispeels, 2008; Waters, Marzano, & McNulty, 2003). For example, once established, staff selection criteria and processes should employ clear, rather routine procedures. Likewise, once established, the flow of information through a data system should also rely upon clearly defined procedures. A breakdown in either implementation driver’s procedure would be commonly and readily understood and would have obvious solutions. Problems related to such circumstances would be resolved through technical forms of leadership.
Adaptive leadership is necessary when there is less certainty and less agreement about both the definition of problems and their solutions. Complex, confusing, or less well understood conditions require adaptive leadership strategies, such as convening groups to seek a common understanding of the challenge and to generate possible solutions through group learning and consensus (Daly & Chrispeels, 2008; Waters, Marzano, & McNulty, 2003). The implementation drivers of coaching, facilitative administration, and systems-level interventions are more likely to require adaptive leadership strategies to determine what the problems are and what information and knowledge will be required to develop consensus about possible solutions, and then to monitor the results of attempted solutions (Fixsen, Blase, et al., 2009).
A practical example of adaptive leadership emerged in one of the programs evaluated in the review of program implementation at 34 Kansas City area service organizations. One county within the Missouri child protective services agency convened its administrators with administrative representatives from the family court and the guardian ad litem’s office to clarify and address challenges to fidelity of the family support team (FST) model in which each system’s staff participated. Though FST teams were intended to develop individualized service plans shaped by family voice, they consistently produced the same service recommendations for nearly every family situation. This compromise of program fidelity had to be analyzed and commonly understood by leaders of the participating systems and then, through building consensus, resolved (Bertram, King, et al., in press). The participatory evaluation of wraparound implementation in the Houston SAMHSA Children’s Mental Health Initiative grant offers another excellent example of adaptive leadership strategies. Instead of responding to concerns about wraparound fidelity and outcomes with more training (a technical solution), the team of family caregivers, supervisors, administrators, and consultant built consensus through evaluation, then made and monitored multiple adjustments to competency and organization drivers (Bertram, Schaffer, et al., in press).
Implementation Stages, Organizational Change, and Social Work Curricula
Program implementation is a process, a series of activities that focus through practice model intervention components to repurpose or create the drivers of implementation. When administrators consider intervention components and focus through implementation drivers, program implementation unfolds through four stages in a 2- to 4-year process. Without an integrated focus through and between the framework of intervention components and the framework of implementation drivers, organizations waste time as well as financial and human resources by addressing factors that constrain fidelity and client outcomes in a piecemeal manner. Without a systematic focus through the intervention and implementation components, organizations may even question the original match of program model to population and seek remedies by attempting model adaptations. NIRN emphasizes that program innovations should not be attempted until expected fidelity and client outcomes are achieved, and that program sustainability should be a focus of implementation activities in every stage (Bertram, Blase, & Fixsen, 2013; Bertram, Blase, Shern, et al., 2011). Implementation stages provide yet another meaningful focus for social work courses on policy, program administration and evaluation, and staff supervision.
Stages of Implementation
In 2005, NIRN’s seminal monograph discussed program implementation as a six-stage process (Fixsen et al., 2005). However, ensuing years have seen clarification and refinement of this framework (see Figure 3). Innovation of a program or practice model is no longer considered a separate stage. Instead, innovation is now seen as beginning a new process of implementation activities that should only be considered after a program achieves targeted benchmarks of fidelity and population outcomes (Bertram, Blase, Shern, et al., 2011; Winter & Szulanski, 2001). Attempting innovation in a practice model prior to achieving these goals may constrain full development and application of PIP and PEP information cycles that could suggest adjustments to implementation drivers to better support achieving desired fidelity and outcomes. Furthermore, innovation before achieving full implementation benchmarks can contribute to staff and organizational confusion and inefficiency (Bertram, Blase, Shern, et al., 2011).
Once full implementation benchmark targets for fidelity and outcomes are achieved, as innovations are considered, the service organization must readdress exploration, installation, and initial implementation stage activities. Though Figure 3 visually may appear to imply a linear progression through stages of implementation, a significant change in socioeconomic conditions, funding, leadership, staff turnover, or other events may require the organization to reconsider program implementation and readdress activities of earlier stages (Bertram, Blase, & Fixsen, 2013). For example, three changes in grant leadership as well as the traumatic, disruptive effects of two major hurricanes adversely affected wraparound implementation in Houston. By evaluating and adjusting implementation drivers, the SAMHSA Children’s Mental Health Initiative grant site consciously chose to return to addressing installation and initial implementation stage activities (Bertram, Schaffer, et al., in press).
Finally, program sustainability was initially conceived as an end stage of implementation (Fixsen et al., 2005). However, ensuing discussions of implementation within and across fields of endeavor have clarified that program sustainability is never an end point but instead is an essential concern and focus within each activity in each stage of implementation (Bertram, Blase, & Fixsen, 2013; Bertram, Blase, Shern, et al., 2011).
Exploration Stage
This stage has also been called “exploration and adoption.” In this stage, the assessment of community and organization resources, target population characteristics and needs, and their potential match with a practice model should consider both desired population outcomes and likely implementation outcomes. Unfortunately, thus far, most attention has been paid to client outcomes. For example, a review of two decades of literature on the wraparound model noted that most publications reporting outcomes examined client outcomes (n = 48), while only 15 publications presented implementation outcomes (Bertram, Suter, et al., 2011).
In the exploration stage of implementation, the program and its service organization should carefully consider the intervention components of its practice model. As discussed in previous sections of this entry, this process includes examination of target population characteristics, organization and community resources, as well as the practice model’s participants, elements, activities, and phases (model definition), the theory base(s) of these elements and activities, and the practice model’s theory of change. This exploration should guide the service organization’s decision to proceed or not to proceed with implementation of a program. However, as noted in this entry’s discussion of the Houston experience, thinking through intervention components and implementation drivers can also guide reconsidering implementation of current service models.
Potential resources, supports, and barriers should be examined in this stage. This includes but is not limited to funding sources and their requirements, current versus needed staffing patterns, referral sources, and other organization- and systems-level changes that may be needed to support sustainable implementation of the program with fidelity to achieve desired client outcomes (Fixsen, Blase, et al., 2009). This exploration should address questions such as: Is this program or practice model appropriate for the target population, and what outcomes might it produce for that population and for the organization? If we proceed, what tasks and timelines are needed to facilitate its installation and initial implementation? This stage should end with a definitive decision and an implementation plan. Proactive, small adjustments made in this stage of exploration can reap great benefits and efficiencies. Rushing this exploration will amplify future problems and challenges as the organization installs the program and begins service delivery (Bertram, Blase, & Fixsen, 2013; Bertram, Blase, Shern, et al., 2011).
Installation Stage
After a decision is made to begin a new program or to improve a current service model, there are key tasks to accomplish before consumers and other participants experience a change in practice. These tasks and associated activities define the installation stage of program implementation. In this stage, resources begin to be applied to create or repurpose implementation drivers. These are instrumental concerns (Fixsen, Blase, et al., 2009). They require methodical examination and adjustment of implementation drivers (see Figure 2). These concerns can and should also be just as systematically addressed when an organization seeks to improve delivery of current services (Bertram, Schaffer, et al., in press).
Participants in installation-stage activities create or repurpose competency and organizational drivers for high-fidelity implementation and improved client outcomes. Participants may include not only administrators and staff from the host organization but also purveyors or consultants versed in the practice model and in implementation frameworks, as well as partners from other service systems and even representatives from the community and future or current consumers. Installation activities move beyond broad consideration and planning to systematically addressing each infrastructure component of the implementation drivers (Bertram, Blase, & Fixsen, 2013).
Specifically, model-pertinent criteria for staff selection and training should be developed (see previous discussion of these components in the section on competency drivers). The necessary formats, frequency, and focus for coaching should be described in the program’s practice manual, policies, and procedures. Data systems, policy, and procedural protocols should be developed for measuring fidelity. If the focus and actions of other service providers or systems may compromise program fidelity and client outcomes, then explicit cross-agency or systems protocols may need to be created through purposeful systems-level intervention by program administrators. A classic example of this need for explicit cross-systems protocol frequently occurs in the response of multiple agencies and systems to reports of child sexual abuse (Bertram, 2008; Bertram, King, et al., in press; Sternberg et al., 2001).
These and other activities do require time, attention, expenditures, and resource utilization, but they are essential. By focusing program installation on tackling instrumental resource concerns and on developing or repurposing the framework of implementation drivers (see Figure 2), an organization will be less likely to suffer the all too common error of inserting a new or refined practice model into an existing infrastructure and then producing poor fidelity and disappointing client outcomes (Bertram, Blase, Shern, et al., 2011).
Initial Implementation Stage
Initial implementation of a new program or practice model is an inherently difficult and complex period for all staff. New practices require new actions to accomplish different tasks that may not yet be fully understood. In the stage of initial implementation, excitement about new service delivery meets fear of change, unexpected uncertainties, and investment in the status quo. This is an awkward period of high expectations, mixed with both anticipated and unexpected challenges and frustrations. To survive and thrive, the program must rely upon data systems, as well as its PEP and PIP information cycles, so it can learn from mistakes. In this stage, programs improve if they employ adaptive leadership strategies, systematically and systemically addressing challenges to fidelity rather than technically addressing each challenge in a piecemeal manner (Bertram, Blase, Shern, et al., 2011).
Many constraining factors may emerge as new practices at every level of the organization are initially implemented, delivered to consumers, and experienced by participants within the organization and in other systems. People, organizations, and systems tend to become comfortable with or accustomed to the status quo. In the stage of initial implementation, concerns and uncertainty about changes in roles, responsibilities, and practices should be expected. Though there may be much outward enthusiasm during the exploration and installation stages, many staff at all levels will not fully embrace the organizational changes necessary to effectively implement the practice model. This very human tendency to be uncomfortable amidst change can combine with the natural challenge and complexity of implementing something new to test confidence in the decision to improve or implement a program. However, steady and adaptive leadership that normalizes these experiences and challenges, combined with refined and increased coaching and support for practitioners that activates and applies PIP and PEP information cycle problem solving, will overcome the awkwardness of this stage while changing organizational culture and climate (Bertram, Blase, & Fixsen, 2013; Bertram, Blase, Shern, et al., 2011; Bertram, Schaffer, et al., in press; Kimber et al., 2012).
Full Implementation Stage
Programs become inefficient, poorly executed, ineffective, and unsustainable when the host organization moves into full program implementation without developing, repurposing, and working through the framework of implementation drivers (see Figure 2). When model-pertinent implementation drivers are established, tested, and adjusted during the installation and initial implementation stages, full implementation that achieves improved client outcomes with fidelity in a sustainable manner is more likely to occur (Fixsen, Blase, et al., 2009).
The time required to emerge from the awkwardness or uncertainties of initial implementation to full implementation will vary from setting to setting and practice to practice. Depending upon setting and practice, it may be very useful for the host organization to start small in the initial implementation activities, learn through PEP and PIP information cycles, make adjustments, then scale up. When most practitioners can routinely provide the new practices with good fidelity, they will more likely achieve client outcomes like those attained in research or in other service settings. Desired fidelity and outcomes are sustained when implementation drivers are accessible and functioning well using model-pertinent information (Bertram, Blase, & Fixsen, 2013; Bertram, Blase, Shern, et al., 2011).
Implications and Opportunities
Careful consideration of and attention to NIRN’s intervention and implementation frameworks can improve organization climate and culture, staff competence and confidence, and program fidelity and client outcomes. Thinking through NIRN frameworks can also support a common focus for integration of theory, practice, policy, administration, and research courses in social work curricula. If thoroughly understood and applied in service organizations and in academia, these implementation frameworks can help bridge the science-to-service gap that some have called a chasm (Institute of Medicine, 2006). Social work administrators and educators should frequent, learn from, and contribute to forums provided by the National Implementation Research Network at http://nirn.fpg.unc.edu, as well as the biennial Global Implementation Conference and its young progeny, the Global Implementation Initiative, via http://www.implementationconference.org. By bringing discourse and lessons from these and other forums into our social work teaching, research, and practice, we can provide a context for exploring and beginning to resolve the differing and sometimes conflicted perspectives or misperceptions about evidence-based practice and meeting client needs, about program development and organization, and about supporting social work practitioner competence, confidence, and creativity.
References
Aarons, G. A., & Sawitzky, A. C. (2006). Organizational culture and climate and mental health provider attitudes toward evidence-based practice. Psychological Services, 3(1), 61–72.
Addis, M. E., Wade, W. A., & Hatgis, C. (1999). Barriers to dissemination of evidence-based practices: Addressing practitioners’ concerns about manual-based psychotherapies. Clinical Psychology: Science and Practice, 6(4), 430–441.
Ager, A., & O’May, F. (2001). Issues in the definition and implementation of “best practice” for staff delivery of interventions for challenging behavior. Journal of Intellectual & Developmental Disability, 26(3), 243–256.
Barwick, M. (2011). Masters level clinician competencies in child and youth behavioral health care. Report on Emotional and Behavioral Disorders in Youth, 11(2), 32–39.
Bellamy, J. L., Bledsoe, S. E., Mullen, E. J., Fang, L., & Manuel, J. (2008). Agency-university partnership for evidence-based practice in social work. Journal of Social Work Education, 44(3), 55–76.
Bertram, R. (2008). Establishing a basis for multi-system collaboration: Systemic team development. Journal of Sociology and Social Welfare, 35(4), 9–27.
Bertram, R., Blase, K., & Fixsen, D. (2013). Improving programs and outcomes: Implementation frameworks 2013. Manuscript submitted for publication.
Bertram, R. M., Blase, K., Shern, D., Shea, P., & Fixsen, D. (2011). Implementation opportunities and challenges for prevention and health promotion initiatives. Alexandria, VA: National Association of State Mental Health Directors.
Bertram, R., King, K., Geary, W. R., & Nutt, J. (in press). Program implementation: An examination of the interface of curriculum and practice. Journal of Evidence-Based Social Work.
Bertram, R., Schaffer, P., & Charnin, L. (in press). Changing organization culture: Data driven participatory evaluation and revision of wraparound implementation. Journal of Evidence-Based Social Work.
Bertram, R. M., Suter, J., Bruns, E., & O’Rourke, K. (2011). Implementation research and wraparound literature: Building a research agenda. Journal of Child and Family Studies, 20(6), 713–726.
Blase, K., Van Dyke, M., Fixsen, D., & Bailey, F. W. (2012). Implementation science: Key concepts, themes, and evidence for practitioners in educational psychology. In B. Kelly & D. Perkins (Eds.), Handbook of implementation science for psychology in education: How to promote evidence based practice (pp. 13–36). London: Cambridge University Press.
Bronfenbrenner, U. (1979). The ecology of human development: Experiments by nature and design. Cambridge, MA: Harvard University Press.
Daly, A. J., & Chrispeels, J. (2008). A question of trust: Predictive conditions for adaptive and technical leadership in educational contexts. Leadership and Policy in Schools, 7, 30–63.
D’Aprix, A. S., Dunlap, K. M., Abel, E., & Edwards, R. L. (2004). Goodness of fit: Career goals of MSW students and the aims of the social work profession in the United States. Social Work Education, 23(3), 265–280.
Denton, C. A., Vaughn, S., & Fletcher, J. M. (2003). Bringing research-based practice in reading intervention to scale. Learning Disabilities Research & Practice, 18(3), 201–211.
Eccles, M. P., & Mittman, B. S. (2006). Welcome to Implementation Science. Implementation Science, 1(1). doi:10.1186/1748-5908-1-1
Fixsen, D. L., Blase, K. A., Naoom, S. F., & Wallace, F. (2009). Core implementation components. Research on Social Work Practice, 19(5), 531–540.
Fixsen, D. L., Naoom, S. F., Blase, K. A., Friedman, R. M., & Wallace, F. (2005). Implementation research: A synthesis of the literature (FMHI Publication No. 231). Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute, The National Implementation Research Network.
Green, R. G., Bretzin, A., Leininger, C., & Stauffer, R. (2001). Research learning attributes of graduate students in social work, psychology, and business. Journal of Social Work Education, 37(2), 333–341.
Hardcastle, D. A., & Bisman, C. D. (2003). Innovations in teaching social work research. Social Work Education, 22(1), 31–43.
Heifetz, R. A., & Laurie, D. L. (1997). The work of leadership. Harvard Business Review, 75(1), 124–134.
Heifetz, R. A., & Linsky, M. (2002). Leadership on the line. Boston, MA: Harvard Business School Press.
Henggeler, S. W., Schoenwald, S. K., Borduin, C. M., Rowland, M. D., & Cunningham, P. B. (2009). Multisystemic therapy for anti-social behavior in children and adolescents (2nd ed.). New York, NY: Guilford Press.
Howard, M., McMillen, C. J., & Pollio, D. E. (2003). Teaching evidence-based practice: Toward a new paradigm for social work education. Research on Social Work Practice, 13, 234–259.
Institute of Medicine Committee on Crossing the Quality Chasm: Adaptation to Mental Health and Addictive Disorders. (2006). Improving the quality of health care for mental and substance use conditions. Washington, DC: The National Academies Press.
Kimber, M., Barwick, M., & Fearing, G. (2012). Becoming an evidence-based service provider: Staff perceptions of organizational change. Journal of Behavioral Health Services and Research, 39(3), 314–332.
Manuel, J. I., Mullen, E. J., Fang, L., Bellamy, J. L., & Bledsoe, S. E. (2009). Preparing social work practitioners to use evidence-based practice: A comparison of experiences from an implementation project. Research on Social Work Practice, 19(5), 613–627.
Mullen, E. J., Bledsoe, S. E., & Bellamy, J. L. (2008). Implementing evidence-based social work practice. Research on Social Work Practice, 18(4), 325–338.
Roberts, A. R., & Yeager, K. R. (Eds.). (2004). Evidence-based practice manual: Research and outcome measures in health and human services. New York, NY: Oxford University Press.
Rubin, A., & Parrish, D. (2007). Views of evidence-based practice among faculty in Master of Social Work programs: A national survey. Research on Social Work Practice, 17(1), 110–122.
Rubin, D., Robinson, B., & Valutis, S. (2010). Social work education and student research projects: A survey of program directors. Journal of Social Work Education, 46(1), 39–55.
Sackett, D. L., Rosenberg, W. M. C., Gray, J. A. M., Haynes, R. B., & Richardson, W. S. (1996). Evidence based medicine: What it is and what it isn’t. It’s about integrating individual clinical expertise and the best external evidence. British Medical Journal, 312, 71–72.
Schoenwald, S. K., Brown, G. L., & Henggeler, S. W. (2000). Inside multi-systemic therapy: Therapist, supervisory, and program practices. Journal of Emotional and Behavioral Disorders, 8, 113–127.
Schoenwald, S. K., Sheidow, A. J., & Letourneau, E. J. (2004). Toward effective quality assurance in evidence-based practice: Links between expert consultation, therapist fidelity, and child outcomes. Journal of Clinical Child and Adolescent Psychology, 33, 94–104.
Smith, C. A., Cohen-Callow, A., Hall, D. M., & Hayward, R. A. (2007). Impact of a foundation-level MSW research course on students’ critical appraisal skills. Journal of Social Work Education, 43(3), 481–495.
Sternberg, K. J., Lamb, M. E., Orbach, Y., Esplin, P. W., & Mitchell, S. (2001). Use of a structured investigative protocol enhances young children’s responses to free-recall prompts in the course of forensic interviews. Journal of Applied Psychology, 86(5), 997–1005.
Straus, S. E., Glasziou, P., Richardson, W. S., & Haynes, R. B. (2011). Evidence-based medicine: How to practice and teach it (4th ed.). New York, NY: Churchill Livingstone.
University of Houston Graduate College of Social Work. (2013, April 5–6). Bridging the research and practice gap: A symposium on critical considerations, successes, and emerging ideas. Retrieved June 2013, from http://www.sw.uh.edu/news/events/05292012-bridging%20the%20gap%202013/bridgingTheGapBooklet.pdf/
Waters, J. T., Marzano, R. J., & McNulty, B. (2003). Balanced leadership: What 30 years of research tells us about the effect of leadership on student achievement. Aurora, CO: Mid-Continent Research for Education and Learning.
Winter, S. G., & Szulanski, G. (2001). Replication as strategy. Organization Science, 12(6), 730–743.
Baker, L. R., Hudson, A., & Pollio, D. E. (2011). Assessing student perception of practice evaluation knowledge in introductory research methods. Journal of Social Work Education, 47(3), 555–564.
Daniels, A., England, M., Page, A. K., & Corrigan, J. (2005). Crossing the quality chasm: Adaptation to mental health and addictive disorders. International Journal of Mental Health, 34(1), 5–9.
Gambrill, E. (2007). Views of evidence-based practice: Social workers’ code of ethics and accreditation standards as guides for choice. Journal of Social Work Education, 43, 447–462.
Hall, G. E., & Hord, S. M. (2006). Implementing change: Patterns, principles and potholes (2nd ed.). Boston, MA: Allyn and Bacon.
Henggeler, S. W., Melton, G. B., Brondino, M. J., Scherer, D. G., & Hanley, J. H. (1997). Multisystemic therapy with violent and chronic juvenile offenders and their families: The role of treatment fidelity in successful dissemination. Journal of Consulting and Clinical Psychology, 65(5), 821–833.
Henggeler, S. W., Pickrel, S. G., & Brondino, M. J. (1999). Multisystemic treatment of substance-abusing and -dependent delinquents: Outcomes, treatment fidelity, and transportability. Mental Health Services Research, 1(3), 171–184.
Henggeler, S. W., Schoenwald, S. K., Liao, J. G., Letourneau, E. J., & Edwards, D. L. (2002). Transporting efficacious treatments to field settings: The link between supervisory practices and therapist fidelity in MST programs. Journal of Clinical Child and Adolescent Psychology, 31(2), 155–167.
Howard, M., Allen-Meares, P., & Ruffolo, M. C. (2007). Teaching evidence-based practice: Strategic and pedagogical recommendations for schools of social work. Research on Social Work Practice, 17(5), 561–568.
Huser, M., Cooney, S., Small, S., O’Connor, C., & Mather, R. (2009). Evidence-based program registries. (Wisconsin Research to Practice Series). Madison, WI: University of Wisconsin Madison/Extension.
Jensen, P. S., Weersing, R., Hoagwood, K. E., & Goldman, E. (2005). What is the evidence for evidence-based treatments? A hard look at our soft underbelly. Mental Health Services Research, 7(1), 53–74.
Marsiglia, F. J., & Booth, J. (2013). Cultural adaptations of interventions in real practice settings. Manuscript submitted for publication.
McBeath, B., & Austin, M. J. (2013). The organizational context of research minded practitioners: Challenges and opportunities. Manuscript submitted for publication.
Metz, A., Bartley, L., Ball, H., Wilson, D., Naoom, S., & Redmond, P. (2013). Active implementation frameworks for effective service delivery: Catawba County Child Wellbeing Project. Manuscript submitted for publication.
Mowbray, C. T., Holter, M. C., Teague, G. B., & Bybee, D. (2003). Fidelity criteria: Development, measurement, and validation. American Journal of Evaluation, 24, 315–340.
Mullen, E. J., Bellamy, J. L., Bledsoe, S. E., & Francois, J. (2007). Teaching evidence-based practice. Research on Social Work Practice, 17(5), 574–582.
Proctor, E. K., Knudsen, K. J., Fedoravicus, N., Hovmans, P., Rosen, A., & Perron, B. (2007). Implementation of evidence-based practice in community behavioral health: Agency director perspectives. Administration and Policy in Mental Health and Mental Health Services Research, 34(5), 479–488.
Rubin, A., Robinson, B., & Valutis, S. (2010). Social work education and student research projects: A survey of program directors. Journal of Social Work Education, 46(1), 39–55.
Schoenwald, S. K., Chapman, J. E., Sheidow, A. J., & Carter, R. E. (2009). Long-term youth criminal outcomes in MST transport: The impact of therapist adherence and organizational climate and structure. Journal of Clinical Child and Adolescent Psychology, 38(1), 91–105.
Thyer, B. (2013). Preparing current and future practitioners to integrate research in real practice settings. Manuscript submitted for publication.
Thyer, B. A., & Pignotti, M. (2011). Clinical social work and evidence-based practice: An introduction to the special issue. Clinical Social Work Journal, 39(4), 325–327.
Webster-Stratton, C., & Herman, K. C. (2010). Disseminating Incredible Years Series early intervention programs: Integrating and sustaining services between school and home. Psychology in the Schools, 47, 36–54.
Webster-Stratton, C., Reinke, W. M., Herman, K. C., & Newcomer, L. (2011). The Incredible Years Teacher Classroom Management Training: The methods and principles that support fidelity of training delivery. School Psychology Review, 40(4), 509–529.