Culturally Responsive Evaluation as a Form of Critical Qualitative Inquiry
Summary and Keywords
As a form of applied research, program evaluation is concerned with determining the worth, merit, or value of a program or project using various research methods. Over the past 20 years, the field of program evaluation has seen an expansion in the number of approaches deemed useful in accomplishing the goals of an evaluation. One of the newest approaches to the practice of evaluation is culturally responsive evaluation (CRE). Practitioners of CRE draw from a “responsive approach” to evaluation that involves being attuned and responsive not only to the program itself, but also to its larger cultural context and the lives and experiences of program staff and stakeholders. CRE views culture broadly as the totality of shared beliefs, behaviors, values, and customs socially transmitted within a group and which shapes group members’ world view and ways of life. Further, with respect to their work, culturally responsive evaluators share commitments similar to those of scholars of critical qualitative inquiry, including a belief in moving inquiry (evaluation) beyond description to intervention in the pursuit of progressive social change, as well as positioning their work as a means by which to confront injustices in society, particularly the marginalization of people of color. Owing to these beliefs and aims, culturally responsive evaluators tend to lean toward a more qualitative orientation, both epistemologically and methodologically. Thus, when taken up in practice, culturally responsive evaluation can be read as a form of critical qualitative inquiry.
Culturally Responsive Evaluation Overview
As a form of applied research, program evaluation is concerned with determining the worth, merit, or value of a program or project using various research methods (Scriven, 1991). While basic research is concerned with generating and testing theories or contributing to knowledge production without necessarily being linked to action, evaluation employs similar methods of collecting and analyzing data with the explicit and exclusive intent of improving a program’s effectiveness or informing programmatic changes or improvements (Patton, 2002). Over the past 20 years, the field of program evaluation has seen an expansion in the number of evaluation approaches deemed useful in accomplishing an evaluation’s goals. One of the newest approaches to the practice of evaluation is culturally responsive evaluation (CRE).
CRE views culture broadly as the totality of shared beliefs, behaviors, values, and customs socially transmitted within a group, and which shapes group members’ world view and ways of life. Centering culture in an evaluation requires close attention to the material realities of both program staff and participants. In addition, practitioners of CRE draw from the “responsive” tradition (Stake, 1976) in evaluation, which involves being attuned and responsive not only to the program itself but also to its larger cultural context and the lives and experiences of program staff and stakeholders. A final aspect of culturally responsive evaluation is its adherents’ commitment to using evaluation as a means of improving the lives of minoritized and marginalized communities. Consequently, in coming to an understanding of a program, particularly one intended to benefit members of marginalized communities, that understanding must be informed by thoughtful consideration of the individual, institutional, and societal processes that lead to marginalization.
Given these aims and commitments, culturally responsive evaluators share similarities with scholars who engage in critical inquiry in that they believe in moving inquiry (evaluation) beyond description to intervention in the pursuit of progressive social change, as well as positioning their work as a means of confronting injustices in society, particularly the marginalization of people of color. As such, CRE tends to be adopted by evaluators who have experienced and/or studied marginalization, as well as those who have developed a sophisticated understanding of the importance of intersectional analyses of the ways in which individuals’ experiences are shaped by the complex interactions between their race, ethnicity, gender identity, sex, language, and a host of other demographic variables that significantly inform life outcomes.1 Thus, CRE can be understood as a form of critical inquiry in which the determination of the merit or worth of a program is inherently tied to an evaluator’s ability to be responsive to the multifaceted ways in which culture operates within a program (and within its stakeholders) and to choose culturally appropriate methodologies for accurately capturing a program’s cultural manifestations.
Framing Culturally Responsive Evaluation as Critical Qualitative Inquiry
In answering the question, “What is critical qualitative inquiry?,” Denzin (2015) connects critical qualitative inquiry (CQI) to “the pursuit of social justice within a transformative paradigm” that “challenges prevailing forms of inequality, poverty, human oppression, and injustice” (p. 31). This paradigm, he argues, is “firmly rooted in a human rights agenda” and “requires an ethical framework that is rights and social justice based” (p. 31). Further, this pursuit “encourages the use of qualitative research for social justice purposes, including making such research accessible for public education, social policy-making, and community transformation” (p. 32).
We argue that culturally responsive evaluators position their work similarly. Indeed, CRE can be understood, framed, or taken up as a form of critical inquiry that draws heavily on “qualitative research [methods] for social justice purposes” and which results in the production of guidance and recommendations specifically tailored to be publicly accessible, to inform social and educational policymaking, and to facilitate transformation for the members of the communities whom the program serves. CRE adherents believe that its framework (resultant from the synthesis of the three core terms that constitute CRE) offers strong theoretical grounding for their evaluation practice. Further, when combined with their critical orientation and perspectives, and commitments to social justice, the theoretical grounding in which CRE is situated manifests a practice that heavily favors qualitative methods of inquiry as well as a general ethos or orientation toward qualitative ways of knowing.
Scholars of culturally responsive evaluation and critical qualitative inquiry share a belief that those who do not possess a sensitivity to or appreciation for cultural differences (particularly between evaluators/researchers and programs’ stakeholders/participants) can produce results that are inherently flawed. Consequently, when evaluations are designed, they must be done so in a manner that is responsive to stakeholders’ cultural values and beliefs, including what they deem to be useful “data” that can stand as “evidence” they would be willing to accept in the final judgment of a program, its “success,” and their experiences with(in) it. As such, culturally responsive evaluators and critical qualitative researchers openly disavow evaluation processes and inquiry methods that are thought to be “culture free,” and instead draw upon both methods and methodologies capable of shedding greater light on the ways in which culture informs the lifeworlds of the program and its stakeholders.
In taking up an evaluation practice that explicitly acknowledges the importance of documenting and reflecting the cultural dimensions of a program (as manifested in program features and components as well as situated within the stakeholders themselves), culturally responsive evaluators frequently draw upon a critical and constructivist orientation in which the process of coming to know requires long-term engagement with others in order to better understand how they view the world (Glesne, 2006). Owing to these beliefs and aims, culturally responsive evaluators tend to lean toward a more qualitative orientation, both epistemologically and methodologically. Thus, when taken up in practice, culturally responsive evaluation can be read as a form of critical qualitative inquiry.
It is worth noting, however, that though we seek to position culturally responsive evaluation as a form of critical qualitative inquiry, many evaluation approaches in addition to CRE can be and are executed through the deployment of mixed methods and thus necessitate choosing survey instruments, content knowledge assessments, rubrics, or other instruments to both determine and examine outcomes as well as to explore potential causality. Nonetheless, owing to a belief (shared by both CRE and CQI scholars who operate within the critical intellectual tradition) in knowledge as situational and context-bound (Hopson, 2009), in the importance of dialogue between evaluators and stakeholders, in arriving at an interpretative understanding of stakeholders’ experiences, and in the importance of understanding how sociocultural processes and structures mediate those experiences (Prasad, 2005), we know of no culturally responsive evaluations that have been conducted solely (or even primarily) through the utilization of quantitative methodologies.
The Development of Culturally Responsive Evaluation
As a unique approach to the practice of evaluation, the roots of culturally responsive evaluation can be found in the work of African American evaluators during the 1930s, 1940s, and 1950s (Frazier-Anderson & Bertrand Jones, 2015; Hood, 2005; Hood, Hopson, & Kirkhart, 2015; Hopson & Hood, 2005). These evaluators, including Aaron Brown, Reid E. Jackson, Leander Boykin, and Rose Butler Brown, laid the groundwork for what would later become CRE by raising important questions about the need for evaluations that took into consideration the unique experiences and circumstances of African Americans, particularly evaluations conducted in schools for African American students. Further, their works reveal some of the earliest calls for evaluations to reflect the voices and perspectives of multiple stakeholders.
Modern-day evaluators and educational researchers would also make significant contributions to the approach, though several years before its formal conceptualization. Those evaluation scholars who are credited with the birth of culturally responsive evaluation as a recognized approach in the field of evaluation (e.g., Hank Frierson, Stafford Hood, Rodney K. Hopson, and Karen Kirkhart) drew heavily from the works of Ralph Tyler (considered the father of modern evaluation) and Robert Stake (considered the progenitor of responsive evaluation). They would also draw from the work of contemporary educational scholars such as Gloria Ladson-Billings and Carol Lee, whose work on culturally responsive pedagogy was vital in the formation of CRE, as well as from Edmund Gordon and Sylvia Johnson, whose work around the need for and development of culturally sensitive educational assessments would play a significant role (Hood, Hopson, & Kirkhart, 2015).
The work of Stafford Hood and colleagues in the early 1990s began the process by which the aforementioned works would be drawn together to arrive at culturally responsive evaluation as a unique orientation to evaluation work. They argued for the need for evaluation designs and evaluation practice that fully considered the role of culture in evaluation, as well as elevating the importance of shared lived experiences between a program’s stakeholders and members of the evaluation team. The articulated framework that currently guides the work of CRE adherents was published in two iterations of the National Science Foundation’s User-Friendly Handbook for Project Evaluation (Frierson, Hood, & Hughes, 2002; Frierson, Hood, Hughes, & Thomas, 2010). In those handbooks, Frierson and colleagues attempted to flesh out culturally responsive evaluation’s theoretical parameters as articulated in nine steps that constitute a typical evaluation’s process, providing a road map for those who wish to employ the approach.
In the past decade and a half, CRE as a distinct approach to evaluation has witnessed significant growth not only in the number of its practitioners but also in the number of published evaluation reports, evaluation guides, and refereed journal articles that make up its growing literature base. This literature expansion stems from the growing numbers of researchers, evaluators, and consultants who position their evaluation efforts as a vital response to the nation’s rapidly changing and increasingly diverse demographics. Adherents and practitioners argue that by understanding their clients’ cultural values and beliefs, as well as tending to the sociopolitical and cultural contexts in which programs are situated, they better position themselves to both advocate for the stakeholders they serve (often members of marginalized communities) and provide them with culturally responsive evaluation services in hopes of improving their lives and circumstances.
Presently, culturally responsive evaluation enjoys prominent status in important national organizations such as the American Evaluation Association, which, in addition to serving as a champion for the fundamental role cultural context plays in evaluation (e.g., see AEA’s 2011 Statement on Cultural Competence in Evaluation), is also the sponsor of the Graduate Education Diversity Internship Program, which seeks to increase the number of graduate students of color and from other underrepresented groups in the evaluation field. In addition, the study of both culturally responsive evaluation and culturally responsive educational assessments has taken up permanent residency in institutions of higher education such as the Center for Culturally Responsive Evaluation and Assessment, which has two homes: one in the College of Education at the University of Illinois at Urbana-Champaign and another at Dublin City University in Ireland, a dual location that highlights CRE’s appeal to evaluators both in the United States and overseas.
The Culturally Responsive Evaluation Process: Nine Steps
Though adopting a culturally responsive evaluation approach can be intellectually challenging and time-consuming, scholars have provided substantial direction for interested evaluators. For example, in their guide to conducting culturally responsive evaluations, Frierson et al. (2010) provide a useful outline of the nine-step process necessary for centering culture in an evaluation: (1) preparing for the evaluation, (2) engaging stakeholders, (3) identifying the purpose and intent of the evaluation, (4) framing the right questions, (5) designing the evaluation, (6) selecting and adapting instrumentation, (7) collecting the data, (8) analyzing the data, and (9) disseminating and utilizing the results. Hood et al. (2015) note that while “CRE does not consist of a unique series of steps set apart from other evaluation approaches, the details and distinction of CRE lie in how the stages of the evaluation are carried out” (p. 287).
Thus, each of the nine steps, while essential to any evaluation, carries special considerations when culture is centered. Following are brief explanations of each of the steps, including a discussion of how culture and qualitative methods of inquiry are engaged in each step (see Figure 2).
Step 1: Preparing for the Evaluation
In CRE, preparing for the evaluation is one of several steps that is significantly informed by critical qualitative ways of knowing. Preparation is the most vital step for any evaluator, regardless of his or her approach; however, in CRE, the preparation phase is an ideal time to begin engaging deeply with questions about self and others within the sociocultural context of the program to be evaluated. For example, evaluators are encouraged to begin by reflecting on their own values, assumptions, and biases, particularly as they pertain to the program, its staff, and its stakeholders. Next, spending extended time (through site visits, interviews, focus groups, etc.) collecting background information on the program with an eye toward understanding cultural context and cultural norms is vital.
Another essential step in preparing for a CRE is assessing the extent to which there is cultural congruence or shared lived experience between the program and the evaluation team. Frierson et al. (2010) note that while racial or ethnic similarity to participants should be an important consideration, one should not assume that these similarities necessarily equate to cultural responsiveness. As program context is examined, evaluators can then discern from this information who may be well suited to serve on the evaluation team. Regardless of how diverse the team’s membership may be, evaluators engaging in a CRE should demonstrate a clear commitment to being responsive to the program’s cultural context by engaging in continuous examination of, and responsiveness to, all of the program’s constituent components.
Step 2: Engaging Stakeholders
As it is in critical qualitative inquiry, relationship-building is critical in CRE. Evaluators committed to CRE’s social justice aims cannot fully take up evaluation as advocacy without coming to understand and value the stakeholders connected to a program including (and especially) those whom the program is intended to benefit. In engaging with stakeholders, evaluators are encouraged not only to seek out multiple and varied perspectives (ensuring that individuals from all sectors are heard), but also to think through issues of power that are likely to play out during the evaluation process. Much like their colleagues who engage in critical qualitative inquiry, a culturally responsive evaluator’s ability to seek possible avenues for mitigating the impact of power dynamics (having various stakeholders serve on advisory boards or findings-review committees, etc.) can have significant implications for the evaluation.
Step 3: Identifying the Purposes and Intent of the Evaluation
From a technical perspective, identifying the purpose and intent of the evaluation involves several tasks: determining what kind of evaluation is to be conducted (i.e., process evaluation, progress evaluation, or summative evaluation), ascertaining all of the program’s various stakeholder groups (both internal and external), assessing whether the program’s goals and objectives are culturally appropriate given the target population, and examining the program’s various component parts and the connections between them, as well as unearthing the means by which the program’s implementation and subsequent impact will be evaluated. Culturally responsive evaluators working to identify an evaluation’s purposes should engage in focused discussions (with project staff, decision makers, and those benefiting from the program) to gain an understanding of what each group hopes to gain from the evaluation. Once again, accurately identifying and clarifying an evaluation’s purpose and intent, based on the perspectives of multiple stakeholders, will depend heavily on the evaluation team’s efforts to build relationships with appropriate parties.
Step 4: Framing the Right Questions
After undertaking the initial work of engaging stakeholders and identifying the purposes of the evaluation, the next step in a CRE is to determine the evaluation questions intended to guide the process. Importantly, a culturally responsive evaluator must be guided by questions that are of significance to stakeholders at all levels. The development of evaluation questions is about deciding what is most important about the program, including what initiatives and desired outcomes are valued above others. And much like the qualitative researcher whose work may be funded by an external organization, the culturally responsive evaluator may need to respectfully tack back and forth between the needs of the program and the wishes of the evaluation’s funders. This work can be both challenging and highly political.
Beyond ensuring that the evaluation questions are crafted to meet the needs of multiple individuals invested in the program, during the question-framing phase evaluators also must address the issue of what will be considered evidence. Given that different lived experiences engender different epistemologies, determining what forms of evidence stakeholders will accept as credible is vital. Deciding upon credible forms of evidence can ensure that appropriate and culturally responsive methods of inquiry and assessment are used for gathering data about the program, leading to findings that will both resonate with and be taken up by program stakeholders. Moreover, the data collection plan must ensure that data will be collected from various stakeholders whose judgments are trusted by other stakeholders. Once evaluation questions have been developed, they must then be vetted to confirm that stakeholder interests have been incorporated. Here again, the use of site visits, feedback sessions, and/or panel discussions can assist in ensuring the evaluation’s responsiveness.
Step 5: Designing the Evaluation
Closely related to framing the evaluation questions is designing the evaluation, which must be guided by a clear understanding of the type of evaluation to be conducted and, most important, by determining what types of data are required to fully answer the evaluation questions; how, when, and from whom those data will be collected; and who will be responsible for collection. In addition, decisions need to be made regarding how the data will be analyzed and reported in ways that are responsive to the evaluation questions and culturally informed by those individuals who will serve as the sources of data.
During the evaluation design phase, evaluators also may draw upon cultural understanding acquired during the previous steps to think through how what has been learned might inform their analyses. This includes decisions around appropriate analytical processes and choices, such as how data might be disaggregated to answer questions about categories that are culturally relevant for program participants, or aggregated to protect the confidentiality of groups of participants who may feel vulnerable or marginalized within the context of the program. The sensitivity toward, reflection upon, and anticipation of how data collection and analysis may impact the lived experiences of individuals connected to a program is a decidedly qualitative way of thinking/being in the world. Consequently, evaluators engaging in CRE may need to ask themselves if additional time should be built into data collection timelines to make space for more relationship-building with stakeholders prior to collecting data from them.
Step 6: Selecting and Adapting Instrumentation
In the qualitative realm, selecting and adapting instruments might involve the creation of interview protocols or the development of observation protocols or guidelines. From a CRE perspective, the evaluation team must create, review, and pilot-test instruments and protocols keeping culture front and center, checking for sources of bias in both the language and content of instruments. Importantly, qualitative researchers familiar with more participatory forms of inquiry will be particularly adept at exploring the instrument development process for opportunities where program staff and/or participants can make important contributions, as well as identifying and providing the appropriate guidelines or necessary training for their successful participation in the data collection and analysis processes.
Once instruments and protocols have been created and potentially piloted, feedback should also be solicited from stakeholders. If concerns are raised or adaptations warranted (e.g., the need for translating a survey or interview protocol into multiple languages to capture the experiences of more program participants), they should be addressed, and instruments reviewed again for cultural sensitivity before being deployed for data collection.
Step 7: Collecting the Data
Once the evaluation design is in place, and methods have been decided upon, the data collection process must be engaged with sensitivity and responsiveness to the program’s and participants’ cultural context and cultural norms. For example, evaluators should look for naturally occurring opportunities to meet their data collection needs in order to minimize any potential challenges stakeholders might encounter. It may be necessary for members of the evaluation team to travel to community centers or churches after working hours to eliminate any travel issues that might prevent participants from contributing to the process.
Programs’ and participants’ cultures are often revealed through participation in program activities and engagements. Consequently, as one of the most vital instruments for generating data in the evaluation process, culturally responsive evaluators frequently intermingle with and solicit information (i.e., data) from a variety of stakeholders, often through interviewing and observing. As “the data collection instrument,” evaluators engaging in CRE must remain vigilantly aware of their own cultural positioning in the communities or settings in which they are collecting data, including how their presence affects those who are being interviewed or observed. To ensure that the data collected are trustworthy, an evaluator acting from a CRE perspective may determine that additional time is needed at a site or with a particular group of participants in order to build trust and rapport, and also to better understand how to act responsively to the culture of participants.
Step 8: Analyzing the Data
While some evaluation approaches position data analysis as a neutral act, culturally responsive evaluators must constantly account for cultural context as they make meaning of the data collected. As Hood et al. (2015) remind us, “Data do not speak for themselves; they are given voice by those who interpret them” (p. 295). To address potential blind spots in interpreting data, culturally responsive evaluation teams should engage evaluators (or researchers) who can serve as cultural interpreters and invite them to participate in a collaborative interpretation process. Indeed, many evaluators engaging in CRE speak to the benefits of employing the perspectives of multiple program stakeholders in the interpretation process. In seeking stakeholders’ personal, culturally informed insights, evaluators are not only able to unearth nuances regarding a program’s benefits (and challenges) in new and generative ways but are also able to tap into stakeholders’ understandings of the program to reveal unintended consequences (both positive and negative) that might otherwise go unnoticed.
Step 9: Disseminating and Using the Results
Within the context of culturally responsive evaluators’ larger social justice commitments, which are often shared by adherents to critical forms of qualitative inquiry, the ultimate goal of the evaluation process is to create positive change in stakeholders’ lives. Thus, having ensured that the evaluation’s findings have been vetted by the appropriate parties, and that the interpretations constructed accurately map onto the experiences of the program and its participants, the ninth and final step of the approach requires creating multiple mechanisms to increase the evaluation’s chances for tangible impact. Several considerations, such as how to best disseminate the results in ways that are legible for multiple stakeholders as well as tailoring recommendations for each group in ways they will find actionable, must be discussed. Indeed, given the scope of individuals likely to be touched by the findings and their implications, concerted efforts must be invested in developing culturally sensitive forms of re-presentation which may require multiple formats (full report, executive summary, highlights brochure, etc.) as well as their presentation in multiple venues (formal presentations in the program’s offices or funder’s headquarters, informal community forums, etc.). An evaluation report deemed culturally responsive, then, is one in which stakeholder groups can see themselves and their experiences reflected in it, as well as one that deepens their understanding of the importance of their role in ensuring that the program’s goals and objectives are ultimately attained.
Though the framework for conducting culturally responsive evaluation (Frierson et al., 2002; Frierson et al., 2010) does not have a tenth step, the ethical considerations that must be taken up by the evaluation team represent a critical stance toward the entire evaluation process. Much like their critical qualitative colleagues, culturally responsive evaluators are guided by an ethical code that informs decision-making throughout their inquiry. Both groups share an underlying commitment to ensuring that the re-presentation of the phenomenon under investigation reflects an authentic understanding.
That understanding derives its authenticity not only from the relationship-building (characterized by a mutual respect and connectedness between the evaluation team and program stakeholders) that occurs throughout the process but also from careful attention to the power dynamics (present in all forms of inquiry) that often motivate the creation of multiple opportunities for stakeholders to contribute to the process, from start to finish. Further, authentic understandings accurately capture how a program is implemented—a re-presentation that is not too heavily weighed down by the lenses of the evaluation team members (be they theoretical or methodological). Moreover, the re-presentation of their understanding, as captured in the evaluation’s findings or recommendations, is framed in ways designed to avoid further marginalization of the populations the program was intended to support.
For culturally responsive evaluators, authentic understandings are derived from extended time in the field (or, in this case, extended time with the program and its participants) while being situated in, and tending closely to, how the program functions within a unique context informed by its diverse stakeholder groups. It is through these extended engagements that evaluators are uniquely positioned to pick up on the nuanced ways that culturally based groups, and particularly those from marginalized communities, contend with issues of race, class, gender, language, and other factors that hold material consequences for their lives, and thus shape their engagements with the program in question. Importantly, authentic understandings—ones constructed through a process in which you “have asked the right questions, gathered correct information, drawn valid conclusions, and provided evaluation results that are both accurate and useful” (Frierson et al., 2010, p. 93)—can enhance stakeholders’ confidence that the evaluation team has genuinely understood their experiences, which can then increase the likelihood that they will utilize the evaluation’s findings and recommendations.
CRE as CQI
In this section, we draw upon existing articles and book chapters on CRE in practice to highlight the ways in which that practice can be understood as a form of critical qualitative inquiry. We selected four studies that demonstrate the strong focus on qualitative ways of knowing and methodological tools present in many culturally responsive evaluations. White and Hermes (2005) describe the planning for the evaluation of the Hopi Teachers for Hopi Schools program, which was a long-term alternative certification program. The program was designed to prepare 20 new Hopi teachers skilled in culturally responsive teaching to work in elementary schools on the Hopi reservation. King, Nielsen, and Colby (2004) conducted a culturally responsive evaluation study in a large, midwestern school district that employed multicultural steering committees to transform four schools (two elementary schools, a middle school, and a high school). Zulli and Frierson (2004) detail a culturally responsive evaluation of a federally funded Upward Bound program for high school students, examining the intersection of a culturally responsive program design and a CRE. The work of White and Senese (2015) deals with the Alch’I’ni Ba program, which used Navajo cultural practices to encourage college completion among Navajo youth. White and Senese offer a response to an evaluator who demonstrated their lack of cultural responsiveness by publishing an article on the program without seeking the approval of the Alch’I’ni Ba staff. While few, if any, evaluation studies (published or unpublished) manifest each of the nine CRE steps, the examples shared here reflect at least five of the nine steps wherein critical qualitative ways of knowing appear most relevant.
CRE as CQI: Engaging Stakeholders
Evaluators have found myriad ways to engage stakeholders within culturally responsive evaluations. When describing their evaluation of an alternative teacher certification program for Hopi teachers, White and Hermes (2005) focus on the questions around stakeholder engagement arising from their work, noting that they were “acutely aware” (p. 119) of the multiple stakeholders of their evaluation. Some stakeholders, such as the funder (in this case, the Office of Indian Education within the US Department of Education), have relatively straightforward needs. Other stakeholders, however, have complex and varied needs, particularly considering the evaluators’ goal of addressing cultural responsiveness. The authors share a long list of the stakeholders whose needs they must address, including tribal education departments and endowment funds, higher education entities involved in the alternative certification program, the future students of program graduates, education researchers, and advisory board members. They noted that they viewed the Hopi community, elders, and institutions as the most important stakeholders but also the stakeholders whose needs were the most difficult and complex to address. They outlined some of the central questions they encountered when beginning the process of engaging stakeholders in a culturally responsive manner, including how to convene individuals for dialogue about the program activities and related evaluation activities, how to identify a Hopi cultural event where they might conduct focus groups, and how and where to effectively communicate their findings to the Hopi community.
One of the most exhaustive examples of engaging stakeholders in both the planning and execution of a CRE is provided by King et al. (2004). In their article detailing the dilemmas they encountered in conducting a CRE in a large midwestern school district, across four schools each implementing multicultural programming, they describe engaging stakeholders across sites by developing a multicultural steering committee (MSC) to run and oversee the evaluation, paying close attention to building relationships, and sponsoring meetings for the broader community to gather feedback on the evaluation. The MSC, which included culturally diverse district staff and liaisons from each school, ran the study with support and guidance from the evaluation team. The MSC met monthly and was involved in every aspect of the evaluation, from framing the evaluation questions to designing the evaluation process to checking the instruments, analyzing the data, and ultimately developing recommendations. Site-based liaisons from the MSC coordinated communication and data collection in their respective schools and kept the larger committee apprised of their progress each month. The evaluators were able to engage stakeholders deeply in a way that attended to culture by paying “specific attention to relationship-building in an interdependent social setting” (p. 70). To build these relationships and to ensure that all participants had a voice and were treated respectfully, the MSC developed meeting guidelines. One of the most important parts of engaging stakeholders in a CRE is involving members of the broader community. Therefore, the MSC reached out to the larger community through evening meetings to make certain that the program’s activities and desired outcomes were meaningful to the broader community.
CRE as CQI: Selecting and Adapting Instrumentation
In their culturally responsive evaluation of an Upward Bound program for high school students, Zulli and Frierson (2004) described how cultural responsiveness informed their selection and adaptation of instruments. They addressed the process by which they developed interview and focus group questions for Upward Bound participants that were focused on certain cultural variables that might typically be left out of a traditional evaluation. They noted that in addition to exploring program staff’s perceptions about the program itself and their role within the program, they also asked staff to share their life experiences and their understandings of the mission of the program in which they worked. Furthermore, they delved into staff members’ feelings and attitudes toward the students served by the program. This style of interviewing differs from those associated with more traditional evaluation approaches, which focus primarily (or solely) on asking about stakeholders’ experiences with specific program elements. In particular, the evaluators noted that these kinds of interviews allowed them to assess the coherence between the staff members’ experiences and those of the program’s participants.
In their evaluation, King et al. (2004) worked with the MSC to determine the questions that the study would answer and subsequently determined that multiple data collection methods should be adopted. Here, their work highlights some of the potential challenges of engaging stakeholders in all aspects of the evaluation, given that some committee members insisted on including classroom observations as a method of data collection to target particular teachers who “were not practicing multiculturally competent education” (p. 74). Ultimately, despite concerns, the evaluation team conducted pre- and post-observations with follow-up interviews. Of the resultant instrument, they stated, “our formal research-based observation protocol called for documenting examples of explicit multicultural instructional practices, academic expectations, and social relationships” (p. 74).
CRE as CQI: Analyzing the Data
Once data have been collected using culturally validated instruments by a team sensitive to culture, those conducting a CRE must wrestle with how to make meaning of what they have collected and how to transform that data into actionable and meaningful recommendations. For example, White and Hermes (2005) described some anticipated tensions in the analysis and sharing of data from the evaluation of the Hopi teachers program in ways that were sensitive to culture, as well as to structural challenges and stereotypes encountered by program participants. They struggled with how to respectfully share some of the very real obstacles that students encountered in successfully completing the teacher preparation program, such as alcoholism or spousal abuse, without reproducing negative stereotypes about native peoples. They also noted that, in evaluating a small program, such as the one they had undertaken, sharing such intimate details may risk the participants’ anonymity.
In their examination of how CRE informs data analysis, Zulli and Frierson (2004) shared that they drew explicitly upon similarities and differences along cultural variables such as race to analyze relationships between program staff and participants. They asserted that the role that certain cultural variables play in the lives of program participants is often given insufficient analytic attention in evaluations; thus, by ignoring cultural components such as sociodemographic variables, evaluators miss important aspects of what may or may not be working in a program and why. Once analyzed, the Upward Bound evaluation team continued to demonstrate principles of CRE by seeking feedback from program stakeholders on the themes that the evaluators had constructed from the data.
CRE as CQI: Disseminating and Utilizing the Results
King et al. (2004) describe some basic principles of CRE, highlighted in their work, related to the dissemination and use of results. One of these principles reflects the belief that the aim of evaluations should not be to punish individuals but rather that “the evaluation process should encourage and support people to use its findings to create meaningful change” (p. 78). In their piece on CRE, White and Senese (2015) respond to another researcher/evaluator who did not demonstrate cultural responsiveness in the dissemination of evaluation results for a program designed to help Navajo youth complete college by incorporating aspects of Navajo culture. Instead of involving program staff and other stakeholders in disseminating and using evaluation results, as should be standard for CRE, that researcher/evaluator had published an article about the evaluation without first clearing it with program staff. White and Senese argued that a significant amount of the evaluator’s critique stemmed from flawed processes and procedures that could have been addressed had the researcher/evaluator followed CRE methodological precepts (and good qualitative methodology in general) such as member-checking and allowing stakeholders/participants to provide feedback on findings before finalizing and sharing reports publicly.
New Theoretical and Methodological Possibilities for Culturally Responsive Evaluation
Although CRE as an approach has been used by evaluators for nearly 20 years and has become increasingly prominent in respected corners of the evaluation field, some evaluation theorists and practitioners have begun calling for a more explicit examination of the ways in which race and racism might be taken up as a part of the evaluation process. Using race as a primary lens is not intended to dismiss other forms of oppression and discrimination but rather is a way to draw attention to common racialized assumptions and forms of exclusion that permeate every institution and field of social science (Sensoy & DiAngelo, 2017), including the field of evaluation. Currently, the catch-all term “culture” in CRE has been used frequently as a proxy for how race operates (particularly in educational settings) without dealing explicitly with the historical legacies and current realities surrounding race in the United States and its resulting implications for both the approaches and methodologies used by evaluators. House (2017) argued that evaluators must become aware of racial framing to discover and manage racial bias in the same way that evaluators deal with other threats to validity. Awareness of racial framing, then, would be a first step in reducing racial bias in evaluation thinking and practice. By engaging with race and racial bias directly rather than using culture as a proxy for race, evaluators might better examine the programs and policies they evaluate to understand how they may inequitably impact and marginalize racial minorities.
Others have considered not only how the field of evaluation might frame the problems addressed by educational initiatives and programs through a racial lens but also how evaluation methodologies themselves might explicitly draw upon critical race theory (CRT). Initially, when Parker (2004) and Noblit and Jay (2010) explored the potential for incorporating CRT into the field of evaluation, they were skeptical that CRT would become a common evaluation strategy given the risks associated with CRT’s commitment to “speak back to power” and “counter the majoritarian White story.” Recently, however, in response to the lack of direct engagement with race, the American Evaluation Association (AEA) has increased its focus on race. In 2018, AEA published an issue of the American Journal of Evaluation (AJE) containing an entire section focused on the role of race and racism in evaluation. Considering AEA’s recent call for more intentionality in addressing issues of race and power in evaluation, its offer of a series of dialogues on race and class, its advocacy for addressing structural racism in the field, and the upcoming special issue of AJE, the possibilities for a CRT-informed evaluation practice may be opening further. Moving forward, CRT, as both a theoretical and methodological framework, may become more widely used in evaluation practice, as it has been in educational research since the mid-2000s.
While qualitative methodologies have been the primary space in which racially informed educational research resides, quantitative researchers also have begun to engage in efforts to account for the structural impact of race on educational systems. Some of these researchers and theorists have put forth an approach they have deemed “QuantCrit.” The introduction of QuantCrit may have particular significance for professional evaluators, as they frequently rely on mixed methods approaches in their efforts to provide feedback to program stakeholders. Garcia, Lopez, and Velez (2018) point to an emerging group of scholars who wish to incorporate CRT assumptions into their quantitatively oriented work. In doing so, they acknowledge that numbers are not neutral and that many of the categorizations used in much of quantitative research are based on socially constructed and racialized categories.
By beginning to view evaluation through a racial lens, evaluators would not only be reframing the analytical lenses through which they come to understand programs and their participants, but they would also be reconceptualizing what constitutes a successful program by consistently including racial equity as an outcome. Such an approach also would require evaluators to interrogate their own racial positioning and lenses throughout the evaluation process. This would mean that evaluators who incorporate CRT into their work must provide space for counter-storytelling and counternarratives (Solórzano & Yosso, 2002) on the part of stakeholders and must intentionally include stakeholders’ voices more thoroughly in evaluation planning, instrument development, data collection, data analysis, and reporting processes.
In this article, we have argued for an understanding of culturally responsive evaluation as a form of critical qualitative inquiry. Intentionally framing culturally responsive evaluation in this way positions the approach as one that enables scholars- and evaluators-in-training, whose critical commitments motivate their work, to see the power and possibilities of using their developing research skills to advocate for and effect tangible changes within their own and other communities. One program that has explicitly sought to invite students to join the ranks of culturally responsive evaluators is the American Evaluation Association’s Graduate Education Diversity Internship (GEDI) program. Since its inception in 2004, the GEDI program has sought to make important contributions to building a more inclusive and diverse evaluation field by recruiting racially and ethnically diverse graduate students into the field of evaluation. GEDI students are trained in cultural competence and responsiveness and are placed in evaluation internships where they can put what they are learning into practice (Symonette, Mertens, & Hopson, 2014). The GEDI program has made significant inroads into diversifying the field of evaluation and in foregrounding culturally responsive evaluation methods. Moreover, as the program’s champions argue, increasing the number of evaluators and evaluation scholars from non-majority backgrounds can help move the field forward by fueling evaluation thinking about under-represented communities and culturally responsive evaluation and, most importantly, by deepening the profession’s competence in working within culturally, racially, and ethnically diverse settings. Thus, we hope this article furthers efforts to attain those ends.2
American Evaluation Association. (2011). American Evaluation Association statement on cultural competence in evaluation. Fairhaven, MA: Author. Retrieved from https://www.eval.org/ccstatement
Cannella, G. S., Perez, M. S., & Pasque, P. A. (2015). Introduction: Engaging critical qualitative science. In G. S. Cannella, M. S. Perez, & P. A. Pasque (Eds.), Critical qualitative inquiry: Foundations and futures (pp. 7–28). Walnut Creek, CA: Left Coast Press.
Denzin, N. K. (2015). What is critical qualitative inquiry? In G. S. Cannella, M. S. Perez, & P. A. Pasque (Eds.), Critical qualitative inquiry: Foundations and futures (pp. 31–50). Walnut Creek, CA: Left Coast Press.
Frazier-Anderson, P., & Bertrand Jones, T. (2015). An analysis of Love My Children: Rose Butler Browne’s contributions to culturally responsive evaluation. In S. Hood, R. Hopson, & H. Frierson (Eds.), Continuing the journey to reposition culture and cultural context in evaluation theory and practice (pp. 73–87). Charlotte, NC: Information Age.
Frierson, H. T., Hood, S., & Hughes, G. B. (2002). Strategies that address culturally-responsive evaluation. In J. Frechtling (Ed.), The 2002 user-friendly handbook for project evaluation (pp. 63–73). Arlington, VA: National Science Foundation.
Frierson, H. T., Hood, S., Hughes, G. B., & Thomas, V. G. (2010). A guide to conducting culturally-responsive evaluations. In J. Frechtling (Ed.), The 2010 user-friendly handbook for project evaluation (pp. 75–96). Arlington, VA: National Science Foundation.
Garcia, N. M., Lopez, N., & Velez, V. N. (2018). QuantCrit: Rectifying quantitative methods through critical race theory. Race, Ethnicity and Education, 21(2), 149–157.
Glesne, C. (2006). Becoming qualitative researchers: An introduction (3rd ed.). Boston: Pearson.
Hood, S. (2005). Culturally responsive evaluation. In S. Mathison (Ed.), Encyclopedia of evaluation (pp. 96–102). Thousand Oaks, CA: SAGE.
Hood, S., Hopkins, R., & Kirkhart, K. (2015). Culturally responsive evaluation. In K. E. Newcomer, H. P. Hatry, & J. S. Wholey (Eds.), Handbook of practical program evaluation (4th ed., pp. 281–317). Hoboken, NJ: Jossey-Bass.
Hopson, R. K. (2009). Reclaiming knowledge at the margins: Culturally responsive evaluation in the current evaluation moment. In K. Ryan & J. B. Cousins (Eds.), The SAGE international handbook of educational evaluation (pp. 429–446). Thousand Oaks, CA: SAGE.
Hopson, R., & Hood, S. (2005). An untold story in evaluation roots: Reid E. Jackson and his contributions toward culturally responsive evaluation at three quarters of a century. In S. Hood, R. Hopson, & H. Frierson (Eds.), The role of culture and cultural context: A mandate for inclusion, the discovery of truth, and understanding in evaluative theory and practice (pp. 85–102). Greenwich, CT: Information Age.
House, E. R. (2017). Evaluation and the framing of race. American Journal of Evaluation, 38(2), 167–189.
King, J. A., Nielsen, J. E., & Colby, J. (2004). Lessons for culturally competent evaluation from the study of a multicultural initiative. New Directions for Evaluation, 102, 67–80.
Noblit, G. W., & Jay, M. (2010). Against the majoritarian story of school reform: The Comer Schools evaluation as a critical race counternarrative [Special issue: M. Freeman (Ed.), Critical social theory and evaluation practice]. New Directions for Evaluation, 127, 71–82.
Parker, L. (2004). Commentary: Can critical theories of or on race be used in evaluation research in education? New Directions for Evaluation, 101, 85–93.
Patton, M. Q. (2002). Qualitative evaluation and research methods. Thousand Oaks, CA: SAGE.
Prasad, P. (2005). Crafting qualitative research: Working in the postpositivist traditions. New York: M. E. Sharpe.
Scriven, M. (1991). Evaluation thesaurus (4th ed.). Thousand Oaks, CA: SAGE.
Sensoy, O., & DiAngelo, R. (2017). “We are all for diversity, but . . .” How faculty hiring committees reproduce whiteness and practice suggestions for how they can change. Harvard Educational Review, 87(4), 557–580.
Solórzano, D. G., & Yosso, T. J. (2002). Critical race methodology: Counter-storytelling as an analytical framework for education research. Qualitative Inquiry, 8(1), 23–44.
Stake, R. E. (1976). A theoretical statement of responsive evaluation. Studies in Educational Evaluation, 2(1), 19–22.
Symonette, H., Mertens, D. M., & Hopson, R. (2014). The development of a diversity initiative: Framework for the Graduate Education Diversity Internship (GEDI) program [Special issue: P. M. Collins & R. Hopson (Eds.), Building a new generation of culturally responsive evaluators through AEA’s Graduate Education Diversity Internship program]. New Directions for Evaluation, 143, 9–22.
White, C. J., & Hermes, M. (2005). Learning to play scholarly jazz. In S. Hood, R. Hopson, & H. Frierson (Eds.), The role of culture and cultural context: A mandate for inclusion, the discovery of truth, and understanding in evaluative theory and practice (pp. 105–128). Greenwich, CT: Information Age.
White, C. J., & Senese, G. (2015). Evaluating Alch’I’ni Ba/for the Children: The troubled cultural work of an indigenous teacher education project. In S. Hood, R. Hopson, & H. Frierson (Eds.), Continuing the journey to reposition culture and cultural context in evaluation theory and practice (pp. 251–272). Charlotte, NC: Information Age.
Zulli, R. A., & Frierson, H. T. (2004). A focus on cultural variables in evaluating an Upward Bound program. New Directions for Evaluation, 102, 81–94.
(1.) While this article centralizes race as a primary lens through which to illuminate culturally responsive evaluation’s purposes and commitments, the focus on race is not intended to minimize other categories through which people are marginalized (Sensoy & DiAngelo, 2017). Rather, it serves as an accessible and familiar heuristic device commonly adopted by both scholars of CRE and critical qualitative research, and thus lends itself well to our framing of CRE as a form of critical qualitative inquiry.
(2.) We wish to acknowledge Rodney K. Hopson and an anonymous reviewer for their helpful critique of earlier drafts of this article. Additionally, thanks to Rodney Hopson, Stafford Hood, and Karen Kirkhart for granting permission to reproduce the figure and table in this article.