
Article

In the past decade, ambitious plans for digital inclusion have been developed in Latin America. These plans included a strategy of massive and universal distribution of equipment to students at a 1:1 ratio at different levels of the education system (i.e., a computer for each student). The programs were accompanied by both policy analyses and independent studies hoping to account for the programs' successes and achievements. These studies can facilitate analysis of the orientations and dimensions of these investigations, considering them as practices of knowledge production that imply the construction of a perspective, as well as indicators and problems that make certain aspects of policy visible and mask others. They constitute forms of problematizing the social, that is to say, the construction of themes or topics that point to a problem requiring attention, which are often taken as a reflection of reality rather than as the product of a predetermined evaluative perspective. It is also significant that among these evaluative studies, a number were conducted through qualitative perspectives, which facilitate more complex and plural approaches to the processes of technological integration and digital inclusion within the classroom. In these qualitative studies, the construction of categories was part of the research and covered a multiplicity of meanings that the policies took on for the actors involved, thus opening a richer and potentially more democratic perspective on the construction of knowledge about educational policy in the region.

Article

How to judge the quality of a piece of research is a question that permeates academic work. A clear conception of quality is the backbone of the design and conduct of research, as well as the basis for supervision, manuscript review, and examination papers. Qualitative methodologies justify their usefulness through the development of more sophisticated interpretations of specific study objects than can be achieved within other methodologies. However, it is difficult to arrive at a decision concerning the quality of a piece of research because there are many aspects or facets to consider. Academics develop their competence for reaching decisions through experience, coupled with taking part in ways of reasoning about quality in qualitative research. Within this reasoning process there are a number of quality facets that should be considered. Each facet is grounded in various ways in different research traditions, and each has a foundational meaning but must also be assessed in relation to its limitations. From the perspective of the text as a whole, one must consider whether it shows an awareness of the consequences of the research assumptions and other aspects of internal consistency. Ethical considerations are yet another facet. From a narrower perspective, the interpretation of empirical data must capture rich meaning and present it in a structured way so that the interpretation can be clearly discerned. Another facet is how the interpretation contributes to existing knowledge about the issue or phenomenon that was studied. A classic question concerns the relation between an interpretation and its empirical basis, that is, its credibility or validity. Here there are various facets to consider: Is the interpretation well anchored in the data, and how do specific data fit when they are put together? Another facet is whether the interpretation opens up new ways of understanding an issue or process and whether more sophisticated action can result from embracing the interpretation. A proper evaluation of qualitative research should consider all facets; however, the ultimate appraisal rests on the professional wisdom of the judge, developed through experience and participation in deliberations about what constitutes quality in academic texts. Determining quality in qualitative research is not a mechanical process.

Article

Robert E. Slavin and Nancy A. Madden

The development, evaluation, and dissemination of Success for All (SFA), a comprehensive improvement approach for elementary schools, is a story of how developers, coaches, researchers, and practitioners work together to implement the program. There is considerable formal research informing the program and its continual development. However, although rigorous quantitative research methods inform model development, there is also a very strong commitment to learning from teacher practice. SFA seeks a constant interplay between teachers' practice and research. The knowledge that SFA coaches, many of whom are former SFA teachers, bring to the table is also integral to the continual development of the model and its implementation strategies.