

Printed from Oxford Research Encyclopedias, Education. Under the terms of the licence agreement, an individual user may print out a single article for personal use (for details see Privacy Policy and Legal Notice).

date: 21 September 2023

Enhancing Students’ Assessment Feedback Skills Within Higher Education

  • Carol Evans, Griffith University
  • Michael Waring, Griffith University


In higher education (HE), considerable attention is focused on the skill sets students need to meet the requirements of the fourth industrial revolution. The acquisition of high-level assessment feedback skills is fundamental to lifelong learning. HE has invested significantly in developing assessment feedback practices over the last 30 years; however, far less attention has been given to the development of inclusive, agentic, integrated assessment systems that promote student agency and autonomy in assessment feedback, and that take account of individual differences.

“Inside the Black Box,” a seminal work, opened up the potential of assessment as a supportive process, facilitating students in coming to know (understanding the requirements of a task and context, and their own learning) through the development of formative assessment. However, despite being predicated on a self-regulatory approach, the assessment for learning movement has not, overall, changed students’ perceptions on entering HE that feedback is something they receive rather than something they can generate and orchestrate. HE promotes students’ use of self-regulated learning approaches, although these are not sufficiently integrated into curriculum systems. In moving assessment feedback forward, it is important to adopt a theoretically integrated approach that draws on self-regulatory frameworks, agentic engagement concepts, understanding of individual differences, and the situated nature of assessment.

Current emphases in HE focus on how we engage students as active participants in assessment, in coming to know assessment requirements as part of sustainable practices, with students as co-constructors of assessment inputs and outputs. Assessment design should challenge students to maximize their selective and appropriate use of assessment feedback skills for both immediate and longer-term learning gains. Addressing the professional development of lecturers and students in the acquisition and development of essential fourth industrial age assessment feedback competencies is fundamental to enhancing the quality of learning and teaching in HE.


  • Cognition, Emotion, and Learning


Student engagement with, and ownership of, assessment feedback is critical if students are to be the authors of their own assessment careers for life. Higher education (HE) should be supporting students to develop essential assessment feedback skills, if they are to be able to make sense of, and contribute effectively to, increasingly complex contexts brought about by the integration of physical, digital, and biological worlds as part of fourth industrial age requirements (Baker, 2016; McGinnis, 2018).

Here we highlight the importance of an integrated theoretical approach in seeking to better understand and develop students’ regulation of assessment feedback from cognitive, metacognitive, and affective perspectives, leading the field in bringing together the multiple theoretical concepts implicated in assessment feedback skills to enhance understanding of the mechanisms involved, and emphasizing the need for an interdisciplinary approach in moving the field forward.

The importance of students’ self-regulation of assessment and agentic engagement with assessment in impacting student learning outcomes is foregrounded, accepting that there are competing dialogues and tensions in the realization of this ambition. While there is general agreement within HE of the need to fully engage students in the assessment process, to support their understanding of task requirements, quality, and criteria (Sadler, 2010), an implementation gap remains. Reasons for this gap include, but are not limited to, (a) an overemphasis on a technical approach to how lecturers can give better feedback, and how students can become better users of lecturer feedback; (b) assessment feedback being perceived as a rational process whereas in reality, individuals’ interpretation and enactment of assessment requirements are complex (Eva & Regehr, 2013); (c) institutional structural barriers to change, and a tendency to preserve institutionalized assessment cultures (Evans, Kandiko-Howson, & Forsythe, 2018; James, 2014); (d) overscaffolding of students’ learning fueled by students wanting to maintain the external regulation they have become familiar with in schools, abetted by a lack of familiarity with, and confidence in, the teaching of self-regulatory processes from the lecturer perspective (Peeters, De Backer, Kindekens, Triquet, & Lombaerts, 2016); and (e) emphases on HE accountability and student satisfaction that often result in quick measurable fixes which may undermine assessment quality (e.g., faster turnaround times for feedback compromising the quality of assessment, nature of tasks, and moderation of judgments) (Evans et al., 2018).

We are interested in how students shape and are shaped by the assessment context, and how, through such understandings, the assessment environment can be modified to enhance students’ and lecturers’ agency and autonomy within it. We argue the importance of engaging students (and lecturers) in all aspects of the assessment process if they are to effectively calibrate measures of their own performance and utilize resources effectively to realize goals; this requires ownership of assessment and the internalization of assessment requirements. To support the development of this argument, we explore the role of students in the assessment process, the importance of self-regulation, and the higher-order self-regulatory skills involved in developing assessment feedback competence. Using an integrated theoretical lens to capture the complexity of, and interrelationships inherent in, assessment feedback, we discuss relevant theoretical perspectives that drive self-regulatory behaviors and demonstrate how an understanding of these can be captured and developed through the use of an integrated theoretical and pragmatic framework, the EAT (Equity, Agency, and Transparency in Assessment) Framework. We conclude with key implications for policy and practice.

The Role of Students in the Assessment Feedback Process

In considering student assessment feedback skills, we are concerned with how students (and lecturers) navigate the assessment feedback landscape to maximize their understanding of assessment feedback, their engagement with it, and success in it, in relation to immediate and longer-term goals. Assessment feedback has multiple interpretations. Evans’s definition includes self-feedback, and the mechanisms involved in internalizing and making sense of feedback for oneself: “all feedback exchanges generated within assessment design, occurring within and beyond the immediate learning context, being overt or covert (actively and/or passively sought and/or received) and, importantly, drawing from a range of sources” (Evans, 2013, p. 71).

Assessment feedback skills involve internal and external self-regulatory processes. A holistic perspective and an understanding of the integrated nature of assessment are required if learners are to be able to self-assess their performance. Learners need to have a good understanding of what constitutes good assessment feedback, be able to manage their cognitions and emotions, be discerning in their use of feedback, and have the necessary arsenal of strategies to deploy (Carless & Boud, 2018; Evans, 2016). Importantly, what HE needs to cultivate in students and lecturers is knowing which self-regulatory strategies to use, when, how, and in what combinations to maximize individual and team effectiveness (Dinsmore, 2017). Students’ selective and discriminating use of resources (internal and external) has a significant impact on their success in HE (Schneider & Preckel, 2017), and it is these self-regulatory skill sets we should be developing.

What is clear is that in “coming to know” the requirements of assessment, students’ active engagement in the development and design of assessment should enable them to have a much better understanding of what is required to do well (Sadler, 2010, 2013). To fully utilize feedback, students need to be able to understand the meaning of feedback and, more specifically, what it means within the discipline (van Heerden, Clarence, & Bharuthram, 2017), and be able to evaluate accurately the strengths and weaknesses of their own work. Sadler (2013) highlights the importance of “knowing what is worth noticing”; this is a key self-regulatory skill. Sadler’s argument is that feedback focused on telling students how to improve (i.e., assessment being done to students), equated with Boud and Molloy’s (2013) Mark 1 feedback, is of limited use in supporting students in internalizing the requirements of assessment. Instead, assessment efforts need to be directed toward supporting students to judge the quality of work for themselves (Boud and Molloy’s Mark 2 feedback). Greater investment is needed in assessment design that supports students in self-monitoring (in the moment, on a specific task) and self-evaluating (involving more holistic judgments of capacity by utilizing self-feedback and the affordances of the immediate and wider learning environments). Sadler’s (2013) notion of students’ “knowing to” is critical. It emphasizes the importance of metacognitive skills: knowing what you know and don’t know, being able to discern the correct approach in a specific context, knowing what tacit knowledge is needed and how to acquire it, and being able to deploy the right combination of skills to achieve one’s goals.

Drives to support students’ “knowing to” include student–lecturer co-authorship of assessment tasks and co-assessment of products/outcomes, where students are seen as active agents in assessment (Taras, 2015; Yu & Hu, 2017). Such emphases indicate a significant shift from teachers as the drivers of feedback toward placing the learner in the driving seat to maneuver assessment feedback for themselves. Emphasis on the development of self-regulatory skills to support learning beyond the immediate requirements of a course, as part of lifelong learning, is embodied in definitions of sustainable assessment (Boud & Molloy, 2013). In other words, emphasis is placed on how assessment for learning can contribute to a student’s future learning through the development of key skills, including self-evaluative judgment as part of self-assessment (Eva & Regehr, 2011), and through the use of carefully orchestrated assessment design (Boud, 2010; Boud & Soler, 2016; Carless, 2017).

How students are inducted into all aspects of the assessment process as active contributors is fundamental in influencing their assessment feedback skills. Emphasis on learning-oriented assessment (Carless, 2015) has elevated the role of students in the feedback process, and highlighted the importance of dialogue in coming to shared understandings of assessment requirements aligned with a socioconstructivist perspective (Nicol, 2010; Wegerif, 2015). There is, however, considerably less research on student roles in leading the generation of dialogue, and in creating assessments for themselves as part of agentic engagement with assessment.

In supporting student understanding of feedback within HE, much effort has been placed on training students in how to interpret feedback, with a generally shared perspective that for feedback to be useful, one needs to be able to do something with it (Boud, 2000; Nash & Winstone, 2017), even if that means ultimately rejecting it (Evans, 2016). The impact of such approaches is variable precisely for the reasons Sadler (2013) has articulated. Training students in feedback does not necessarily give them a better understanding of what the requirements are, or of how to assess their own performance. Such approaches may also reinforce a transmission perspective on the role of feedback, and the traditional notion of “vessels to be filled,” rather than students as active generators and creators of knowledge through their deep engagement with assessment processes. In pursuing this line of argument, providing students with a rubric to support them in their work does not require a student to have a full understanding of the requirements of a task, whereas self-generation of a rubric does. In the latter case, students are required to get “underneath the assessment” to work out what is required, to break it down, and to rebuild it in a shape that makes sense to them. In such a context, the development of the rubric becomes a critical element of assessment itself. The value of rubrics is also dependent on students and lecturers having similar perceptions of their purposes (Christie et al., 2015).

In engaging students with assessment, ipsative assessment has been found to be valuable in supporting learning, whereby emphasis is placed on the progress a student has made rather than focusing solely on attainment (Hughes, 2011; Hughes, Smith, & Creese, 2015; Scott et al., 2014). The efficacy of ipsative assessment is rooted in the impact it has on student motivation, self-monitoring, and evaluation capacity. However, ipsative assessment is dependent on assessment design being able to facilitate ongoing evaluation opportunities through the development of closely aligned and sequenced tasks, and training in learning how to judge the quality of one’s own work for oneself.

In summary, HE systems and processes need to support lecturers in taking a “leap” in promoting self-regulatory skill sets if they are to enable students to take this leap, and students also need to be convinced of the need for such a leap. Assessment needs to require student engagement and the utilization of high-level regulatory skills, and move away from a receipt approach focused on lecturer-orientated feedback models which have limited capacity to support student self-regulation (Brown, Peterson, & Yao, 2016; Orsmond & Merry, 2013; Sadler, 2013). To facilitate student understanding, it is imperative that all those involved in the delivery of a program of study have congruent understandings of quality and criteria, defined here as quality assurance literacy (Evans, 2016).

Self-Regulatory Assessment Feedback Skills

Self-regulation involves the “processes whereby learners personally activate and sustain cognitions, affects, and behaviors that are systematically oriented toward the attainment of personal goals” (Zimmerman & Schunk, 2011, p. 1). Self-regulation can also be seen as a set of phases (e.g., setting and/or interpreting the requirements of an assessment task, deciding on goals, implementing a plan, monitoring the effectiveness of assessment approaches in relation to goals and adapting where necessary, and reflecting on aspects of the task, self, and context). In reality, many of these phases are interwoven (Seufert, 2018). Holistically, self-regulation involves learner acquisition of cognitive capacity (processing skills), affective capacity (management of emotions), and metacognitive capacity (understanding of one’s learning processes, in relation to the requirements of the context) (Vermunt & Verloop, 1999). Agentic engagement is very much part of self-regulation and involves being a savvy feedback seeker (Evans, 2012): capable of engaging in deep approaches to learning; possessing strong self-management skills and perspective; noticing contextual sensitivities and maximizing opportunities; and demonstrating resilience, personal responsibility, and adaptability. These dispositions acknowledge the importance of students’ abilities to use, apply, adapt, and create new knowledge, in line with research by Barnett (2011) on students’ management of complexity and by McCune and Entwistle (2011) on deep learning dispositions, such as a “willingness to offer.” The notion of agentic engagement as part of self-regulation, emphasizing the bidirectional nature of the relationship between a student and his or her learning environment, is very important.
The potential of learners to modify their learning environment to make it more conducive to their learning, and to enlist support as necessary as part of shared and co-regulation, is potent in supporting students to create and develop effective learning networks (Montenegro, 2017; Reeve, 2013; Scott et al., 2014). In this context, co-regulation involves support from and interaction with others to enhance one’s own self-regulatory capacity, whereas shared regulation refers to learners working together to regulate their collective learning (Hadwin, Järvelä, & Miller, 2018).

While there is extensive research on the relationships between individual difference variables, the environment, and student performance (Schneider & Preckel, 2017), such relationships are complex. Given individuals’ interaction with their environment, and the ongoing ontological changes possible as part of dynamic adjustment, whereby individuals both affect and are affected by the environment, many different constellations of variables may be implicated in how behavior is manifested and performance affected. This means that supporting students’ self-regulatory development and engagement in assessment feedback has the potential to significantly impact student achievement where tasks require such skills (Dorrenbacher & Perels, 2016; Panadero, 2017).

High-Level Self-Regulatory Skills

Dinsmore (2017) emphasizes the importance of focusing on the development of higher-level metacognitive actions that enable one to evaluate the quality and appropriateness of assessment feedback strategies being used. Table 1 highlights the metacognitive skills students need to manage assessment feedback effectively; they are linked to self-regulation stages from accurate task identification and plan activation to evaluation of progress and reflection on performance.

Table 1. Self-Regulatory Skills Implicated in Assessment Feedback

Self-regulatory behaviors in managing assessment and feedback:

Metacognitive strategy use: knowing how, when, and where to deploy a strategy
  • Quality: how well a strategy is executed
  • Conditional use: how appropriately a strategy is used

Task analysis: accurate assessment of the task and of what you do and do not know
  • Meta-memory: memory of what you know
  • Accuracy in recognizing or knowing a task and in predicting one’s knowledge

Planning regulation of a task: organizational and motivational skills in setting goals, understanding the necessary steps in the assessment process, and developing an action plan to achieve these goals
  • Goal setting: grade goal (the minimum level one wants to achieve); learning-oriented goals vs. performance goals; a coherent goal hierarchy
  • Ability to set specific, manageable, and challenging mastery goals

Contextual regulation: ability to influence the environment to support learning
  • Selective use: knowing when, why, and from whom to seek support (cue seeking; help seeking)
  • Quality, and selective use, of networks of support
  • Flexibility: boundary crossing; adaptability; ability to transfer and adapt ideas across contexts

Metacognitive monitoring: monitoring of cognitive and volitional (motivational and affective) states to support effort regulation and attention focusing in pursuit of goals; ability to rely on one’s own internal processes to make progress against goals and to adapt one’s plan as necessary; self-monitoring in the moment, as well as monitoring the overall plan of activity
  • Adaptive control: flexible use of self-regulation strategies
  • Absolute accuracy: in relation to expected and actual performance
  • Relative accuracy: being able to discriminate between the differential learning of some materials versus others
  • Availability and accurate use of predictive cues to measure progress
  • Best use of time: choosing deliberately when and where to invest time and mental resources

Self-reflection: ability to critically reflect on one’s own performance and to be reflexive; to be able to see the situation from different perspectives (an “outward-in glance”); objective assessment of the situation
  • Self-evaluative capacity: ability to accurately estimate one’s performance, bringing together information from a range of sources
  • Accuracy in attributing the causes of success and/or failure

Specific assessment feedback abilities implicated in student success include students’ abilities to interrogate and internalize feedback from external sources, and their capacity to rely on their own internal processes to measure progress against goals (Brown et al., 2016). Therefore, the ability to provide an accurate holistic judgment of one’s own ability by effectively mentally aggregating performance over past events, along with rigorous continuous self-monitoring of progress against goals, is critical (Eva & Regehr, 2011). Volitional control, defined as the ability to monitor, evaluate, and modify one’s own emotional experiences (Schutz, Hong, Cross, & Osbon, 2006), impacts cognitive and metacognitive management of assessment feedback, although assessment design too frequently pays lip service to the relational dimension of assessment practices, despite its strong association with student learning outcomes (Schneider & Preckel, 2017).

The development of such high-level metacognitive skills is mediated by individual student characteristics and those of the assessment context. Figure 1 summarizes key individual student dispositions implicated in assessment outcomes. The figure presents a triadic, symbiotic, and dynamic relationship between the assessment context, the individual student, and assessment feedback skills, aligned with Bandura’s social cognitive theory (Bandura, 2001). The key interactions here are that the environment can mediate the impact of individual characteristics, and that the individual can change the assessment context to maximize his or her potential, provided the assessment design promotes agency and autonomy. In reflecting on how these individual and contextual variables come together to impact assessment feedback behaviors, we need to be mindful about generalizing assessment feedback behaviors: they are not enacted in a vacuum, and assessment feedback actions contain both state and trait characteristics that reflect past experiences and future expectations.

In trying to cultivate assessment feedback skills, self-regulatory processes can be viewed through cognitivist and information processing, socioconstructivist, and sociocultural/critical frameworks to name but a few. The following subsections articulate some of the key theoretical perspectives.

Figure 1. Triadic symbiotic relationship between individual and contextual factors and assessment feedback skills.

Cognitivist and Information Processing Perspectives

At the cognitivist end of the spectrum, cognitive processing and neuroscientific perspectives provide essential information in supporting assessment feedback processes. Cognitive load theory is important in explaining student access to and use of feedback. Cognitive load refers to the amount of resources an individual can devote to a task given the limits of working memory capacity (WMC), which allows only a limited amount of information to be processed at one time (Sweller, Ayres, & Kalyuga, 2011). Information exceeding working memory capacity will not be processed and encoded into long-term memory (LTM), where it is needed; LTM is not affected by the capacity constraints facing WMC and represents a relatively permanent bank of knowledge and information. Given the limits of available WMC, an individual’s capacity to manage the volume and nature of assessment feedback is especially compromised in new learning situations. Points of transition, such as entry to HE, often present cognitively complex situations in which emotional load is high and students’ prior knowledge of core content is limited. Multiple channels of information and large amounts of potentially distracting material impact students’ ability to notice and integrate assessment feedback (Waring & Evans, 2015).

Along related lines, Friedlander et al. (2011) illuminated the impact of competing sources of information on students’ use of time, arguing that what learners choose to attend to is based on their assessment of what is most important to them at the time, given the need to continuously self-triage limited neural resources. Therefore, too much input, and poorly placed feedback that students cannot see as directly relevant to future learning, are unlikely to garner their attention. In maximizing performance, the ability of learners to choose deliberately when and where to invest time and mental resources is key (Schneider & Preckel, 2017).

Cognitive styles are strongly implicated in the moderation of assessment feedback behaviors (Evans & Waring, 2011a, 2011b); however, they continue to receive limited attention in the assessment feedback literature. Cognitive style represents individual differences in cognition that help individuals adapt to the learning environment and influence how learners process and make sense of information. Cognitive style is shaped by an individual’s interaction with the environment and describes what people do when they are trying to learn. Learners vary in their ability to choose the most appropriate styles and strategies, and in their degree of flexibility. Styles directly impact how individuals navigate and filter information and how they interact with their environments. Considering interaction with other variables, gender and culture, for example, further demonstrates how cognitive styles impact assessment feedback behaviors and student learning outcomes (Evans & Waring, 2011a, 2011b).

How learners make sense of information is impacted by schemata, established ways of thinking and organizing ideas (Bartlett, 1932); this has implications for how learners interact with new information. Information that does not fit with preexisting schemata requires increased attention from the learner, along with the openness and confidence to adapt or reject preexisting ways of thinking and doing. To support such change, interventions need to challenge learners and present them with new perspectives on the content and/or themselves (Boudrias, Bernaud, & Plunier, 2014). Sense-making theory (Weick, 1995), that is, how individuals make sense of information, negotiate meanings, and choose the actions they take to make sense of a situation, is relevant to understanding how individuals navigate assessment environments. Providing students with time to work through the cognitive conflict/dissonance that may result, for example, from transition into HE is important in enabling students to experience difference and start seeking solutions for themselves before scaffolding is provided. Blasco (2015) argues that it is at this point, when students are seeking explanations, that they may be more receptive to approaches that make assessment feedback practices more explicit. Induction support delivered by students and lecturers as an ongoing feature of assessment feedback interventions, mapped with students to identify important crunch points and rate-limiting steps (identifying what is limiting progress), is important.

Students’ beliefs and values have a significant impact on their assessment feedback behaviors and the learning patterns they adopt (Vermunt & Donche, 2017). Students’ conceptions of their role in the feedback process, whether they see feedback as valuable, and what they want from feedback all impact levels of engagement. Furthermore, engagement is also impacted by epistemological beliefs (belief in the certainty of knowledge vs. the view that knowledge can be developed), conceptions of learning (seeing learning as transactional, as knowledge to be acquired and reproduced, as opposed to understanding for oneself) (Marton, Dall’Alba, & Beaty, 1993), and conceptions of teaching (whether the roles of teacher and student in assessment are viewed as transactional and one-directional, with the teacher providing knowledge to the student as recipient, or as transformative, with assessment as a joint endeavor to which students actively contribute and in which the lecturer acts as facilitator). Therefore, to enhance students’ assessment feedback skills we need to address their conceptions of them (Brown, Bice, Shaw, & Shaw, 2015). Constructivist learning approaches embodied within assessment for, and assessment as, learning initiatives highlight the importance of making the learning process explicit and open to the learner, with increasing adoption of the lecturer-as-facilitator role. To enhance assessment feedback skills, the facilitator role needs to emphasize internal rather than external regulation to support student ownership of assessment.

Students’ motivations impact assessment and feedback skills and are linked to students’ perceptions of task value, goals, aspirations, agency, and autonomy, and to beliefs about one’s ability and expectations of positive outcomes (Pintrich, 2004). Motivation is the driving force behind many self-regulatory approaches. If students do not understand assessment requirements and cannot see the relevance of tasks, they may not engage with assessment. Additionally, if perceived workload is too high, and/or students do not understand assessment requirements, perceptions of their capability (academic self-efficacy) may suffer, with negative impacts on student engagement. In such situations, self-regulatory processes may be used to protect learners’ sense of self rather than to pursue achievement.

Goal theories play a central role in contributing to understanding of learners’ assessment feedback behaviors, and influence learners’ adoption of surface, strategic, and/or deep approaches to learning. For example, achievement goal theory differentiates between students seeking mastery of material (mastery goals) and/or those seeking to do better than others (performance goals) (Seifert, 2004). The standards we set for ourselves have a significant impact on assessment feedback behaviors (e.g., having a minimum grade goal, identifying the lowest grade a learner would find acceptable). The relative importance of immediate or future goals is implicated in assessment feedback interactions, and from a metacognitive perspective, the concept of future time perspective (FTP) impacts self-regulatory processes. FTP involves learners assessing the relative value of future goals compared to current ones, and the ability to anticipate the potential longer-term impact of current actions (Shell & Husman, 2008). The quality of goal setting matters. Friedlander et al. (2011, p. 417) highlight the importance of proximal goals (those within one’s grasp) and higher-level, longer-term distal goals in facilitating student engagement, strongly aligned with the Japanese notion of “ikigai,” taking pleasure in achieving immediate goals. Friedlander et al. argue that students who derive satisfaction from the more immediate goals of understanding have a greater chance of using the brain’s capacity to provide reward signals on an ongoing basis to facilitate their learning. Implications for assessment design include the importance of regular opportunities for students to test their understanding, as opposed to overreliance on sparsely distributed, high-stakes opportunities for reward.

The importance of goal setting in supporting successful assessment feedback behaviors is prevalent in the literature, with particular attention focused on the development of mastery goals, although performance goals can be highly successful in securing desired learning outcomes (Hawe & Dixon, 2017). Mastery goals work where the task requires them, and have greater purchase for longer-term learning goals. Forsythe and Jellicoe (2018) identified the importance of mastery approaches in fostering longer-term student success as part of sustainable assessment practices. Building on the work of Boudrias et al. (2014), Forsythe and Jellicoe (2018) found that assessment feedback strategies focused around goals, mindsets, and motivational intentions impacted students’ assessment feedback behaviors, enabling them to make changes to their approaches, while process-focused feedback strategies did not, contrary to the findings of Winstone, Nash, Parker, and Rowntree (2017). This highlights the complexity inherent in students’ assessment feedback decisions, in which context also plays an important role in shaping outcomes. Process-focused feedback characteristics include valence (the extent to which the message is positive or negative), face validity (the extent to which the task, and the assessment of it, accurately reflects a person’s efforts and view of their ability), and source credibility (trust in the feedback-giver’s ability to give accurate feedback). Process-focused feedback characteristics impact motivation, especially source credibility and the perceived validity of the assessment context (Forsythe & Johnson, 2017). Winstone et al. (2017) confirm the importance of students’ self-appraisals, assessment literacy, goal setting, self-regulation, engagement, and motivation for their engagement with feedback, from the position of student as recipient.
However, feedback orientation, as rated by others and not the self, has not been found to have a direct relationship with work-related outcomes (Braddy, Sturm, Atwater, Smither, & Fleenor, 2013).

The impact of goals on students’ assessment feedback skills is clearly established. Goals are also impacted by students’ need for cognition, tolerance of uncertainty, and capacity to manage complexity. The theory of desirable difficulties (Bjork, 1994) suggests that, for learning to be optimized, learners must be placed in situations that elicit errors and make learning seem harder and less successful; such conditions heighten students’ attention and require them to spend more time searching for information and resolving issues for themselves when answers are not quickly available. This is linked to the notion of productive failure, elaborated by Brown et al. (2016) with reference to the work of Kapur (2008): the deliberate orchestration of challenging learning contexts in which it is not easy to find correct solutions. For example, learners exposed in a managed way to deliberately complex learning contexts may do better than those who are insufficiently challenged. The need for challenging goals is also reinforced by Forsythe and Johnson (2017), given their importance in triggering cognitive and motivational resources. Challenging students constructively to reconsider their position is aligned with self-regulatory notions of constructive friction, as described by Vermunt and Verloop (1999).

Social Cognitive Perspectives

Self-control theory is implicated in how individuals self-monitor as part of self-regulation (Carver & Scheier, 1998), specifically through a negative feedback loop. This loop starts with a perception phase in which individuals evaluate their context and their progress in relation to goals; any discrepancy between the two leads to actions to reduce the discrepancy, resulting in changes (e.g., in strategy use, changing the environment) that are then re-evaluated. This is consistent with social cognitive theory, which stresses the interrelationship between an individual, the environment, and his or her behaviors. The effectiveness of such self-monitoring is fundamental: if no discrepancy is detected, no actions will be instigated. This is also heavily dependent on the nature of goals and the learner’s ability to accurately read the assessment context.

Self-efficacy, the belief in our ability to realize our goals, has a strong impact on assessment feedback behaviors (Bandura, 1997). In order to seek and be receptive to feedback, a degree of overconfidence is needed (Efklides, 2012). Self-efficacy is also linked to beliefs about the malleability of one’s own cognitive resources and about whether ability is fixed or can be developed; this draws on implicit person theory, whereby an individual’s assumptions regarding the malleability of personal attributes (e.g., fixed or growth mindsets; Dweck, 1986) impact goal setting and achievement, and thereby subsequent assessment feedback behaviors (Burnette, O’Boyle, VanEpps, Pollack, & Finkel, 2013). The situational nature of assessment enables students to exhibit both growth and fixed mindsets. Goal orientation and self-efficacy are impacted by learners’ control beliefs, defined in this context as one’s perceived control over assessment events.

Perceived control over assessment impacts assessment feedback skills and has a key effect on learners’ sense of agency and autonomy, which then impacts performance. Agentic control is central: “To be an agent is to influence intentionally one’s functioning and life circumstances” (Bandura, 2006, p. 164). Reeve (2013), in describing agentic engagement of students in learning, goes beyond traditional models of self-regulation which are closely connected to student independence in learning, to argue that an important element of agentic control is how students solicit help and rely on others. A key goal of agentic engagement (Reeve, 2013) is to create motivationally supportive learning environments that involve students in moderating their environments to make them more conducive to learning (e.g., in identifying and seeking out help, building networks, changing aspects of the immediate assessment environment, investing in co-regulation), which is consistent with social cognitive theory.

In exploring student agency and the relationship of perceived control within assessment, self-determination theory (SDT) and control value theory (CVT) are highly relevant. SDT is a composite theory that highlights the interaction between perceptions of autonomy and control, goal orientation/motivations, affect, locus of control (whether an individual feels outcomes are within their control), and expectancy of success (Deci & Ryan, 1995). The nature of this relationship is addressed by Panadero and Alonso-Tapia (2013, p. 557), who state, “The greater the sense of autonomy in the choice of criteria, the greater the student’s motivation to achieve them.” The relationship between autonomy and control is not straightforward, however, given the myriad variables implicated. Beyond autonomy and control, SDT also highlights the importance of relatedness (the need to have meaningful relationships with others). Autonomy, control, relatedness, and perceptions of belonging (feeling connected to others as part of a specific community of learners) impact individuals’ intrinsic and extrinsic motivations and are directly implicated in assessment feedback behaviors. Relatedness is strongly aligned with humanistic perspectives that consider how assessment cultures are set up to enable engagement, emphasizing trust, fairness, dialogism, co-construction (Eva & Regehr, 2013; James, 2014), and psychological safety (Harrison, Könings, Schuwirth, Wass, & van der Vleuten, 2017); the latter is especially important where significant changes in beliefs and approaches are required. Trust in the quality of the source of feedback (source credibility) and in the face validity of assessment procedures impacts learners’ intentions to act on feedback (Boudrias et al., 2014).

Similarly, control value theory of achievement emotions (Pekrun, 2006; Shell & Husman, 2008) highlights the importance of learners’ perceptions of the value of a task and their perceptions of control (e.g., expectancy of successful outcomes, perceived competency) on achievement motivations; these perceptions are mutually dependent. The interaction of these variables impacts a learner’s choice of metacognitive and cognitive strategies, help seeking, and management of one’s environment, which then impact performance, satisfaction, and motivations. Feedback, then, impacts emotions, perceptions of competency, task valuation, and goal orientation. Perceptions of high control are associated with mastery goal orientation and positive affect. Perceptions of a lack of control, tied up with insecurities about one’s ability to be successful in a task, are outlined in self-worth theory (Covington, 2004; Craske, 1988); if great effort results in failure, students’ self-esteem can be seriously damaged. Creating a self-regulatory loop designed to minimize potential threats is central to protecting one’s self-esteem, which in assessment terms can mean not trying hard and not seeking feedback. Such maladaptive behaviors can also demonstrate significant, albeit relatively unproductive, engagement. Not attending to feedback may even be an effective self-regulatory behavior, in the service of defending one’s sense of self.

Low self-efficacy beliefs regarding one’s potential for success with a task, linked to perceptions of low control as exemplified by learned helplessness theory, also prevent students from engaging with a task. Educational implications include how starting points for all students are determined, how to promote low-stakes assessment tasks that give all students opportunities to contribute, and how to avoid early exclusion from the assessment process. With both self-worth and learned helplessness dispositions, how individuals attribute success and failure matters (Weiner, 1985). Attributions relate to how one explains success and failure, and are linked with locus of control (whether control can be exercised by the self or is dependent on others), the stability of the cause (e.g., perceptions of ability to improve, quality of instruction by others), and controllability (an individual’s perception of his or her ability to affect the cause of the outcomes).

Assessment feedback behaviors are heavily impacted by the design of the assessment environment. Importantly, especially from social justice perspectives, assessment design has the power to mediate the role individual difference variables play in affecting assessment outcomes. Put plainly, assessment design can level the playing field by reducing gaps in attainment outcomes between students identified as more disadvantaged in learning than their counterparts (e.g., by low socioeconomic class, disability, ethnicity) (Evans et al., 2019; Schneider & Preckel, 2017). How individuals interpret the requirements of a learning context matters. Sociocultural theories aligned with self-regulatory perspectives can help support understanding of the underpinning processes, and can highlight the interactionist nature of learning contexts (James, 2014). Students’ approaches to learning, whether deep or surface, are impacted by their perceptions of the environment, from a self-regulatory capacity, and by their strategic competence, in terms of their ability to adopt meaning-directed learning approaches (Vermunt & Donche, 2017).

Effective feedback-seekers are internally and externally cue-conscious and navigate learning cultures successfully. James (2014) argues that what matters is how different learning (assessment) cultures facilitate or limit learning possibilities for individuals. This is aligned with a critical pedagogic approach, which in the context of assessment explores who is advantaged and disadvantaged by certain practices, and why, in order to promote more adaptive and universal design (UD)-informed assessment cultures (Waring & Evans, 2015). Bourdieu’s “reflexive sociology” is applicable here in encouraging recognition of one’s biases, beliefs, and assumptions in enacting assessment. To assist the navigation of local learning cultures, Blasco (2015) argues for the importance of explicit learning theory, for example, making the requirements of assessment explicit rather than adapting to accommodate diverse learning styles; this is aligned with UD perspectives and the technical perspective outlined by James (2014). Given that students navigate modules, teams, and departments, each with specific assessment cultures, how well students adapt and cope with differing assessment emphases matters (Burger, 2017). To support students’ development of assessment feedback skills, HE needs to understand and address the complexity of assessment messages that students must navigate as part of their HE experiences.

Socioconstructivist and Sociocultural Perspectives

Socioconstructivist perspectives have placed considerable emphasis on the power of feedback to support student learning (assessment for, and as, learning initiatives) over the last 30 years, focusing on supporting learners’ understandings of the requirements of assessment by making the learning process explicit (Black & Wiliam, 1998). However, overscaffolding of assessment can lead to student dependence rather than the independence intended. Hattie and Timperley (2007) reinforced the potential of feedback to enhance learning, especially if focused on demonstrating to students how to improve through attention to Vygotskian principles of addressing the “zone of proximal development,” by using feedback to enable individuals to close the gap between current and desired achievement. However, DeNisi and Kluger (2000) evidenced the messiness of feedback: it does not always lead to enhanced performance, because it is often received at the personal level rather than at the task level at which it was aimed, and may have limited feedforward potential.

Assessment feedback skills are dependent on an individual’s social, cultural, and political capital (Bourdieu & Passeron, 1990; Butin, 2005), as well as boundary-crossing skills (Wenger, 2000, 2009). Bourdieu’s notions of habitus (socialized norms or tendencies that guide behavior and thinking) and field, interpreted as the specific context (e.g., higher education, discipline), are important. Cultural capital acts as a form of currency which enables entry to, and progression within, established ways of working and reinforces certain values and norms. Making those practices transparent and providing training in navigating such cultures is essential in participatory inclusive assessment communities. Many of the potential barriers to access are obscure and unknown to those with accumulated and valued cultural capital, which then serves to reinforce hierarchies and established ways of knowing and being within communities. The notion of fields, akin to Wenger’s communities of practice, constitutes different networks or groups that individuals inhabit and where the currency of cultural capital may vary (Bourdieu & Passeron, 1977). From a sociocultural perspective, how individuals navigate across different fields/groups, knowing which cache to take from each one, and knowing what to leave behind determines their ability to operate successfully in a specific context; situational learning theory is important in this respect. In supporting the development of assessment feedback behaviors, understanding the situational nature of assessment is central (Dayal & Lingam, 2015; Lave & Wenger, 1991; Wenger, 2009).

Using an Integrated Framework to Support the Development of Assessment Feedback Skills

The proliferation of research-informed guidance on assessment and feedback over the last 20 years has led to clear general guidelines and principles regarding what constitutes good assessment practices (Carless & Boud, 2018; Evans, 2013; Nicol & MacFarlane-Dick, 2006). Translating such principles into practice in an inclusive way, mindful of the local cultures in which assessment plays out, and considering individual differences and their effects on understandings, constitutes the art of assessment. Understanding the makeup of the student group, disciplinary needs, and situational demands in maximizing engagement with assessment feedback is essential (Orsmond & Merry, 2017; Waring & Evans, 2015).

The feedback button needs resetting. The current obsession with the feedback recipience process (what constitutes good feedback and how to get students to act on it) needs recalibrating to enable a more appropriate focus on how assessment design enables immersive experiences in which students can be fully conversant with all aspects of the assessment process. This comes from the “coming to know” that Sadler (2013) describes, and from the exploration by Evans (2016) of a participatory, inclusive, and integrative framework whereby the assessment feedback process is designed around inducting students into all aspects of assessment practices, drawing on an integrated theoretical framework that brings together self-regulation, individual differences, universal design, and agentic engagement principles. Building on the work of Sadler (2017), Carless and Boud (2018) identify core dimensions that need attention in supporting the development of assessment feedback skills: students’ appreciation of the value of feedback, their role in the process, the development of their ability to evaluate the quality of their own work, managing affect, and taking action.

Evans (2016) argues for a more proactive and agentic approach to the development of assessment feedback competencies that is centered on inclusive practices and student and lecturer belief systems in the adoption of an integrative assessment approach. This approach is predicated on promoting student ownership of assessment while acknowledging that lecturers need to have full ownership of assessment themselves if they are to work with students to fully induct them into the academy.

Drawing on over 4,000 research studies examined in her systematic literature review on assessment feedback (Evans, 2013), the EAT (Evans Assessment Tool) framework incorporates 12 interconnected areas of practice designed to support students in “knowing assessment” in order to self-drive it. Underpinned by an inclusive participatory pedagogy, the framework highlights the importance of transparency to facilitate shared understandings of process and standards. It promotes student partnership and agentic engagement in all assessment decisions and shines a spotlight on the importance of developing self-regulatory skills in enabling engagement. Critically, the framework demonstrates the interrelationships between all assessment dimensions; for example, if individuals have no conception of what quality looks like, they cannot judge the quality of their own work. Fundamentally, EAT seriously questions myopic approaches to assessment solutions through an emphasis on a holistic approach. In addressing questions about students’ use of feedback, EAT asks about the placement of feedback, whether a student has sufficient knowledge to use the feedback, whether there are shared conceptions regarding feedback’s purpose, the goals the learner is seeking to attain, the inherent quality of feedback, and its perceived usefulness. Figure 2 shows a student version of the framework, in which students are asked to self-score their relative engagement in the 12 dimensions (1 = little effort to 5 = maximum effort); engagement is also dependent on affordances provided by the curriculum.

Figure 2. The EAT Framework Student Scoring Version (Evans, 2016).

In addressing assessment literacy, the focus is on understanding what good looks like and what assessment criteria mean. To enable shared understandings of assessment criteria, co-construction of assessment criteria is advocated. But students also need to know the relevance of criteria to different parts of their assessment journeys. As part of a constructivist approach, the purpose and behaviors required at each step in the assessment process, the nature of the relationships among the different steps, and the logics underpinning how things are done need explication (Blasco, 2015). Students require a route map to create their own mental map of how all elements of assessment fit together, enabling them to use their own internal navigation system to work out the most efficient route by establishing clear goals (Chan & Sherbino, 2015). Assessment literacy is about clarifying roles and responsibilities in the assessment process and addressing the ingrained practice of student as receiver of feedback rather than as active initiator of feedback. In examining assessment literacy understandings, it is clear that lecturers must consider how judgments are made, and evaluate the integrity of assessment criteria as aligned with the notion of the calibrated academic (Sadler, 2017).

The integrated framework requires interrogation of the rules and language of the discipline and how students are inducted, and induct themselves, into assessment communities. In exploring assessment feedback, EAT requires explicit discussion around the purposes, positioning, and sources of feedback to enable individuals to best develop internal feedback processes. It is about providing early opportunities for students to test their understandings, which may be engineered through participatory peer feedback conducted to ensure authenticity, transparency, and fairness, while preserving individual autonomy; this requires training. It is through multiple opportunities, challenge initiatives, modeling of alternative approaches, and frequent authentic opportunities to practice that students’ self-assessment capacity can be developed. The integrity of feedback, that is, the extent to which it is consistent, aligned with assessment criteria and desired learning outcomes, and consistently interpreted at the intra- and inter-level, is open to scrutiny. Acknowledging the interconnectivity of all the EAT dimensions in impacting the efficacy of feedback is critical.

To provide the challenging environment needed to support self-understandings and assessment feedback capacity, assessment design is crucial. Assessment design requires HE to provide opportunities for students to calibrate standards through awareness and ownership of assessment conventions, through meaningful tests of competence that promote deep approaches to learning, through adaptive environments that provide all students with equal opportunities to be successful, by providing opportunities that promote student agency and autonomy in assessment, and by encouraging shared understandings and development of curriculum in the moment and for the future. Assessment must support students in applying self-regulatory processes in the workplace and provide workplace-linked assessments within HE to bring the outside in (Bass, 2012; Das, 2012; Dawson et al., 2019).

In supporting students’ development of assessment feedback skills, attending to metacognitive, cognitive, and affective regulation strategies is essential and needs to be incorporated into assessment design. Being taught when, why, how, and which strategy to use is effective in supporting students’ planning of assessment tasks. Metacognitive strategy instruction coupled with motivational strategy development, including a focus on task value, is effective in supporting students’ assessment feedback behaviors (Dignath, Büttner, & Langfeldt, 2008; Donker, de Boer, Kostons, Dignath-van Ewijk, & van der Werf, 2014). Addressing monitoring and evaluation skills impacts students’ self-regulation capacity (Hawe & Dixon, 2017; Panadero & Alonso-Tapia, 2013).

To support the development of students’ assessment feedback skills, a developmental approach is advocated, one that facilitates students’ acquisition of skills through observation and modeling, repeated practice, and feedback. Students need opportunities to practice self-assessment so that they can internalize strategies, improve their capacity to realize a task, and have opportunities to improve (Orsmond & Merry, 2013; Panadero & Alonso-Tapia, 2013). An early focus on goal setting, explicit exploration of feedback, and consideration of the emotions associated with challenging feedback is promoted (Forsythe & Johnson, 2017). Similarly, Gross (2001), with an emphasis on emotional regulation of feedback, highlights the importance of working with students to help them explore uncomfortable feedback, the different meanings attached to assessment feedback practices, and the self-management of their responses.

Activities that promote students’ self-analysis, such as problem-solving, simulations, inquiry-based learning, critical thinking, and project- and product-based learning rooted in authentic learning experiences, have been found to be effective, especially where students are taking the lead in demonstrating how to improve (Bass, 2012; Garcia, 2014; Yu & Hu, 2017). Modeling self-regulation processes is helpful in exposing students to different strategic approaches and enabling them to practice and develop these strategies to suit specific contexts (Bembenutty, White, & Vélez, 2015). Students’ level of engagement in authentic tasks is critical in providing them with repeated exposure to refining self-regulatory processes for themselves. Even if all principles of effective assessment feedback are implemented, students may not make good use of them (Evans, 2013). Students need support in utilizing cues. Key assessment supports need to be clearly signposted (Dargusch, Harris, Reid-Searl, & Taylor, 2017).

Assessment literacy tasks that encourage students to interrogate the meaning of assessment, such as peer feedback, have an impact on attainment (Schneider & Preckel, 2017). It is thought that it is not the peer activity itself that matters, but engagement in specific activities that support students in coming to understand their own determination of quality; this is essential if they are to accurately judge the quality of their own work. Such activities can include, for example: students personalizing and creating their own criteria for each piece of work (Taras, 2015); being trained in using, triangulating, and making sense of feedback, including its emotional dimension (Forsythe & Johnson, 2017); reviewing work of varying quality to support students’ understanding of quality and how it can be achieved in different ways (Sadler, 2010, 2013); acting as reviewers of others (Nicol, Thomson, & Breslin, 2014); self-assessing and feeding back to others as part of summative assessment and evaluative processes (Boud, 2000; Boud, Lawson, & Thompson, 2013; Deeley, 2014); working with assessment to do the noticing, the thinking about repair and modification, and the generation of ways to improve (Sadler, 2013); leading discussions as part of dialogic exchange (Carless, Salter, Yang, & Lam, 2011); designing assessment with lecturers (Speirs, Riley, & McCabe, 2017); and teaching and researching with peers and lecturers (Evans et al., 2018; Scott, Moxham, & Rutherford, 2013). The benefits of repeated testing for learning outcomes are well established (Brown et al., 2015). Repeated testing is valuable in enabling students to recalibrate their positions around what they do and do not know, to inform where they need to go, and to support the development of important critical self-reflection skills (Hanstedt, 2015).

Training in the use of volitional strategies impacts students’ assessment feedback competencies and can reduce cognitive load (Jakhelln, 2011). Assessment feedback is emotionally loaded; emotions impact cognitive processing and metacognitive functioning (Evans, 2013). Ignoring the emotional dimension of assessment feedback in design and delivery is negligent. Fear permeates both feedback giving (fear of not knowing enough and being exposed, and fear of upsetting others) and feedback receipt (impact on identity and feelings of competence, especially when stakes are high). Drawing on control-value theory (Pekrun, 2006), strategies to support emotional self-regulation include (a) supporting students’ sense of self by engaging them fully in assessment decisions and ensuring clarity about how all elements of assessment fit together; (b) attending to value (e.g., ensuring assessment is authentic and relevant); (c) supporting autonomy and cooperation through assessment design that allows students to take the lead and trains them in how to support others; (d) working with students to clarify goals and expectations; (e) managing constructive feedback; and (f) explicitly addressing students’ appraisals of, and emotions around, their own performance. In attending to these strategies, Ludvik (2018) highlights the importance of coaching students to support their reflective skills and of providing them with ongoing opportunities to regulate their attention and emotions, in a more interwoven approach designed to overcome current deficits in assessment practices. To counter fear in assessment, Chapman (2017) argues for the value of early low-stakes assessment, assessment design favoring more frequent assessment, and mentoring to address isolation and belonging issues.

Implications for Research and Practice

Developing students’ assessment feedback skills is a priority for HE.

Currently, much assessment practice aimed at being transformational falls short because of entrenched personal and collective beliefs which encourage adherence to existing organizational paradigms (James, 2014; Taras, 2015). But as noted by Harrison et al. (2017), changing assessment cultures is not enough; a climate of psychological safety for students and lecturers needs to be created to facilitate the incorporation of varying assessment perceptions to enable agentic engagement by all.

Developing students’ self-regulatory assessment feedback skills is important in enhancing immediate performance and lifelong learning competence. A focus on skills development has the capacity to address differential student learning outcomes. It is therefore not an option but a necessity. The issue is one of implementation, which requires a more nuanced understanding of the interaction between skills development and individual differences. Different groups of students do not benefit equally from self-regulated learning assessment interventions (Dörrenbächer & Perels, 2016).

To facilitate effective assessment learning communities, organizational and individual beliefs need to align. Beliefs and conceptions about the nature of knowledge frame how learning experiences are designed and interpreted, leading to entrenchment of positions. To support sustainable assessment practices that build students’ self-regulatory capacity, and particularly their self-assessment (self-monitoring and self-evaluative judgment), more attention needs to focus on the development of shared principles underpinning assessment design (Evans, 2016). Seeking congruence in student, lecturer, and organizational beliefs and values must be a priority if students and lecturers are to work in partnership to develop essential assessment feedback skills (Evans, 2016, 2018).

Assessment feedback is not a rational process and is affected by individual differences and contextual affordances. Assessment design matters. Efforts need to be directed toward supporting students in internalizing feedback for themselves, rather than focusing on students’ relative lack of utilization of external feedback (Sadler, 2013, 2017). While feedback can be extremely powerful if targeted and placed appropriately, student dependence on feedback from others represents a deficit assessment model, and one that needs to be addressed if HE assessment practices are to move forward. More attention must be given to identifying appropriate challenges for learners to continuously test their own understanding, mindful of managing emotional issues intrinsically tied to assessment feedback.

Emphasis should be on how students are supported in understanding all elements of the assessment process and their valuing of the assessment, and not purely on whether they use the feedback (Sadler, 1989). Students need multiple opportunities to calibrate where they are in the assessment process, with feedback attuned to support such developmental understanding. Assessment feedback is complex and requires an integrated approach that engages students as designers of their own assessment journeys (Evans, 2016; Fielding & Regehr, 2017). In exploring students’ assessment feedback skills, the links between self-efficacy, student goal orientation, and affect are especially strong and implicated in academic success and social-emotional well-being (van der Zanden, Denessen, Cillessen, & Meijer, 2018). Given their impact on students’ learning outcomes, understanding and use of self-regulatory approaches to assessment can support development of key skills, including important relational skills, building social connections, and feelings of belonging (Bliuc, Ellis, Goodyear, & Hendres, 2011; van der Zanden et al., 2018).

We need to know more about how different constructs combine to impact students’ approaches to assessment and how to develop high-quality metacognitive self-regulatory skills, as well as the role individual differences play in this. Students may use varying combinations of self-regulatory approaches to achieve results, which may or may not ultimately be different (Dinsmore, 2017). Making better use of institutional data on students’ navigation of assessment—what they do and do not do with assessment opportunities—is important (Dayal & Lingam, 2015). Exploring the intersectionality of student characteristics and assessment practices, and teaching students how to make use of data to calibrate where they are and where they need to go, has considerable power.

In addressing the self-regulatory knowledge-application gap in assessment feedback, the expectations of students as drivers of assessment need to be made explicit in program design (Dargusch et al., 2017), supported by a coordinated, integrated approach to assessment at the institutional level (Evans & Bunescu, 2020). High-quality training within the disciplines is required to support the design and development of the high-level self-regulatory assessment feedback competencies demanded by the fourth industrial age; meaningful assessment is critical to this endeavor (Rogers-Shaw, Carr-Chellman, & Choi, 2018).

Assessment feedback is complex, and cultivating students’ assessment feedback skills within the HE landscape demands a correspondingly sophisticated approach (Evans, 2013). An integrated approach that brings together multiple theoretical perspectives across paradigms and disciplines is advocated to address this.

Further Reading

  • Boekaerts, M., Pintrich, P. R., & Zeidner, M. (Eds.). (2000). Handbook of self-regulation. San Diego, CA: Academic Press.
  • Evans, C., & Waring, M. (2009). The place of cognitive style in pedagogy: Realizing potential in practice. In L. F. Zhang & R. J. Sternberg (Eds.), Perspectives on the nature of intellectual styles (pp. 169–208). New York, NY: Springer.
  • Joughin, G. R. (2009). Assessment, learning and judgement in higher education: A critical review. In G. R. Joughin (Ed.), Assessment, learning and judgement in higher education (pp. 13–27). Dordrecht, The Netherlands: Springer.
  • Karagiannopoulou, E., & Christodoulides, P. (2005). The impact of Greek university students’ perceptions of their learning environment on approaches to studying and academic outcomes. International Journal of Educational Research, 43(6), 329–350.
  • Kirschner, P. A. (2002). Can we support CSCL? Educational, social and technological affordances for learning. In P. Kirschner (Ed.), Three worlds of CSCL: Can we support CSCL (pp. 7–47). Heerlen, The Netherlands: Open University of the Netherlands.
  • Koole, S. L. (2009). The psychology of emotion regulation: An integrative review. Cognition and Emotion, 23(1), 4–41.
  • Maitlis, S., & Christianson, M. (2014). Sensemaking in organizations: Taking stock and moving forward. The Academy of Management Annals, 8(1), 57–125.
  • O’Donovan, B. (2017). How student beliefs about knowledge and knowing influence their satisfaction with assessment and feedback. Higher Education, 74(4), 617–633.
  • O’Donovan, B., Rust, C., & Price, M. (2016). A scholarly approach to solving the feedback dilemma in practice. Assessment & Evaluation in Higher Education, 41(6), 938–949.
  • Ramaprasad, A. (1983). On the definition of feedback. Behavioral Science, 28(1), 4–13.
  • Sadler, D. R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18, 119–144.
  • Scott, D., & Evans, C. (2015). The elements of a learning environment. In D. Scott & E. Hargreaves (Eds.), The Sage handbook of learning (pp. 189–202). London, UK: Sage.
  • Sweller, J. (1994). Cognitive load theory, learning difficulty, and instructional design. Learning and Instruction, 4(4), 295–312.
  • Tai, J., Ajjawi, R., Boud, D., Dawson, P., & Panadero, E. (2018). Developing evaluative judgement: Enabling students to make decisions about the quality of work. Higher Education, 76, 467–481.
  • Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press.
  • Zimmerman, B. J. (2000). Attaining self-regulation: A social cognitive perspective. In M. Boekaerts, P. R. Pintrich, & M. Zeidner (Eds.), Handbook of self-regulation (1st ed., pp. 13–39). San Diego, CA: Academic Press.

References

  • Bandura, A. (1997). Self-efficacy: The exercise of control. New York, NY: W. H. Freeman.
  • Bandura, A. (2001). Social cognitive theory: An agentic perspective. Annual Review of Psychology, 52, 1–26.
  • Bandura, A. (2006). Guide for creating self-efficacy scales. In F. Pajares & T. Urdan (Eds.), Self-efficacy beliefs of adolescents (Vol. 5, pp. 307–337). Greenwich, CT: Information Age.
  • Barnett, R. (2011). Learning about learning: A conundrum and a possible resolution. London Review of Education, 9(1), 5–13.
  • Bartlett, F. C. (1932). Remembering: A study in experimental and social psychology. Cambridge, UK: Cambridge University Press.
  • Bass, R. (2012). Disrupting ourselves: The problem of learning in higher education. EDUCAUSE Review, 47(2), 23–33.
  • Bembenutty, H., White, M. C., & Vélez, M. R. (2015). Self-regulated learning and development in teacher preparation training. In H. Bembenutty, M. C. White, & M. R. Vélez (Eds.), Developing self-regulation of learning and teaching skills among teacher candidates (pp. 9–28). New York, NY: Springer.
  • Bjork, R. A. (1994). Memory and meta-memory considerations in the training of human beings. In J. Metcalfe & A. Shimamura (Eds.), Metacognition: Knowing about knowing (1st ed., pp. 185–205). Cambridge, MA: MIT Press.
  • Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy & Practice, 5(1), 7–74.
  • Bliuc, A.-M., Ellis, R. A., Goodyear, P., & Hendres, D. M. (2011). Understanding student learning in context: Relationships between university students’ social identity, approaches to learning, and academic performance. European Journal of Psychology of Education, 26, 417–433.
  • Boud, D. (2000). Sustainable assessment: Rethinking assessment for the learning society. Studies in Continuing Education, 22(2), 151–167.
  • Boud, D. (2010). Relocating reflection in the context of practice. In H. Bradbury, N. Frost, S. Kilminster, & M. Zukas (Eds.), Beyond reflective practice: New approaches to professional lifelong learning (pp. 25–36). London, UK: Routledge.
  • Boud, D., Lawson, R., & Thompson, D. G. (2013). Does student engagement in self-assessment calibrate their judgement over time? Assessment & Evaluation in Higher Education, 38(8), 941–956.
  • Boud, D., & Molloy, E. (2013). Rethinking models of feedback for learning: The challenge of design. Assessment & Evaluation in Higher Education, 38(6), 698–712.
  • Boud, D., & Soler, R. (2016). Sustainable assessment revisited. Assessment & Evaluation in Higher Education, 41(3), 400–413.
  • Boudrias, J.-S., Bernaud, J.-L., & Plunier, P. (2014). Candidates’ integration of individual psychological assessment feedback. Journal of Managerial Psychology, 29(3), 341–359. doi: 10.1108/JMP-01-2012-0016
  • Boudieu, P., & Passeron, J. C. (1977). Reproduction in education, society, and culture. Beverly Hills, CA: Sage.
  • Bourdieu, P., & Passeron, J.-C. (1990). Reproduction in education, society and culture (2nd ed.). Translated by R. Nice. London, UK: SAGE.
  • Braddy, P. W., Sturm, R. E., Atwater, L. E., Smither, J. W., & Fleenor, J. W. (2013). Validating the feedback orientation scale in a leadership development context. Group & Organization Management, 38(6), 690–716.
  • Brown, G. A., Bice, M. R., Shaw, B. S., & Shaw, I. (2015). Online quizzes promote inconsistent improvements on in-class test performance in introductory anatomy and physiology. Advances in Physiology Education, 39(2), 63–66.
  • Brown, G. T. L., Peterson, E. R., & Yao, E. S. (2016). Student conceptions of feedback: Impact on self-regulation, self-efficacy, and academic achievement. British Journal of Educational Psychology, 86(4), 606–629.
  • Burger, R. (2017). Student perceptions of the fairness of grading procedures: A multilevel investigation of the role of the academic environment. Higher Education, 74(2), 301–320.
  • Burnette, J. L., O’Boyle, E. H., VanEpps, E. M., Pollack, J. M., & Finkel, E. J. (2013). Mind-sets matter: A meta-analytic review of implicit theories and self-regulation. Psychological Bulletin, 139(3), 655–701.
  • Butin, D. W. (Ed.). (2005). Service-learning in higher education: Critical issues and directions. New York, NY: Palgrave Macmillan.
  • Carless, D. (2015). Excellence in university assessment: Learning from award-winning practice. London, UK: Routledge.
  • Carless, D. (2017). Scaling up assessment for learning: Progress and prospects. In D. Carless, S. M. Bridges, C. K. Y. Chan, & R. Glofcheski (Eds.), Scaling up assessment for learning in higher education (pp. 3–17). Singapore: Springer.
  • Carless, D., & Boud, D. (2018). The development of student feedback literacy: Enabling uptake of feedback. Assessment & Evaluation in Higher Education, 43(8), 1315–1325.
  • Carless, D., Salter, D., Yang, M., & Lam, J. (2011). Developing sustainable feedback practices. Studies in Higher Education, 36(4), 395–407.
  • Carver, C. S., & Scheier, M. F. (1998). On the self-regulation of behavior. New York, NY: Cambridge University Press.
  • Chan, T., & Sherbino, J. (2015). The McMaster modular assessment program (McMAP): A theoretically grounded work-based assessment system for an emergency medicine residency program. Academic Medicine: Journal of the Association of American Medical Colleges, 90(7), 900–905.
  • Chapman, A. (2017). Using the assessment process to overcome imposter syndrome in mature students. Journal of Further and Higher Education, 41(2), 112–119.
  • Christie, M. F., Grainger, P., Dahlgren, R., Call, K., Heck, D., & Simon, S. (2015). Improving the quality of assessment grading tools in Master of Education courses: A comparative case study in the scholarship of teaching and learning. Journal of the Scholarship of Teaching and Learning, 15(5), 22–35.
  • Covington, M. V. (2004). Self-worth theory goes to college: Or do our motivation theories motivate? In D. M. McInerney & S. Van Etten (Eds.), Big theories revisited (pp. 91–114). Greenwich, CT: Information Age.
  • Craske, M.-L. (1988). Learned helplessness, self-worth motivation and attribution retraining for primary school children. British Journal of Educational Psychology, 58(2), 152–164.
  • Dargusch, J., Harris, L. R., Reid-Searl, K., & Taylor, B. A. (2017). Creating first-year assessment support: Lecturer perspectives and student access. Distance Education, 38(1), 106–122.
  • Das, S. (2012). On two metaphors for pedagogy and creativity in the digital era: Liquid and solid learning. Innovations in Education & Teaching International, 49(2), 183–193.
  • Dawson, P., Henderson, M., Mahoney, P., Phillips, M., Ryan, T., Boud, D., & Molloy, E. (2019). What makes for effective feedback: Staff and student perspectives. Assessment & Evaluation in Higher Education, 44(1), 25–36.
  • Dayal, H. C., & Lingam, G. I. (2015). Fijian teachers’ conceptions of assessment. Australian Journal of Teacher Education, 40(8), 43–58.
  • Deci, E. L., & Ryan, R. M. (1995). Human autonomy: The basis for true self-esteem. In M. H. Kernis (Ed.), Efficacy, agency, and self-esteem (pp. 31–49). New York, NY: Plenum Press.
  • Deeley, S. J. (2014). Summative co-assessment: A deep learning approach to enhancing employability skills and attributes. Active Learning in Higher Education, 15(1), 39–51.
  • DeNisi, A. S., & Kluger, A. N. (2000). Feedback effectiveness: Can 360-degree appraisals be improved? Academy of Management Executive, 14(1), 129–139.
  • Dignath, C., Büettner, G., & Langfeldt, H.-P. (2008). How can primary school students learn self-regulated learning strategies most effectively? A meta-analysis on self-regulation training programmes. Educational Research Review, 3(2), 101–129.
  • Dinsmore, D. L. (2017). Examining the ontological and epistemic assumptions of research on metacognition, self-regulation and self-regulated learning. Educational Psychology: An International Journal of Experimental Educational Psychology, 37(9), 1125–1153.
  • Donker, A. S., de Boer, H., Kostons, D., Dignath-van Ewijk, C. C., & van der Werf, M. P. C. (2014). Effectiveness of learning strategy instruction on academic performance: A meta-analysis. Educational Research Review, 11, 1–26.
  • Dweck, C. S. (1986). Motivational processes affecting learning. American Psychologist, 41(10), 1040–1048.
  • Eva, K. W., & Regehr, G. (2011). Exploring the divergence between self-assessment and self-monitoring. Advances in Health Sciences Education: Theory and Practice, 16(3), 311–329.
  • Eva, K. W., & Regehr, G. (2013). Effective feedback for maintenance of competence: From data delivery to trusting dialogues. Canadian Medical Association Journal, 185(6), 463–464.
  • Evans, C. (2012, August). The emotional dimension of feedback. Linking multiple perspectives on assessment (European Association for Research on Learning and Instruction Conference, EARLI SIG 1 meeting: Assessment and Evaluation). Brussels, Belgium.
  • Evans, C. (2013). Making sense of assessment feedback in higher education. Review of Educational Research, 83(1), 70–120.
  • Evans, C. (2016). Enhancing assessment feedback practice in higher education: The EAT framework. Southampton, UK: University of Southampton.
  • Evans, C. (2018). A transformative approach to assessment practices using the EAT framework presentation. Small scale projects in experimental innovation. Southampton, UK: University of Southampton, Higher Education Funding Council for England (HEFCE).
  • Evans, C., & Bunescu, L. (Eds.). (2020, March). Student assessment: Thematic peer group report (Learning and Teaching Paper No. 10). Brussels, Belgium: European University Association.
  • Evans, C., Kandiko Howson, C., & Forsythe, A. (2018). Making sense of learning gain in higher education. Higher Education Pedagogies, 3(1), 1–45.
  • Evans, C., & Waring, M. (2011a). Student teacher assessment feedback preferences: The influence of cognitive styles and gender. Learning and Individual Differences, 21(3), 271–280.
  • Evans, C., & Waring, M. (2011b). Exploring students’ perceptions of feedback in relation to cognitive styles and culture. Research Papers in Education, 26(2), 171–190.
  • Evans, C., Zhu, X., Winstone, N., Balloo, K., Hughes, A., & Bright, C. (2019). Maximising student success through the development of self-regulation (Addressing Barriers to Student Success, Final Report No. L16). Southampton, UK: University of Southampton, Office for Students.
  • Fielding, D. W., & Regehr, G. (2017). A call for an integrated program of assessment. American Journal of Pharmaceutical Education, 81(4), 1–11.
  • Forsythe, A., & Jellicoe, M. (2018). Predicting gainful learning in higher education: A goal-orientation approach. Higher Education Pedagogies, 3(1), 103–117.
  • Forsythe, A., & Johnson, S. (2017). Thanks, but no-thanks for the feedback. Assessment & Evaluation in Higher Education, 42(6), 850–859.
  • Friedlander, M. J., Andrews, L., Armstrong, E. G., Aschenbrenner, C., Kass, J. S., Ogden, P., … Viggiano, T. R. (2011). What can medical education learn from the neurobiology of learning? Academic Medicine: Journal of the Association of American Medical Colleges, 86(4), 415–420.
  • Gross, J. J. (2001). Emotion regulation in adulthood: Timing is everything. Current Directions in Psychological Science, 10(6), 214–219.
  • Hadwin, A., Järvelä, S., & Miller, M. (2018). Self-regulation, co-regulation, and shared regulation in collaborative learning environments. In D. H. Schunk & J. A. Greene (Eds.), Handbook of self-regulation of learning and performance (2nd ed., pp. 83–106). New York, NY: Routledge.
  • Hanstedt, P. (2015). Reconsidering “whole person” education: What do we really want for our students—and how do we get them there? Plenary address presented at the 7th Annual Conference on Higher Education Pedagogy, Virginia Polytechnic Institute and State University, Blacksburg, VA.
  • Harrison, C. J., Könings, K. D., Schuwirth, L. W. T., Wass, V., & van der Vleuten, C. P. M. (2017). Changing the culture of assessment: The dominance of the summative assessment paradigm. BMC Medical Education, 17(1), 73–87.
  • Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112.
  • Hawe, E., & Dixon, H. (2017). Assessment for learning: A catalyst for student self-regulation. Assessment & Evaluation in Higher Education, 42(8), 1181–1192.
  • Hughes, R. L. (2011). Developing and assessing college student teamwork skills. New Directions for Institutional Research.
  • Hughes, G., Smith, H., & Creese, B. (2015). Not seeing the wood for the trees: Developing a feedback analysis tool to explore feed forward in modularised programmes. Assessment & Evaluation in Higher Education, 40(8), 1079–1094.
  • Jakhelln, R. (2011). Early career teachers’ emotional experiences and development: A Norwegian case study. Professional Development in Education, 37(2), 275–290.
  • James, D. (2014). Investigating the curriculum through assessment practice in higher education: The value of a “learning cultures” approach. Higher Education, 67(2), 155–169.
  • Kapur, M. (2008). Productive failure. Cognition and Instruction, 26(3), 379–424.
  • Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. New York, NY: Cambridge University Press.
  • Marton, F., Dall’Alba, G., & Beaty, E. (1993). Conceptions of learning. International Journal of Educational Research, 19, 277–300.
  • McCune, V., & Entwistle, N. (2011). Cultivating the disposition to understand in 21st century university education. Learning and Individual Differences, 21(3), 303–310.
  • McGinnis, D. (2018, December 20). What is the fourth industrial revolution? [Web log post].
  • Montenegro, A. (2017). Understanding the concept of student agentic engagement for learning. Colombian Applied Linguistics Journal, 19(1), 117–128.
  • Nash, R. A., & Winstone, N. E. (2017). Responsibility-sharing in the giving and receiving of assessment feedback. Frontiers in Psychology, 8, 1519.
  • Nicol, D. (2010). From monologue to dialogue: Improving written feedback processes in mass higher education. Assessment & Evaluation in Higher Education, 35(5), 501–517.
  • Nicol, D., & MacFarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199–218.
  • Nicol, D., Thomson, A., & Breslin, C. (2014). Rethinking feedback practices in higher education: A peer review perspective. Assessment & Evaluation in Higher Education, 39(1), 102–122.
  • Orsmond, P., & Merry, S. (2013). The importance of self-assessment in students’ use of tutors’ feedback: A qualitative study of high and non-high achieving biology undergraduates. Assessment & Evaluation in Higher Education, 38(6), 737–753.
  • Orsmond, P., & Merry, S. (2017). Tutors’ assessment practices and students’ situated learning in higher education: Chalk and cheese. Assessment & Evaluation in Higher Education, 42(2), 289–303.
  • Panadero, E. (2017). A review of self-regulated learning: Six models and four directions for research. Frontiers in Psychology, 8, 422.
  • Panadero, E., & Alonso-Tapia, J. (2013). Self-assessment: Theoretical and practical connotations. When it happens, how is it acquired and what to do to develop it in our students. Electronic Journal of Research in Educational Psychology, 11(2), 551–576.
  • Peeters, J., De Backer, F., Kindekens, A., Triquet, K., & Lombaerts, K. (2016). Teacher differences in promoting students’ self-regulated learning: Exploring the role of student characteristics. Learning and Individual Differences, 52(C), 88–96.
  • Pekrun, R. (2006). The control-value theory of achievement emotions: Assumptions, corollaries, and implications for educational research and practice. Educational Psychology Review, 18(4), 315–341.
  • Pintrich, P. R. (2004). A conceptual framework for assessing motivation and self-regulated learning in college students. Educational Psychology Review, 16(4), 385–407.
  • Rogers-Shaw, C., Carr-Chellman, D. J., & Choi, J. (2018). Universal design for learning. Adult Learning, 29(1), 20–32.
  • Sadler, D. R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18, 119–144.
  • Sadler, D. R. (2010). Beyond feedback: Developing student capability in complex appraisal. Assessment & Evaluation in Higher Education, 35(5), 535–550.
  • Sadler, D. R. (2013). Opening up feedback: Teaching learners to see. In S. Merry, M. Price, D. Carless, & M. Taras (Eds.), Reconceptualising feedback in higher education: Developing dialogue with students (pp. 54–63). London, UK: Routledge.
  • Sadler, D. R. (2017). Academic achievement standards and quality assurance. Quality in Higher Education, 23(2), 81–99.
  • Schneider, M., & Preckel, F. (2017). Variables associated with achievement in higher education: A systematic review of meta-analyses. Psychological Bulletin, 143(6), 565–600.
  • Schutz, P. A., Hong, J. Y., Cross, D. I., & Osbon, J. N. (2006). Reflections on investigating emotion in educational activity settings. Educational Psychology Review, 18, 343–360.
  • Scott, D., Hughes, G., Evans, C., Burke, P. J., Walter, C., & Watson, D. (2014). Learning transitions in higher education. London, UK: Palgrave Macmillan.
  • Seifert, T. (2004). Understanding student motivation. Educational Research, 46(2), 137–149.
  • Seufert, T. (2018). The interplay between self-regulation in learning and cognitive load. Educational Research Review, 24, 116–129.
  • Shell, D. F., & Husman, J. (2008). Control, motivation, affect, and strategic self-regulation in the college classroom: A multidimensional phenomenon. Journal of Educational Psychology, 100(2), 443–459.
  • Speirs, N. M., Riley, S. C., & McCabe, G. (2017). Student-led, individually-created courses: Using structured reflection within experiential learning to enable widening participation students’ transitions through and beyond higher education. Journal of Perspectives in Applied Academic Practice, 5(2), 51–57.
  • Sweller, J., Ayres, P., & Kalyuga, S. (2011). Cognitive load theory. New York, NY: Springer.
  • Taras, M. (2015). Student self-assessment: What we have learned and what are the challenges. RELIEVE: E-Journal of Educational Research, Assessment and Evaluation, 21(1), 1–14.
  • van der Zanden, P. J. A. C., Denessen, E., Cillessen, A. H. N., & Meijer, P. C. (2018). Patterns of success: First-year student success in multiple domains. Studies in Higher Education, 44(11), 2081–2095.
  • van Heerden, M., Clarence, S., & Bharuthram, S. (2017). What lies beneath: Exploring the deeper purposes of feedback on student writing through considering disciplinary knowledge and knowers. Assessment & Evaluation in Higher Education, 42(6), 967–977.
  • Vermunt, J. D., & Donche, V. (2017). A learning patterns perspective on student learning in higher education: State of the art and moving forward. Educational Psychology Review, 29(2), 269–299.
  • Vermunt, J. D., & Verloop, N. (1999). Congruence and friction between learning and teaching. Learning and Instruction, 9(3), 257–280.
  • Waring, M., & Evans, C. (2015). Understanding pedagogy: Developing a critical approach to teaching and learning. London, UK: Routledge.
  • Wegerif, R. (2015). Technology and teaching thinking: Why a dialogic approach is needed for the twenty-first century. In R. Wegerif, L. Li, & J. C. Kaufman (Eds.), The Routledge international handbook of research on teaching thinking (pp. 427–440). London, UK: Routledge.
  • Weick, K. E. (1995). Sensemaking in organizations (1st ed.). Thousand Oaks, CA: Sage.
  • Weiner, B. (1985). An attributional theory of achievement motivation and emotion. Psychological Review, 92(4), 548–573.
  • Wenger, E. (2000). Communities of practice and social learning systems. Organization, 7(2), 225–246.
  • Wenger, E. (2009). A social theory of learning. In K. Illeris (Ed.), Contemporary theories of learning: Learning theorists … in their own words (1st ed., pp. 209–218). London, UK: Routledge.
  • Winstone, N. E., Nash, R. A., Parker, M., & Rowntree, J. (2017). Supporting learners’ agentic engagement with feedback: A systematic review and a taxonomy of recipience processes. Educational Psychologist, 52(1), 17–37.
  • Yu, S., & Hu, G. (2017). Understanding university students’ peer feedback practices in EFL writing: Insights from a case study. Assessing Writing, 33, 25–35.
  • Zimmerman, B. J., & Schunk, D. H. (Eds.). (2011). Handbook of self-regulation of learning and performance. New York, NY: Routledge.