Assessment for Learning in Management Education: A Practical Perspective

Summary and Keywords

Good assessment and feedback are essential for high student achievement, retention, and satisfaction in contemporary higher education, and adopting a fit-for-purpose approach that emphasizes assessment for learning can have a significant impact. Assessment is, however, a complex and highly nuanced process that needs careful, research-informed design principles. Here the crucial importance of assessment in contemporary higher education pedagogy is considered, the key principles of good assessment are reviewed, and some suggestions are made for a framework to effectively interrogate individual practice with a view to continuous improvement.

Additionally, different means of offering feedback can help students to get the measure of their learning and point them toward future enhancement strategies, but this must be achieved in ways that are manageable for all stakeholders. Taxing questions are provided here for use by curriculum designers and all those who deliver and assess the curriculum, enabling them to draw together key issues into a workable framework for assessment enhancement.

Keywords: Fit-for-Purpose assessment, assessment for learning, feedback, feedforward, formative and summative assessment, assessment strategies

Introduction

Assessment approaches and methods demonstrably impact on student success (Brown & Race, 2012). The fit-for-purpose model of assessment in higher education developed collaboratively by the author and Peter Knight (Brown & Knight, 1994) proposes that a constructively aligned approach relies on the effective interplay of purposes of assessment, orientation, methodology, agency (who should assess?), and timing. To this is added a consideration of other key issues, including the importance of inclusivity, required by law in many nations including the U.K., so that all students have equivalent (but not identical) opportunities to succeed. Furthermore, it is important that assessment systems are seen to be fair, since students become demotivated and disengaged if they do not have faith in assessment systems (Flint & Johnson, 2011). It is also necessary to ensure reliability, so that all concerned have confidence that work of an equivalent standard is assessed at the same level; validity, so that what is assessed is seen as a good representation of what is outlined in learning outcomes; and authenticity, since it is important to be sure that we are assessing meaningfully rather than through proxy measures (Brown & Race, 2012; Fook & Sidhu, 2010).

The work of both Sambell, McDowell, and Montgomery (2013) and Bloxham and Boyd (2007) in promoting assessment for learning (rather than just assessment of learning) has been highly influential over the past two decades. These authors make an effective case for assessment that is not just concerned with summative judgments but also has a strong formative function, aiming to improve students’ confidence, capabilities, and future achievements through the judicious use of feedback and “feedforward” (Gibbs & Simpson, 2005). This happens largely through dialogue and interaction between assessors and assessees, including peers, as Nicol and Macfarlane-Dick (2006) describe. Indeed, assessment is increasingly being seen as learning, that is to say, the means through which learning happens.

What Is Assessment?

Assessment principally concerns identifying what an individual student (or group of students) knows and can do, normally expressed as a mark or grade that counts toward a final outcome in education. Assessment and feedback are arguably the most important loci of student engagement with those who teach them, since nowadays students can access information and subject content from myriad sources beyond the formal classroom: online (through virtual learning resources, e-books, TED talks, YouTube videos, and so on), through hard-copy text (including textbooks and other books, journal articles, and informally published materials), and through people (fellow students, workplace colleagues, family and friends, and teachers, tutors, and instructors) beyond their own programs. For this reason, good assessment design is crucial, since it can impact positively or negatively on student engagement, retention, satisfaction, and achievement. We therefore need to take a fresh look at our current ways of working to make sure our assessment works well for students, those who assess them, and the institutions of learning in which this process takes place. Assessment is a high-stakes activity for students, since their whole success as students depends on effective navigation of the assessment labyrinth. As Grace and Gravestock (2008) argue:

Assessment is likely to be the most daunting aspect of [students’] academic experience at university. Problems with managing workloads, poor lectures, having to give seminar papers and so on can all cause anxiety, but the issues relating to assessment often relate to their fears for the ultimate performance and final results and . . . future careers. (p. 177)

It is notable also that, for assessors, assessment can be equally all-consuming, particularly during pinch points of the academic year, and this in turn can lead to high stress levels, exhaustion, and ill-health.

For this reason, this article explores the central issues integral to assessment practice, particularly using the lenses of the Fit-for-Purpose and Assessment-for-Learning models, and reviews how the concomitant central importance of feedback can be put into practice using a checklist of questions to enable individuals and course/program leaders to scrutinize the effectiveness of their current assessment strategies.

Assessment Design: Fit-for-Purpose Assessment

As early as 1994, Brown and Knight (1994) argued for consideration of five key questions that underpin effective assessment design and implementation: Why are we assessing? What is it we are actually assessing? How are we assessing? Who is best placed to assess? When should we assess? This framework became known as the Fit-for-Purpose model and has been much used, expanded, and adapted over time internationally. The proposition was that by systematically and holistically taking into account purposes, orientation, methodologies, agency, and timing, curriculum designers could plan for productive and meaningful assessment, rather than something that is “bolted on” after the rest of curriculum design has been achieved.

Why Are We Assessing?

There are myriad valid reasons why academic staff undertake assessment in higher education, associated with making learning and sense-making happen, particularly through feedback; prompting good decisions about learning pathways; developing graduate capabilities and work-readiness; and contributing to quality assurance and enhancement systems. Recognizing the rationale behind each of these purposes can direct next steps in assessment design. Purposes for the choices made in selecting a particular format or approach include the following. The curriculum designer/instructor aims inter alia to

  • enable students to get the measure of their achievement, so they are not struggling in a vacuum but can make sense of their own progress against standards of achievement required;

  • help them consolidate their learning rather than just acquiring or memorizing a set of disconnected separate elements of information, which is less useful in business contexts than confident expertise;

  • provide them with opportunities to relate theory and practice through the integration of what they learn in theory classes with the application of that knowledge to live environments in the lab, studio, live learning environment, or workplace;

  • help students make sensible choices about option alternatives and directions for future study. For example, in a languages degree if one is better at public relations than marketing, one might specialize in the former, but in an accountancy degree, if results demonstrate a student lacks proficiency in mathematics, s/he should take more options in this subject as it is essential for success;

  • provide developmental and supportive feedback so they can correct errors and remediate deficiencies as early as possible, while there is still time to change (some call this “feedforward”);

  • motivate students to engage in their learning, both in terms of providing a framework of activity around which they can plan their study activities and as a means of energizing and exciting them about exploring their chosen subject area through genuinely interesting projects;

  • evidence successful achievement for satisfactory progression to the next stage of study in the confidence that sound foundations have been laid;

  • demonstrate, through the skills and capabilities students can evidence in their assessed work, the employability skills they will need on graduation;

  • provide assurance of fitness-to-practice for a wide range of business and management professions, so that the public can be assured of their capability, probity, and underpinning values;

  • give feedback to us as teachers on our own effectiveness, since large volumes of poor-quality work handed in for one of our assignments could mean inter alia inadequate briefing and explanation of tasks, poor teaching of students, insufficient rehearsal and practice of tasks, or indeed poor learning practices by students, who would then need guidance and advice; and

  • (inevitably) provide statistics for our own institutions and for internal and external agencies so they can assure the quality of our teaching provision.

When undertaking curriculum design from the outset and constructively aligning (Biggs & Tang, 2011), it can be invaluable to have real clarity for each assessment element about the purpose of this particular assignment at this particular time with this particular cohort at this particular level with these particular students, so that appropriate methods, approaches, agency, and timing can be chosen.

What Is It We Are Actually Assessing?

Orientation is important too, that is, choosing what is to be assessed: one might, for example, plan to focus on a particular set of core content material that underpins much else within the program (hence the use of a substantial set of Multiple-Choice Questions [MCQs] repeated at intervals in many Problem-Based Learning programs) or instead focus on the application of theory to specific contexts, in which case a demonstration, case study, or presentation might be more appropriate. Where the marriage of both is required, integration of theory and practice in the form of an e-portfolio or an extended formal report might work best. Similarly, it is important to consider on occasion whether it is the product or output that is central to final assessment, or whether it is the process by which the product has been achieved, as is often the case in architecture, for example. In any case, it is important not to fall back on assessing what we have always assessed, or on assessing what is easy to assess: too often we resort to assessment methods we have used over the years rather than making a rational, informed choice about what would work best. The next section explores the importance of both formative and summative assessment within assessment strategies.

Formative and Summative Assessment

Formative and summative assessment are both necessary to the student experience but work in different ways (Brown, 2015, p. 128; Higher Education Academy [HEA], 2012, p. 20). Inevitably higher education institutions; professional, subject, and regulatory bodies; and national agencies need confidence that performance and capability are being tested to acceptable standards, hence the need for summative assessment to demonstrate, for example, fitness-to-practice, but formative assessment is the component of the process that has the potential to enhance assessment outcomes. Formative assessment can be described as being designed to form, shape, and transform students’ behavior to help them plan future activities and to improve performance: it is often incremental or continuous and usually involves words rather than just numbers (Sambell, Brown, & Graham, 2017, p. 141). Summative assessment is designed to sum up, evaluate, or judge students’ performance against criteria in a way that is accessible beyond the program; is often the end point; and usually involves marks and grades of some kind. Of course, many assignments have both formative and summative functions, and since “you can’t fatten pigs by weighing them” (Brown & Knight, 1994), at the design stage it is always valuable to consider which purpose is predominating on any given occasion.

How Are We Assessing?

Methodologies: Globally, hundreds of different methods of assessing students in higher education are extant, but the author’s current research on international assessment practices suggests that in most nations four methods predominate: essays; unseen, time-constrained written exams; reports; and MCQs. But to restrict ourselves to those alone is to miss out on a huge variety of authentic and intellectually challenging methods. Brown and Race (2012) have considered the advantages, disadvantages, and ready applicability of 21 methods, including not only traditional ones but also methods highly suitable for business and management: open-book exams, take-away papers, short-answer questions, sophisticated computer-based assessments using a wide range of text-based and visual question types, portfolios, viva voce oral tests and interviews, posters, projects, simulations, Objective Structured Clinical Examinations, assessed seminars, reflective journals, critical incident accounts, annotated bibliographies, in-tray exercises, and artifacts. We concluded that including diverse assessment methods within a program can truly benefit students, provided students are well-briefed, offered opportunities for risk-free rehearsal, and not presented with too many different types all at once. Some assessment methods are best suited to summative assessment, such as unseen time-constrained exams; some lend themselves particularly well to formative assessment, such as the submission of sample “patches” for commentary; and some inevitably involve both, such as traditional studio critique sessions. Novelty for the sake of novelty alone is to be avoided, but creative inclusion of fit-for-the-purpose-in-hand methods can foster student engagement and make learning meaningful.

Who Is Best Placed to Assess?

When thinking about assessment agency, the fall-back position in most nations is the professor, tutor, or their (often subordinate) delegate, who is not necessarily the person best placed to assess. In cases where high levels of experience, expertise, and judgment are required, highly qualified assessors are inevitably necessary, but in many cases other assessment agents can be productively involved. Where students have been working alongside one another in the production of an artifact, a poster, or a presentation, and where assessment criteria have been widely discussed and even negotiated, fellow students can be very well placed to assess their peers, either individually or working in small groups (interpeer group assessment). In tasks where students have been working together collegially to produce a joint outcome, such as a client presentation, the people who can judge the performance of their peers as group members much better than external scrutineers are often the other students within the same group (intrapeer group assessment). When students are supported to develop the capacity to make meaningful and competent judgments about the performance of their peers against agreed standards, they themselves become more effective meta-learners, better capable of judging the quality of their own outputs, as Sadler (1989) argues:

The indispensable conditions for improvement are that the student comes to hold a concept of quality roughly similar to that held by the teacher, is able to monitor continuously the quality of what is being produced during the act of production itself and has a repertoire of alternative moves or strategies from which to draw at any given point. In other words, students have to be able to judge the quality of what they are producing and be able to regulate what they are doing during the doing of it.

(Sadler, 1989; author's italics)

Once they have achieved this skill, they then become more capable of effective self-assessment, a lifelong learning skill invaluable in work and life environments where regular review of performance is less common than in the years dedicated to study.

Self-assessment in the form of reflective commentaries can become part of a structured assessment process (Boud, 1995; Boud & Falchikov, 2007) in contexts such as placements/internships, where higher education institution staff are not physically present and thereby rely on assessment by externals, employers, or practice managers or on effective self-review. It is often helpful to provide frameworks for summative self-reflection, perhaps in the form of a Critical Incident Account, where students keep an informal journal or diary of day-to-day events but draw upon it to produce a word-restricted summary more readily accessible and assessable by tutors under headings such as “Context,” “What I did in practice,” “My rationale,” “Scholarly literature on which I drew,” “Outcomes and learning points,” and “What I would have done differently on another occasion.”

A further agency for assessment in such situations can be the client with whom students are working in practice: for example, if a student is working in a law clinic offering advice to a member of the general public under the supervision of a trained solicitor, or a nurse is practicing safe lifting of an elderly patient, their competence and expertise can be judged by a tutor, a peer, or a supervisor, but it can also be useful to ask the client him- or herself: “Did you feel you had your history taken respectfully, with plenty of chances for you to say all you needed?” or “Did you feel comfortable and properly treated?” A Fit-for-Purpose model requires careful thought about who is best placed to do the job. If external agencies, including employers and clients, are to be involved in summative assessment, there will necessarily be implications in terms of training and moderation to ensure interassessor reliability.

When Should We Assess?

When considering the timing of assessment, it is important that this too is part of a well-thought-through process that considers suitability for learning as well as convenience and manageability. Avoiding excessive constrictions imposed by institutional systems, it is crucial to consider not only programmatic considerations but also readiness for assessment. Yorke (1999) laid the foundations for thinking in this area, and much subsequent work emphasizes the importance of providing early formative assignments within the first six weeks of the first semester of the first year so that students are not floundering and confused about how they are doing. Students need timely feedback so they can make changes while there is still time to receive, note, and act on suggestions for improvement. Summative assessment is inevitably the end point, but it is important to avoid leaving it all to the end: for a lengthy project or dissertation, for example, incremental submissions of, say, plans, sample chapters, and bibliographies can attract reassuring early advice and guidance. Where students can manifestly demonstrate readiness to be assessed earlier than curriculum schedules would suggest, it may be worth exploring flexibility of submission, with optional extension tasks for students who achieve the required outcomes fast. Students perform best when their workload is paced and steady rather than highly concentrated in short assessment periods, so it is also worth considering manageability for both assessors and assessees, making sure neither staff nor students are subject to the excessive stress caused by logjams of several pieces of work being due at the same time.
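
On the manageability point, deadline bunching across modules can be spotted mechanically once submission dates are collected in one place. The following is a minimal sketch, not drawn from the article: the module names, teaching-week numbers, and the threshold of three coinciding deadlines are all illustrative assumptions.

```python
# Minimal sketch for flagging assessment "logjams": weeks in which several
# modules set submission deadlines at once. All data here are hypothetical.
from collections import Counter

# Submission deadlines per module, expressed as teaching-week numbers.
deadlines = {
    "Marketing 101": [6, 12],
    "Accounting 102": [6, 12],
    "Organisational Behaviour 103": [5, 12],
    "Business Analytics 104": [6, 11],
}

# Count how many deadlines fall in each week, then flag the crowded ones.
per_week = Counter(week for weeks in deadlines.values() for week in weeks)
logjams = {week: n for week, n in sorted(per_week.items()) if n >= 3}
print("Weeks with three or more deadlines:", logjams)  # -> {6: 3, 12: 3}
```

A program team running such a check at the curriculum design stage could then stagger submissions before the flagged weeks become stressful for students and unmanageable for markers.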

Further Key Factors: What Else Should Be Taken Into Account?

In addition to these five factors, national agencies, including the U.K. Quality Assurance Agency (2013), require us to take into account further aspects of assessment practice when planning and delivering assessment, including the following.

Inclusivity

Underlying the model of Assessment for Learning in higher education is the principle that all assessment should contribute to helping students demonstrate their true capability. It is worth noting that diversity can relate to many characteristics and dimensions, including educational (e.g., prior learning experiences, previous qualifications); dispositional (e.g., attitudes, preferences); circumstantial (e.g., family or caring responsibilities, being in employment); and cultural (e.g., values, religion, and belief) (Thomas & May, 2010). Hence inclusive assessment implies designing assessment tasks that are “meaningful, relevant and accessible to all” (Thomas & May, 2010, p. 9), recognizing that students as individuals will have diverse strengths, qualities, and skills.

“Inclusive assessment refers to the design and use of fair and effective assessment methods and practices that enable all students to demonstrate what they know, understand and can do” (Thomas & May, 2010, p. 13). In the U.K., as in many other nations, governmental imperatives require that we strive to make sure that all students achieve to the best of their abilities, so reasonable adjustments, for example to timing or format, may need to be made for students with disabilities or impairments (for an overview of this extensive topic, see, e.g., Waterfield & West, 2010; Waterfield, West, & Parker, 2006). As Grace and Gravestock (2008) argue, while equity in practice implies equivalence rather than identicality, above all students want to perceive that the system is fair: “We need to anticipate any potential inequities, not only because intrinsically that is for most of us the right thing to do but because by law [in the U.K. and some other countries] we must” (p. 187).

Fairness

If students feel that assessment is detached from learning—that it is almost an afterthought in the curriculum design process—it can be alienating and disincentivizing. Students expect programs of study to be fairly aligned, so that the skills, capabilities, and knowledge detailed in program documentation as the outcomes they can expect to have achieved by the end of their studies are demonstrably integrated within their assignments, and it is likely that all stakeholders will recognize the appropriateness of this approach. Students tend to be unimpressed by a diet of assignments that only requires demonstration of a tiny subset of what they have been asked to learn, seeing this as unfair (Sambell, McDowell, & Brown, 1997). Both employers and thoughtful curriculum designers similarly expect to see specified outcomes clearly reflected in the tasks assigned to students, since otherwise risky behaviors, like gambling on “hot topics” coming up in exams, are seemingly encouraged. It seems likely that effective curriculum designers would mostly prefer students to work assiduously and systematically through curriculum materials rather than adopting a scattergun approach.

Students also get upset, and are likely to behave badly, if they think others are cheating and/or plagiarizing, according to Williams and Carroll (2009). They expect university staff to be rigorously fair in preventing, detecting, and punishing cheats and those who seek unfair advantage by getting help from tutors that is not available to others. Where confusion exists about what is and is not permissible, the best remedy available to curriculum designers is to have open, transparent, and readily interrogable systems, clearly described from the outset for students who may have uncertainties about what is or is not permitted, and to have systems in place to apprehend those who do not play fair. A key deterrent is to design assignments that are difficult to plagiarize or copy and which therefore have a personalized element, together with incremental review, virtually or face-to-face, to deter off-the-shelf purchases of assignments.

In earlier years, when bias against certain racial, gender, or other groups was rife, anonymous marking was seen as a panacea for perceived bias by markers, but this has been questioned more recently, for example by Pitt and Winstone (2018), who found no evidence that nonanonymous marking has any deleterious effect on students’ performance (although this was not always recognized by students). They argue that anonymous marking, while potentially beneficial, is not the only means to balance the tensions between perceptions of fairness and lack of bias and a desire to enhance students’ belief in how beneficial feedback can be, so they suggest that feedback should not be given anonymously:

Making assessment processes transparent to students through continued dialogue, maintaining trust in the professionalism of academics, and promoting feedback as an ongoing process of dialogue can maintain the integrity of assessment processes without sacrificing the potential impact of feedback on students’ learning and development.

(Pitt & Winstone, 2018, pp. 9–10)

Students also balk at any suggestion that they are being treated unjustly in comparison to other students, which is why explicitly assuring reliability and validity is so important.

Reliability

Reliability concerns how well different assessors would agree on the mark or grade awarded for a particular piece of a student’s work. Regarding interassessor reliability, if several assessors mark the same piece of work and all agree (within reasonable error limits) about the grade or mark, we can claim we are being reliable (Race, 2014). Within the U.K. HEA “Marked Improvement” project, which directly impacted on 10 U.K. universities and indirectly impacted on many more who used its assessment review tool to address strategic assessment issues (HEA, 2012), the project team established as a core tenet the principle that “Assessment standards are socially constructed so there must be a greater emphasis on assessment and feedback processes that actively engage both staff and students in dialogue about standards” (HEA, 2012, p. 21). It is only through collegially building shared understanding within subject and other communities, it was argued, that judgments by groups of assessors can be made reliably.

Intra-assessor reliability is a further concern, involving how well an individual assessor maintains consistency of standards over time, which can be very challenging when huge volumes of marking are being done within compressed periods, leading to lapses of judgment and internal inconsistency. The only way to assure intra-assessor reliability is for each marker to pace marking sensibly, to refer regularly to the assessment brief, the criteria, and any available marking rubrics, and to revisit periodically scripts marked earlier to check that standards have not drifted. University managers also have a responsibility to exercise sensible deployment processes so that individual staff are not overburdened or given excessive time constraints for marking.
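
Both kinds of reliability lend themselves to simple numerical checks. The sketch below is illustrative only, not a procedure from the article: the assessor names, marks, agreement tolerance, and window size are all assumptions, and real moderation would use proper statistics and larger samples.

```python
# Rough checks on marking reliability; all names and numbers are hypothetical.
from statistics import mean

def inter_assessor_agreement(marks_by_assessor, tolerance=5):
    """Proportion of scripts on which every assessor's mark falls within
    `tolerance` points of the assessors' mean ("reasonable error limits")."""
    scripts = list(zip(*marks_by_assessor.values()))  # one tuple of marks per script
    agreed = sum(
        1 for marks in scripts
        if all(abs(m - mean(marks)) <= tolerance for m in marks)
    )
    return agreed / len(scripts)

def marking_drift(marks_in_order, window=3):
    """Crude intra-assessor drift check: difference between the mean mark of
    the last `window` scripts and the first `window` scripts in one sitting."""
    return mean(marks_in_order[-window:]) - mean(marks_in_order[:window])

# Example: three assessors double-marking the same ten scripts.
marks = {
    "assessor_a": [62, 55, 71, 48, 66, 58, 74, 51, 69, 60],
    "assessor_b": [65, 52, 70, 50, 63, 61, 72, 49, 71, 58],
    "assessor_c": [60, 57, 68, 47, 68, 55, 78, 53, 66, 62],
}
print(f"Agreement within 5 marks: {inter_assessor_agreement(marks):.0%}")
print(f"Drift over assessor_a's session: {marking_drift(marks['assessor_a']):+.1f}")
```

A marked drift late in a long marking session would be one cue to revisit earlier scripts, as suggested above.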

In some nations, including the U.K., strenuous efforts are made to assure standards at institutional, national, and subject levels, particularly through moderation (where work is sampled and scrutinized by internals and externals) and external examining, whereby externals to the institution are involved from the curriculum planning stage through to final Exam Boards to assure that standards of work are comparable and that assessment is being undertaken equitably (see, e.g., Akerman & Cardew, 2009, pp. 177–192).

Validity

According to Race (2015), validity can be considered as the extent to which the methods we use allow us to measure evidence of achievement of the intended learning outcomes as specified for our students:

Valid assessment is about measuring that which we should be trying to measure. But still too often, we don’t succeed in this intention. We measure what we can. We measure echoes of what we’re trying to measure. We measure ghosts of the manifestation of the achievement of learning outcomes by learners. Whenever we’re just ending up measuring what they write about what they remember about what they once thought (or what we once said to them in our classes), we’re measuring ghosts. Now, if we were measuring what they could now do with what they’d processed from what they thought, it would be better.

Authenticity

Ensuring that assessment focuses effort and promotes engagement by providing realistic assessment methods rather than proxies is the basis of authentic assessment. If we want to assess a unicyclist, we could ask for a history of the unicycle, a list of parts for the unicycle, or a set of instructions on how to mount a unicycle, or, better yet, a practical demonstration of unicycling. Good assessment focuses on what students can do, rather than what they cannot do, and thus entails fostering attitudes and dispositions as well as skills and knowledge, as Sambell et al. (2013) argue: “Authentic assessment might usefully be viewed as a matter of offering students opportunities which clearly signal to them the value and importance of developing as learners” (p. 14).

Students undertaking authentic assessments tend to be more fully engaged in learning and hence tend to achieve more highly because they see the sense of what they are doing. University teachers adopting authentic approaches can use realistic and live contexts within which to frame assessment tasks, which help to make theoretical elements of the course come to life. Employers value students who can quickly engage in real-life tasks immediately on employment, having practiced and developed relevant skills and competences through their assignments. A useful lens through which to view authenticity would be to ask what kinds of questions your students might encounter in interviews for work, promotion, or other contexts where they would be required to answer such questions as:

Can you tell us about an occasion when you:

  • worked together with colleagues in a group to produce a collective outcome?

  • had to work autonomously with incomplete information and self-derived data sources?

  • developed strategies to solve real-life problems and tested them out?

  • had a leadership role in a team? And what were your strategies to influence and persuade your colleagues to achieve a collective task?

  • had to communicate outcomes from your project work orally, in writing, through social media, and/or through a visual medium?

These kinds of questions point to applied and authentic learning that will be valued both by students and those they encounter in professional environments and are firmly embedded in assessment for learning approaches, as discussed in the next section.

Assessment for Learning

Drawing particularly on the seminal work of Sambell et al. (2013), who led the eponymous Centre for Excellence in Teaching and Learning, a central premise now widely adopted internationally is that assessment is such a powerful vehicle for learning that opportunities are wasted if it is not fully integrated into the learning process. They propose a cyclical model of Assessment for Learning (A4L) that:

  • Emphasizes authentic and complex assessment tasks;

  • Uses “high-stakes” summative assessment rigorously but sparingly;

  • Offers extensive “low-stakes” confidence-building opportunities and practice;

  • Is rich in formal feedback (e.g., tutor comment, self-review logs);

  • Is rich in informal feedback (e.g., peer review of draft writing, collaborative project work); and

  • Develops students’ abilities to evaluate their own progress and direct their own learning.

This contrasts substantially with models adopted in many traditional assessment environments, where tasks are often designed for simplicity of administration rather than validity or authenticity. The Assessment for Learning approach would have us review the extent to which we overuse summative assessment and make too little use of formal and informal formative feedback to help students become capable of gauging how good their work is rather than waiting for their teachers’ judgments.

Sadler (2010) argues the case for using exemplars to help students to better understand the assessment process:

Students need to be exposed to, and gain experience in making judgements about, a variety of works of different quality. . . . They need planned rather than random exposure to exemplars, and experience in making judgements about quality. They need to create verbalised rationales and accounts of how various works could have been done better. Finally, they need to engage in evaluative conversations with teachers and other students. Together, these three provide the means by which students can develop a concept of quality that is similar in essence to that which the teacher possesses, and in particular to understand what makes for high quality. (p. 544)

Bloxham and Boyd (2007) further encourage us to think how we can make our assessment challenging, demanding higher-order learning and integration of knowledge learned in both the university and other contexts. They suggest it should encourage metacognition, promote thinking about the learning process not just the learning outcomes, and engage students in self-assessment and reflection on their learning. They propose it should provide “feedforward” for future learning and that there should be opportunities and a safe context in which students can expose problems with their study and get help.

Assessment expectations, they say, should be made visible to students as far as possible and tasks should involve the active engagement of students developing the capacity to find things out for themselves and learn independently. Such tasks should be fit-for-purpose, worthwhile, and relevant and offer students some level of control over their work; it should also be possible for assessment to be useful in evaluating our own teaching as well as students’ learning.

Carless (2013) argues for “active student participation in dialogic activities in which students generate and use feedback from peers, self or others as part of developing capacities as autonomous self-regulating learners” (p. 113) in which students can discuss their work, rather than traditional monologic formats where students are passive recipients of assessors’ comments.

Assessment for Learning can be a vehicle for conceptual change for those who undertake the assessment too, according to Reimann (2018), who describes how new lecturers encountering the concept on a postgraduate certificate in learning and teaching in higher education indicated, when interviewed, that it had changed their understanding of assessment. For example, one said:

“The whole assessment for learning idea made me think about assessment in a way that I haven’t before, so using assessment as a tool for learning rather than just a means of students achieving a mark. It was like a lightbulb suddenly switched on.” (p. 91)

Crucial to the concept of assessment for learning is the centrality of feedback as a locus of transformation, and the next section discusses how this can foster learning.

Feedback

If assessment is the engine that drives the wheels of learning, feedback is the lubricant that keeps the engine running smoothly, but students expressing dissatisfaction about assessment often focus particularly on late, desultory, undermining, or inadequate feedback.

There is wide consensus within the education literature that “quality” feedback should serve three overlapping functions: (1) an “orientational” purpose that clarifies the student’s performance and achievement; (2) a “transformational” purpose that enables the student to reflect, improve their performance, and become more autonomous (often termed “feed-forward”); and (3) an “affective/interpersonal” dimension that gives the student confidence and motivation, and builds a strong teacher-student relationship.

(Dunworth & Sanchez, 2016, as quoted in Winstone & Nash, 2016)

Good feedback enables students to be really clear about goals, criteria, and expected standards and provides them with opportunities to close the gap between current and desired performance by giving them the chance to respond to comments from their markers and seek clarification where necessary, while actively facilitating self-review and reflection, so that they become good judges of the quality of their own work (after Nicol & Macfarlane-Dick, 2006).

Feedback that works well does not just correct errors and indicate problems, potentially leaving students discouraged and demotivated, but also highlights good work and encourages them to believe they can improve and succeed. It delivers high-quality information to students about their achievements to date and how they can improve their future work. Where there are errors, students can be pointed toward seeing what needs to be done to remediate them, and where they are undershooting in terms of achievement, they should be able to perceive how to make their work even better. Feedback needs to have an orientation toward improving performance at the next stage. Hounsell, McCune, Hounsell, and Litjens (2008) argue that it should

increase the value of feedback to the students by focusing comments not only on the past and present . . . but also on the future—what the student might aim to do, or do differently in the next assignment or assessment if they are to continue to do well or to do better. (p. 5)

Feedback needs to be prompt if it is to have impact: the most detailed and thoughtful advice and comments may be largely ignored if they arrive so late as to have no relevance to the student’s current workload. Winstone and Nash (2016) assert the importance of going beyond the delivery of feedback to foster active engagement with it, since many students fail either to read it or to act on the recommendations made in it without prompting by their tutors. This might be because it arrives too late or because they fail to see how it applies to them or is usable by them.

Many universities nowadays have a policy of returning work with comments to students within 15 working days of submission, which is sufficient if the work arrives back before the student has completed the next assignment in the series but is not valuable if the student has moved on to other things. There is a trade-off for markers between assessing promptly and assessing thoroughly, and with large classes this can necessitate streamlined approaches to feedback, which can include

  • Giving an overall report on cohort performance with examples of common mistakes, illustrations of best practice, and other generic feedback. This can be done orally at the beginning of a class, as a text distributed to students before their marked work is returned, or as part of a virtual conversation. Whichever way it is used, it must foster dialogue between assessors and assessees and help to foster the assessment literacy of students.

  • Using model answers (Brown, 2015) or exemplars (Handley & Williams, 2011) that can show students what they are aiming for, since it is usually easier to show students than to tell them what is required, so that complex terminology drawing on the specific discourse of the field is brought to life by real examples of, for instance, critical analysis of a business context. The best of these provide not a single excellent example to emulate but components of several examples of different quality, so students can come to grips with the features that differentiate adequate from outstanding work.

  • Developing (potentially with other assessors) a range of regularly used comments in a statement bank, making use of an often-substantial internal repertoire of language that can then be drawn upon to provide continuous text to append to student work or to use with proprietary software for assignment handling. Crucially, these comments need to make constructive (and potentially lengthy) suggestions for improvement, for example, specific readings or direction toward specialist databases, so streamlined formats can be very helpful (a sketch of such a statement bank follows this list).

  • Designing assignment return proformas in tabulated formats, usually presenting the criteria from the assignment brief alongside boxes for comments and evaluation: weighting can be clearly identified, and Likert scales can speed tutors’ responses.

  • Harnessing the power of computer-assisted assessment (CAA), both straightforward MCQs and more advanced formats using diverse question types and advanced business simulations, which can emulate some more time-consuming hand-marked formats. As well as having summative capability, the formative feedback capacity of CAA is immensely powerful.
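
By way of illustration, the statement-bank and weighted-proforma ideas above can be combined in a few lines of code. This is a minimal sketch under stated assumptions: the comment codes, criteria, weightings, and scores are invented for illustration and are not taken from the article or from any proprietary assignment-handling software.

```python
# Hypothetical statement bank: reusable comment codes mapped to constructive,
# improvement-oriented feedback text.
STATEMENT_BANK = {
    "ARG1": "Your argument is clearly structured; strengthen it by weighing counter-evidence.",
    "REF2": "Broaden your sources: consult the specialist business databases for your sector.",
    "APP3": "Good application of theory; make the link to the live case more explicit.",
}

# Criteria from a hypothetical assignment brief, with weightings summing to 1.
CRITERIA = {"Analysis": 0.4, "Use of evidence": 0.3, "Communication": 0.3}

def build_proforma(scores, comment_codes):
    """Assemble a feedback return from per-criterion scores (0-100) and
    statement-bank codes; returns the weighted overall mark and the text."""
    weighted_mark = sum(weight * scores[c] for c, weight in CRITERIA.items())
    lines = [f"{c} (weight {w:.0%}): {scores[c]}" for c, w in CRITERIA.items()]
    lines += [STATEMENT_BANK[code] for code in comment_codes]
    return weighted_mark, "\n".join(lines)

mark, feedback = build_proforma(
    scores={"Analysis": 68, "Use of evidence": 55, "Communication": 72},
    comment_codes=["ARG1", "REF2"],
)
print(f"Overall mark: {mark:.0f}")  # -> 65
print(feedback)
```

The point of such a structure is not automation for its own sake but consistency: the same weighted criteria and improvement-oriented comments are applied across a large cohort while leaving room for individual additions.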

Above all, feedback needs to be respectful of the individual. Experience shows that students dislike feedback that involves vague comments giving few hints on how to improve or remediate errors, such as “OK as far as it goes,” “needs greater depth of argument,” “inappropriate methodology used,” or “not written at the right level,” or that arrives so late that there are no opportunities to put into practice any guidance suggested in time for the submission of the next assignment. They also object to poorly written comments that are nigh on impossible to decode, especially when impenetrable acronyms or abbreviations are used or the handwriting is unfamiliar and illegible, and to cursory and derogatory remarks that leave them feeling demoralized: “Weak argument,” “shoddy work,” “hopeless,” “underdeveloped,” “you will never make a manager,” and so on. Particularly harmful are value judgments that reflect on them as people rather than on the work being marked; these especially damage those, often from disadvantaged backgrounds, whom Dweck (2013) identifies as holding a fixed (entity) rather than malleable view of their own intelligence and who therefore feel powerless to improve their performance.

Having reviewed the justification for adopting a Fit-for-Purpose approach to assessment, aligning it with Assessment for Learning, and implementing both through effective feedback, the final section explores how these can be enacted within the curriculum to good effect.

Putting This Into Practice

In management education and learning, the implication of this is that we may need to review rigorously the methods and approaches commonly used to assess student work on our programs, to make sure that assessment fully contributes to learning rather than detracting from it by encouraging students to adopt short-term coping strategies to pass. The following set of questions may be helpful for individuals and course/program teams wishing to undertake a “deep-dive” review of the effectiveness of local approaches as part of such a review process:

  1. Are your assignments fully and constructively aligned with your learning outcomes, or do you have substantial elements of curriculum content that students can skip without fear of negative consequences?

  2. Do you have the balance right between the summative assessment activities you need to convince employers and other stakeholders that your students are capable and employable and the formative activities that can help students progressively improve?

  3. Are summative assessments undertaken at intervals throughout the course, or is everything “sudden death” end-point? Do you give students sufficient rehearsal and discussion opportunities where they can ask questions and seek advice before summative assessment is undertaken?

  4. Are there plenty of opportunities for formative assessment, especially early in the program when patterns of behavior are being established?

  5. Are students overassessed, so that they feel they are constantly on a treadmill with no time to enjoy learning? Or are they underassessed, so that they do not receive cues about how much work they should be doing and whether they are achieving sufficiently high standards? When you have introduced innovative assignments, have they replaced existing ones or simply been added to the assessment diet?

  6. Are you using the kinds of assignments and other assessment activities that contribute to learning? For example, if your professional, statutory, and regulatory bodies require you to use unseen, time-constrained exams, are you setting questions that encourage overreliance on memorization and regurgitation, or are your questions designed to provide opportunities for students to demonstrate that they can apply and use knowledge in realistic scenarios that test the kinds of skills they will need in real life? Are you overusing traditional case study formats? When you ask students to make presentations, do you specifically assess the kinds of presentation skills that employers value?

  7. Are students encouraged to make good use of the feedback they receive? And does it arrive in a timely fashion, so that students have opportunities to use it to improve future work they are planning to submit?

  8. Do students perceive your assessment diet to be fair and as providing meaningful recognition of their achievements?

  9. Are your assessment systems manageable? Do assessors have time to mark the assessments in the time available to them, without rushing, in order to fit in with collation and internal/external scrutiny requirements? Is there excessive bunching of assignments in different modules that is highly stressful for students and unmanageable for staff?

Conclusions

Assessment strategies are often underdesigned, so in reviewing effective implementation of effective systems, we need to consider the fitness for purpose of each element of the assessment program including the assignment questions/tasks themselves, the briefings, the marking criteria, the moderation process, and the feedback to make sure they all follow sound pedagogic principles and are manageable for the assessors using them, the students undertaking them, and the scrutineers reviewing them. If we do this, assessment can genuinely contribute to improving student learning, thereby making a marked improvement.

Discussion of the Literature and Primary Sources

As higher education pedagogy globally has experienced a Copernican shift from a teacher-centered approach, typically described in the form of reading lists or syllabi outlining what is to be taught, to a student-centered view, normally framed in terms of intended learning outcomes identifying what each student is expected to know or do by the end of a program, so also has assessment changed radically in the last 40 years. Tertiary-level assessment is seen nowadays as a key element of learning, with dialogic feedback integral to helping students maximize their engagement and achievement. This is normally described as the Assessment-for-Learning movement, which developed almost half a century ago from concepts expressed within school-level education and today encompasses Fit-for-Purpose and inclusive approaches that are integral to making learning happen.

Further Reading

Assessment Strategic Design

Boud, D. (2007). Reframing assessment as if learning were important. In D. Boud & N. Falchikov (Eds.), Rethinking assessment in higher education: Learning for the longer term (pp. 14–25). London, U.K.: Routledge.

Boud, D., & Associates. (2010). Assessment 2020: Seven propositions for assessment reform in higher education. Sydney: Australian Learning and Teaching Council.

Bryan, C., & Clegg, K. (Eds.). (in press). Innovative assessment in higher education. London, U.K.: Routledge.

Sadler, D. R. (2010). Assessment in higher education. In P. Peterson, E. Baker, & B. McGaw (Eds.), International encyclopaedia of education (Vol. 3, pp. 249–255). Oxford, U.K.: Elsevier.

Sáiz, M. S. I., & Gómez, G. R. (2010). Aproximación al discurso dominante sobre la evaluación del aprendizaje en la universidad [An approach to the dominant discourse on the assessment of learning at university]. Revista de educación, 351, 385–407.

Inclusivity/Diversity

Brown, S., & Joughin, G. (2007). Assessing international students: Helping clarify puzzling processes. In S. Brown & E. Jones (Eds.), Internationalising higher education (pp. 57–71). London, U.K.: Routledge.

Pickford, R. (2009). Designing first-year assessment and feedback: A guide to university staff. Leeds, U.K.: Leeds Metropolitan University Press.

Assessment for Learning

Carless, D., Joughin, G., & Ngar-Fun, L. (2006). How assessment supports learning: Learning orientated assessment in action. Hong Kong: Hong Kong University Press.

Falchikov, N. (2004). Improving assessment through student involvement: Practical solutions for aiding learning in higher and further education. Abingdon, U.K.: Routledge.

Sambell, K. (2013). Engaging students through assessment. In E. Dunne & D. Owen (Eds.), The student engagement handbook: Practice in higher education. Bingley, U.K.: Emerald.

Sambell, K., Brown, S., & Graham, L. (2017). Engaging students with positive learning experiences through assessment and feedback. In Professionalism in practice: Key directions in higher education learning, teaching and assessment (pp. 139–188). London, U.K.; New York, NY: Palgrave Macmillan.

References

Akerman, K., & Cardew, P. (2009). Scrutinising quality: Working with external examiners and others to maintain standards. In S. Denton & S. Brown (Eds.), A practical guide to university and college management: Beyond bureaucracy (pp. 177–192). London, U.K.: Routledge.

Biggs, J., & Tang, C. (2011). Teaching for quality learning at university (4th ed.). Maidenhead, U.K.: Open University Press.

Bloxham, S., & Boyd, P. (2007). Developing effective assessment in higher education: A practical guide. Maidenhead, U.K.: Open University Press.

Boud, D. (Ed.). (1988). Developing student autonomy in learning (2nd ed.). London, U.K.: Kogan Page.

Boud, D. (1995). Enhancing learning through self-assessment. Abingdon, U.K.: Routledge.

Boud, D., & Falchikov, N. (Eds.). (2007). Rethinking assessment in higher education: Learning for the longer term. London, U.K.: Routledge.

Brown, S. (2015). Learning, teaching and assessment in higher education: Global perspectives. London, U.K.: Palgrave Macmillan.

Brown, S., & Knight, P. (1994). Assessing learners in higher education. London, U.K.: Kogan Page.

Brown, S., & Race, P. (2012). Using effective assessment to promote learning. In L. Hunt & D. Chalmers (Eds.), University teaching in focus: A learning-centred approach (pp. 74–91). Abingdon, U.K.: Routledge.

Carless, D. (2013). Sustainable feedback and the development of student self-evaluative capacities. In S. Merry, M. Price, D. Carless, & M. Taras (Eds.), Reconceptualising feedback in higher education: Developing dialogue with students. London, U.K.: Routledge.

Carroll, J. (2002). A handbook for deterring plagiarism in higher education. Oxford, U.K.: Oxford Centre for Staff and Learning Development.

Dunworth, K., & Sanchez, H. S. (2016). Perceptions of quality in staff-student written feedback in higher education: A case study. Teaching in Higher Education, 21(5), 576–589.

Dweck, C. S. (2013). Self-theories: Their role in motivation, personality, and development. New York, NY: Psychology Press.

Flint, N. R., & Johnson, B. (2011). Towards fairer university assessment: Recognising the concerns of students. London, U.K.: Routledge.

Fook, C. Y., & Sidhu, G. K. (2010). Authentic assessment and pedagogical strategies in higher education. Journal of Social Sciences, 6(2), 153–161.

Gibbs, G., & Simpson, C. (2005). Conditions under which assessment supports students’ learning. Learning and Teaching in Higher Education, 1, 3–31.

Grace, S., & Gravestock, P. (2008). Inclusion and diversity: Meeting the needs of all students. Abingdon, U.K.: Routledge.

Handley, K., & Williams, L. (2011). From copying to learning: Using exemplars to engage students with assessment criteria and feedback. Assessment & Evaluation in Higher Education, 36(1), 95–108.

Higher Education Academy. (2012). A marked improvement: Transforming assessment in higher education. York, U.K.: Author.

Hounsell, D., McCune, V., Hounsell, J., & Litjens, J. (2008). The quality of guidance and feedback to students. Higher Education Research & Development, 27(1), 55–67.

Morgan, H., & Houghton, A. (2011). Inclusive curriculum design in higher education. York, U.K.: Higher Education Academy.

Nicol, D. J., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199–218.

Pitt, E., & Winstone, N. (2018). The impact of anonymous marking on students’ perceptions of fairness, feedback and relationships with lecturers. Assessment & Evaluation in Higher Education, 43(7), 1183–1193.

Price, M., Rust, C., O’Donovan, B., & Handley, K., with Bryant, R. (2012). Assessment literacy: The foundation for improving student learning. Oxford, U.K.: Oxford Centre for Staff and Learning Development.

Quality Assurance Agency. (2013). U.K. Quality Code for Higher Education: Part B: Ensuring and enhancing academic quality. Chapter B6: Assessment of students and recognition of prior learning. Gloucester, U.K.: Author.

Race, P. (2014). Making learning happen (3rd ed.). London, U.K.: SAGE.

Race, P. (2015). The lecturer’s toolkit (4th ed.). Abingdon, U.K.: Routledge.

Reimann, N. (2018). Learning about assessment: The impact of two courses for higher education staff. International Journal for Academic Development, 23(2), 86–97.

Sadler, D. R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18, 119–144.

Sadler, D. R. (2010). Beyond feedback: Developing student capability in complex appraisal. Assessment & Evaluation in Higher Education, 35(5), 535–550.

Sambell, K., McDowell, L., & Brown, S. (1997). “But is it fair?”: An exploratory study of student perceptions of the consequential validity of assessment. Studies in Educational Evaluation, 23(4), 349–371.

Sambell, K., McDowell, L., & Montgomery, C. (2013). Assessment for learning in higher education. Abingdon, U.K.: Routledge.

Thomas, L., & May, H. (2010). Inclusive learning and teaching in higher education. York, U.K.: Higher Education Academy.

Waterfield, J., & West, B. (2010). Inclusive assessment: Diversity and inclusion—the assessment challenge. Programme Assessment Strategies (PASS). Bradford, U.K.: University of Bradford.

Waterfield, J., West, R., & Parker, M. (2006). Supporting inclusive practice: Developing an assessment toolkit. In M. Adams & S. Brown (Eds.), Towards inclusive learning in higher education: Developing curricula for disabled students (pp. 79–94). London, U.K.: Routledge.

Williams, K., & Carroll, J. (2009). Understanding referencing and plagiarism.

Winstone, N., & Nash, R. (2016). The Developing Engagement with Feedback Toolkit. York, U.K.: Higher Education Academy.

Winstone, N. E., Nash, R. A., Rowntree, J., & Parker, M. (2017). “It’d be useful, but I wouldn’t use it”: Barriers to university students’ feedback seeking and recipience. Studies in Higher Education, 42(11), 2026–2041.

Yorke, M. (1999). Leaving early: Undergraduate non-completion in higher education. Abingdon, U.K.: Routledge.