The ongoing challenge of assessing 'Fuzzy' subjects


Welcome back, exams?

For the first time in a few years, students in the UK have completed a 'normal' exam season and are now eagerly awaiting their grades. We have arrived here with a political consensus that exams are the 'fairest' method of assessment, after the enforced use of Centre Assessed Grades and Teacher Assessed Grades. Of course the debate around the use of exams continues (see Rethinking Assessment), but for now at least exams seem to have been re-established as the default method of assessment.

Fair outcomes?

Even if we accept that traditional examinations are the fairest method of assessment, there remain plenty of concerns about the grading process. It's important to note that we are talking about grading here, not marking. In his recent book Missing the Mark, Dennis Sherwood exposes some startling facts. According to Sherwood:

  • 1 in 4 grades are wrong.
  • For every 10 students taking 3 A Levels, only about 4 have a certificate on which all 3 grades are right; about 6 have at least one grade wrong.
  • For every 10 students taking 8 GCSEs, only about 1 has a certificate on which all grades are right; about 9 have at least one grade wrong (the short sketch after this list shows how these figures follow from the '1 in 4' headline).
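Sherwood's certificate-level figures are essentially a probability calculation. Below is a rough back-of-envelope check in Python, under the simplifying assumption (ours, for illustration) that each individual grade is correct independently with probability 0.75, i.e. '1 in 4 grades are wrong'.

    # Back-of-envelope check of the certificate-level figures, assuming each
    # grade is independently correct with probability 0.75 ("1 in 4 wrong").
    # The independence assumption is a simplification made for illustration.
    p_correct = 0.75

    for label, n_grades in [("3 A Levels", 3), ("8 GCSEs", 8)]:
        all_right = p_correct ** n_grades
        print(f"{label}: P(all grades right) = {all_right:.2f}, "
              f"P(at least one grade wrong) = {1 - all_right:.2f}")

Under these assumptions, roughly 42% of three-A-Level certificates and roughly 10% of eight-GCSE certificates come out with every grade right, which is where the 'about 4 in 10' and 'about 1 in 10' figures come from.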

Sherwood's views are not uncontested, of course, and it is fair to say that OFQUAL have a slightly different view.

Why might this be happening?

Much of the challenge centres on assessing 'fuzzy' subjects: those with high levels of subjectivity, for example English, History, and Art and Design. Achieving marking reliability in such subjects will always be difficult, and it is currently tackled in a number of ways with varying levels of success. The real problem of 'wrongness', however, arises where the 'fuzziness' stretches across grade boundaries: a legitimate difference of a few marks between markers can be enough to move a script into a different grade.
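To make the point concrete, here is a toy simulation in Python. The grade boundaries, the marker spread and the idea of a single 'true' mark are all illustrative assumptions, not any exam board's model; the point is simply that scripts whose true mark sits close to a boundary are the ones most likely to be awarded the 'wrong' grade, even when marking is only modestly variable.

    import random

    # Toy model: each script has a 'true' mark; the awarded mark is the true
    # mark plus random marker noise. Boundaries and noise level are
    # illustrative assumptions only.
    random.seed(1)

    BOUNDARIES = {9: 80, 8: 72, 7: 64, 6: 56, 5: 48}  # hypothetical mark-to-grade cut-offs
    MARKER_SD = 4                                      # assumed spread of marks between markers

    def grade(mark):
        for g, cutoff in BOUNDARIES.items():
            if mark >= cutoff:
                return g
        return 4

    def chance_of_wrong_grade(true_mark, trials=10_000):
        """Estimate how often marker noise changes the awarded grade."""
        true_grade = grade(true_mark)
        wrong = sum(
            grade(true_mark + random.gauss(0, MARKER_SD)) != true_grade
            for _ in range(trials)
        )
        return wrong / trials

    for true_mark in (58, 63, 70):
        print(f"true mark {true_mark}: ~{chance_of_wrong_grade(true_mark):.0%} chance of a different grade")

A script a mark or two from a boundary can plausibly come out a grade higher or lower with a different marker, while a script in the middle of a grade band is largely safe; this is the 'fuzziness across grade boundaries' problem in miniature.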

What can be done?

Certainly this post is not long enough to unpick the problem in detail; however, any solution will need to recognise that assessment must become more flexible than it currently is. Digitising Assessment offers huge potential and is something RM is working on with partners around the world. We are also confident that Adaptive Comparative Judgement, in which judges repeatedly compare pairs of pieces of work rather than marking each one against a scheme, will be part of a new assessment landscape that will indeed be 'fairer'.
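For readers unfamiliar with the technique, the sketch below illustrates the statistical idea behind comparative judgement: judges choose the better of two pieces of work, and a Bradley-Terry style model converts those pairwise decisions into a rank order, which can then be scaled or graded. Everything here (the scripts, the simulated judge, the random rather than adaptive pairing, and the simple fitting loop) is an illustrative assumption, not a description of RM's implementation.

    import math
    import random

    # Sketch of comparative judgement: judges pick the better of two scripts,
    # and a Bradley-Terry model turns the pairwise wins into a rank order.
    # The 'true' qualities and the simulated judge exist only to generate data.
    random.seed(7)

    TRUE_QUALITY = {"A": 2.0, "B": 1.2, "C": 0.4, "D": -0.5, "E": -1.6}
    scripts = list(TRUE_QUALITY)

    def simulated_judge(x, y):
        """Prefers the better script, with some noise."""
        p = 1 / (1 + math.exp(-(TRUE_QUALITY[x] - TRUE_QUALITY[y])))
        return x if random.random() < p else y

    # Collect pairwise judgements. (Random pairing here; adaptive comparative
    # judgement would instead choose the most informative pair each time.)
    comparisons = []
    for _ in range(300):
        x, y = random.sample(scripts, 2)
        comparisons.append((x, y, simulated_judge(x, y)))

    # Fit Bradley-Terry strengths by gradient ascent on the log-likelihood.
    theta = {s: 0.0 for s in scripts}
    for _ in range(500):
        grad = {s: 0.0 for s in scripts}
        for x, y, winner in comparisons:
            p_x = 1 / (1 + math.exp(-(theta[x] - theta[y])))
            outcome = 1.0 if winner == x else 0.0
            grad[x] += outcome - p_x
            grad[y] -= outcome - p_x
        for s in scripts:
            theta[s] += 0.01 * grad[s]

    print(sorted(theta, key=theta.get, reverse=True))  # recovered rank order, best first

The appeal for 'fuzzy' subjects is that judges only ever make relative decisions ('which of these two is better?'), which people tend to make far more consistently than assigning an absolute mark against a scheme.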