Posts for category: Research
-
What the latest ACJ research means for real‑world assessment
A new study has given Adaptive Comparative Judgement (ACJ) one of its toughest tests yet: using it to assess long, complex law essays in a real university context. The results are encouraging for anyone interested in more reliable, fair and meaningful assessment – and they also highlight some very practical design questions we, as a community, need to solve together.
-
From Product to Process: How VFWA and RM Compare might reclaim academic integrity in the age of AI
Generative AI has broken one of higher education’s quiet assumptions: that a polished essay is a reliable proxy for student thinking. When tools can generate fluent academic prose on demand, we can no longer treat the final product as straightforward evidence of cognitive effort or authorship. The question for universities is no longer, “How can we prove this text wasn’t written by AI?” but “How can we design assessments where AI cannot replace the student’s contribution, only support it?”
-
When the Candidates Pass – But the Exam Doesn’t. How to rescue qualifications in the Age of AI
A City of London awarding organisation recently found itself in a strange position. Year after year, candidates were passing a respected Level 6/7 qualification. The statistics looked healthy, the quality assurance paperwork was in order, and the exam board could point to detailed rubrics and grade descriptors. Yet employers were telling a different story.
-
Escaping the Text Trap: Why the Future of Assessment is Spatial
If you look at the current headlines in education, you’d be forgiven for thinking human intelligence is made entirely of words. We are currently locked in an arms race regarding "Text Intelligence." We worry about Large Language Models (LLMs) writing essays for students, and we counter by building AI tools to grade those essays. We are obsessed with the Linguistic Bottleneck - the idea that the only way to prove you understand the world is to write a description of it. But what if we are assessing the wrong intelligence entirely? What if the future of assessment isn't about better ways to read text, but better ways to see action?
-
When Written Applications No Longer Signal Ability: "This is a crisis" (2/3)
The rise of generative AI has disrupted far more than how we write - it has fundamentally broken a mechanism that labour markets, universities, and employers have relied on for decades to identify talent. Groundbreaking new research from Princeton and Dartmouth reveals just how profound this disruption is, and why solutions like RM Compare's Adaptive Comparative Judgement are now essential.
-
Assessment for Collective Intelligence: How RM Compare Prepares Learners for Tomorrow's Challenges
What if the most important thing we assess in schools isn't what students know individually, but how well they can think, learn, and solve problems together? In October 2025, UNESCO published groundbreaking research by Dr Imogen Casebourne and Professor Rupert Wegerif on "AI and Education for collective intelligence: A futures perspective".
-
How to enjoy fries on the beach, undisturbed by seagulls: Surprising Truths from a recent ACJ Study
A recent study (2025) from Jeffrey Buckley (Technological University of the Shannon) and Caiwei Zhu (Delft University of Technology) set out to answer a critical question for anyone seeking fairness and efficiency in educational assessment: How feasible is Adaptive Comparative Judgement (ACJ) when deployed in real classrooms?
-
Turning Research into Action: The OODA Loop at the Heart of RM Compare’s Innovation
Innovation isn’t a straight line - it’s a dynamic cycle. Nobody grasped this better than John Boyd, the US Air Force Colonel and visionary strategist behind the OODA loop. Boyd’s model - Observe, Orient, Decide, Act - was developed to help fighter pilots out-think and out-manoeuvre their opponents. Over time, it became a blueprint for learning organisations everywhere: success comes from cycling through these stages faster and smarter.
-
ACJ in Action: What Recent (2025) Research Reveals About Reliable Writing Assessment
Recent research (Gurel et al., 2025) dives into how Adaptive Comparative Judgement (ACJ) performs when used to assess writing - and confirms that this approach is both robust and practical for organisations looking for fairer, more insightful assessment methods. What’s really important about these findings, and how do they connect to using ACJ tools like RM Compare?