A Decade of Change: How Research Interest in Comparative Judgement Has Evolved

The world of assessment is changing, and if you’ve spent any time exploring the RM Compare blog, you’ll know that Adaptive Comparative Judgement (ACJ) is at the heart of that transformation. But how has the research community’s interest in Comparative Judgement (CJ) changed over the past ten years? Let’s take a look at the journey.

From Niche to Mainstream

Ten years ago, CJ was still a relatively niche concept in the world of educational assessment. Early research focused on establishing the method’s reliability and validity: could judges really produce trustworthy results by simply comparing pairs of student work rather than assigning marks? The answer, as it turned out, was a resounding yes. These foundational studies laid the groundwork for broader adoption.
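For readers who like to see the statistics behind that "yes": each pairwise decision is a single data point, and a model such as Bradley-Terry (closely related to the Rasch formulation used in much CJ research) turns many such decisions into a score, and hence a rank order, for every piece of work. The sketch below is purely illustrative; the function name and the tiny set of decisions are hypothetical and are not taken from RM Compare.

```python
import numpy as np

def fit_bradley_terry(n_items, comparisons, iters=200, step=0.1):
    """Estimate a quality score per item from pairwise judgements.

    comparisons: list of (winner_index, loser_index) tuples, one per
    judge decision. Returns a score per item; the rank order of the
    scores is the inferred quality ranking.
    """
    theta = np.zeros(n_items)  # start every item at equal quality
    for _ in range(iters):
        grad = np.zeros(n_items)
        for winner, loser in comparisons:
            # Probability the winner beats the loser under current scores
            p_win = 1.0 / (1.0 + np.exp(theta[loser] - theta[winner]))
            grad[winner] += 1.0 - p_win
            grad[loser] -= 1.0 - p_win
        theta += step * grad     # simple gradient ascent on the likelihood
        theta -= theta.mean()    # anchor the scale so scores sum to zero
    return theta

# Hypothetical example: four scripts, six judge decisions
decisions = [(0, 1), (0, 2), (1, 2), (3, 2), (0, 3), (3, 1)]
scores = fit_bradley_terry(4, decisions)
print(np.argsort(-scores))  # items listed from strongest to weakest
```

The point of the example is simply that no judge ever awards a mark; a defensible scale emerges from the pattern of wins and losses alone.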

Expanding Horizons

As confidence in CJ grew, so did the range of its applications. Researchers began testing CJ in new subjects and contexts: from mathematics problem-solving and creative writing to art, design, and even music. The focus shifted from “does it work?” to “where else can it work?” This period saw a surge in large-scale empirical studies, with schools and awarding bodies piloting CJ for everything from classroom moderation to national competitions.

Methodological Maturity

With wider adoption came new questions. Over the past five years, research has delved deeper into the cognitive processes behind CJ: how do judges make decisions? What is the impact of judge expertise, and what happens when non-expert or peer judges take part? Studies also began to explore the efficiency and predictive value of CJ, especially compared to traditional marking.

At the same time, the research community started to critically appraise CJ’s limitations. How should it be used in high-stakes assessments? What are the best practices for judge training and feedback? This period of methodological maturity has led to more nuanced, practical guidance.

Digital Transformation and Innovation

The last few years have seen an explosion of interest, driven by digital platforms like RM Compare. Technology has made it possible to run CJ sessions at scale, with features like adaptive algorithms, instant reporting, and flexible judging roles. The integration of AI and crowdsourced judging has opened up new avenues for research and practice, making assessment more inclusive, efficient, and insightful.
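To make "adaptive" a little more concrete: the core idea in the CJ literature is that the engine chooses each next pair using what it already knows, typically favouring items with similar current estimates (close contests are the most informative) and items that have been seen least often. The sketch below is a deliberately simplified illustration of that principle, not RM Compare's actual algorithm; the function name, weighting, and example data are hypothetical.

```python
import itertools

def choose_next_pair(scores, times_judged, exposure_weight=0.5):
    """Pick the next pair to show a judge (simplified adaptive rule).

    scores: item id -> current estimated quality
    times_judged: item id -> how many comparisons the item has appeared in
    Prefers pairs whose scores are close and whose items are under-exposed.
    """
    best_pair, best_cost = None, float("inf")
    for a, b in itertools.combinations(scores, 2):
        closeness = abs(scores[a] - scores[b])        # close contests tell us most
        exposure = times_judged[a] + times_judged[b]  # spread judgements around
        cost = closeness + exposure_weight * exposure
        if cost < best_cost:
            best_pair, best_cost = (a, b), cost
    return best_pair

# Hypothetical mid-session state for four portfolios
scores = {"A": 1.2, "B": 0.9, "C": -0.3, "D": -1.1}
times_judged = {"A": 3, "B": 2, "C": 4, "D": 2}
print(choose_next_pair(scores, times_judged))  # ('A', 'B'): closest scores, modest exposure
```

A production system would also need to avoid repeating pairs and spread work evenly across judges; this sketch ignores those details.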

Where Are We Now?

Today, CJ is no longer just an alternative to traditional marking: it’s a proven, scalable solution for both formative and summative assessment. The research community continues to innovate, exploring everything from dimension-based CJ (where judges focus on specific aspects of work) to the use of CJ in setting grade boundaries and moderating teacher assessments.

What’s clear is that interest in CJ is stronger than ever. The research has moved from foundational questions to practical implementation, critical reflection, and technological innovation. As assessment priorities shift towards fairness, reliability, and meaningful feedback, CJ, and platforms like RM Compare, are leading the way.

Want to learn more?

Curious how CJ could transform your assessment practice? Explore our help centre, try a demo session, or dive into more case studies and thought pieces right here on the RM Compare blog.