Blog
Posts for category: Research
-
How to enjoy fries on the beach, undisturbed by seagulls: Surprising Truths from a recent ACJ Study
A recent study (2025) from Jeffrey Buckley (Technological University of the Shannon) and Caiwei Zhu (Delft University of Technology) set out to answer a critical question for anyone seeking fairness and efficiency in educational assessment: How feasible is Adaptive Comparative Judgement (ACJ) when deployed in real classrooms?
-
Turning Research into Action: The OODA Loop at the Heart of RM Compare’s Innovation
Innovation isn’t a straight line; it’s a dynamic cycle. Nobody grasped this better than John Boyd, the US Air Force Colonel and visionary strategist behind the OODA loop. Boyd’s model (Observe, Orient, Decide, Act) was developed to help fighter pilots out-think and out-manoeuvre their opponents. Over time, it became a blueprint for learning organisations everywhere: success comes from cycling through these stages faster and smarter.
-
ACJ in Action: What Recent (2025) Research Reveals About Reliable Writing Assessment
Recent research (Gurel et al., 2025) dives into how Adaptive Comparative Judgement (ACJ) performs when used to assess writing, and confirms that this approach is both robust and practical for organisations looking for fairer, more insightful assessment methods. What’s really important about these findings, and how do they connect to using ACJ tools like RM Compare?
-
Update from the most comprehensive analysis of Adaptive Comparative Judgement ever undertaken
The National Science Foundation (NSF) “Learning by Evaluating” (LbE) study (Award #2101235) is the most comprehensive analysis of Adaptive Comparative Judgement (ACJ) ever undertaken—an investment of $1.257 million over five years, spanning world-class universities and K-12 settings.
-
Tackling Reliability in Adaptive Comparative Judgement: What RM Compare Users Need to Know
If you’ve been following the evolution of digital assessment, you’ll know that Adaptive Comparative Judgement (ACJ) is transforming how we judge quality—especially with platforms like RM Compare. But you might also have heard about concerns over “inflated reliability statistics.” Is this something to worry about? Let’s look at what the research says, and why RM Compare users can be reassured.
-
New Research Confirms: RM Compare Delivers Reliable and Fair Spoken Language Assessment
The landscape of language assessment is evolving rapidly, especially with the 2020 update to the Common European Framework of Reference for Languages (CEFR-CV). This update redefines intelligibility—now encompassing both actual understanding and the perceived ease of understanding—making it a more holistic and communicative measure. But how do we assess this complex construct reliably and at scale? Enter Adaptive Comparative Judgement (ACJ) and, specifically, RM Compare.
-
A Decade of Change: How Research Interest in Comparative Judgement Has Evolved
The world of assessment is changing, and if you’ve spent any time exploring the RM Compare blog, you’ll know that Adaptive Comparative Judgement (ACJ) is at the heart of that transformation. But how has the research community’s interest in Comparative Judgement (CJ) changed over the past ten years? Let’s take a look at the journey.
-
Exploring the relationship between value and original content
This novel piece of research brought together two innovative products to explore the relationship between content originality and value.
-
Preparing for the Jobs of Tomorrow: How RM Compare Aligns with Future Skills Demands
As we look towards 2030, the global job market is set to undergo significant transformation. The World Economic Forum's Future of Jobs Report 2025 highlights key skills that will be crucial for the workforce of the future.