Blog
Posts for category: Product
-
Existing customer announcement
RM Assessment, which provides RM Compare, is moving to a new legal entity from 1 June 2026.
-
Is your software high quality? How do you know – and does it matter?
When important decisions, results or reputations depend on a piece of software, “quality” stops being a nice‑to‑have. It becomes the question of whether that system will be there when people need it, and whether it can adapt as your needs change.
-
⏱️RM Compare | NOW is in BETA
We’re pleased to announce that ⏱️RM Compare | NOW has entered BETA. It is a lightweight RM Compare experience designed for quick judgements, allowing users to capture or upload an item, compare it to a ready-made standard, and review a score in just a few steps.
-
Is RM Compare an Assessment Software Application or Infrastructure?
When people first encounter RM Compare, they usually see it as what our homepage says it is: the world‑leading Adaptive Comparative Judgement system. That description is accurate, but as customers start integrating RM Compare more deeply, a different question often emerges: is this just another application we use, or is it actually part of our assessment infrastructure?
-
Coming Soon! ⏱️RM Compare | NOW
A lightweight RM Compare experience for quick judgements. Capture or upload an item, compare it to a ready-made standard, and review your score in just a few steps.
-
"How hard is this task?" – assessing difficulty
Comparative judgement is most commonly used to answer a simple question: Which of these pieces of work is better? Teachers and examiners compare two responses, choose one, and behind the scenes an algorithm turns many such decisions into a reliable rank order and a scale. That idea now underpins everything from trust‑wide writing assessments to high‑stakes awarding. The same engine can answer a different question: Which of these tasks is harder?
-
From pilots to products: how organisations can modernise without blowing everything up
Every exam body I talk to feels the same squeeze. Governments want innovation. Schools want recognition for richer work. Generative AI is crashing into the system from all sides. And yet, when results day comes, the only thing that really matters is whether the grades stand up in the media and in court.
-
New - Learning Progress Dashboard (Experiment)
At RM Compare, we believe that the true value of Comparative Judgement isn't just found in the final rank order (the product), but in the cognitive journey students take to get there (the process). Today, we are excited to share an experimental piece of work: the Learning Progress Dashboard.
-
New - Introducing the RM Compare Modular Ecosystem
For years, RM Compare has helped teachers, exam boards, universities and recruiters turn thousands of “which is better?” decisions into fair, reliable rankings of complex work. What began as a powerful way to compare pieces of work has now grown into something bigger: a complete ecosystem for creating, managing, and using gold‑standard judgements at scale. Today we’re introducing RM Compare | Live, Studio, Hub – three modules that work together as a continuous flywheel.