Blog
Posts for category: Product
-
Time for Tea?
Just like everyone else in the UK, we are getting very excited about National Tea Day, which takes place on 21st April. The day is a great opportunity to share tea brewing preferences, and the strength of the perfect 'cuppa' is always hotly debated. So we thought it would be interesting to get AI's view.
-
"How hard is this task?" - assessing difficulty
Comparative judgement is most commonly used to answer a simple question: Which of these pieces of work is better? Teachers and examiners compare two responses, choose one, and behind the scenes an algorithm turns many such decisions into a reliable rank order and a scale. That idea now underpins everything from trust‑wide writing assessments to high‑stakes awarding. The same engine can answer a different question: Which of these tasks is harder?
-
From pilots to products: how organisations can modernise without blowing everything up
Every exam body I talk to feels the same squeeze. Governments want innovation. Schools want recognition for richer work. Generative AI is crashing into the system from all sides. And yet, when results day comes, the only thing that really matters is whether the grades stand up in the media and in court.
-
New - Learning Progress Dashboard (Experiment)
At RM Compare, we believe that the true value of Comparative Judgement isn't just found in the final rank order (the product), but in the cognitive journey students take to get there (the process). Today, we are excited to share an experimental piece of work: the Learning Progress Dashboard.
-
New - Introducing the RM Compare Modular Ecosystem
For years, RM Compare has helped teachers, exam boards, universities and recruiters turn thousands of “which is better?” decisions into fair, reliable rankings of complex work. What began as a powerful way to compare pieces of work has now grown into something bigger: a complete ecosystem for creating, managing, and using gold‑standard judgements at scale. Today we’re introducing RM Compare | Live, Studio, Hub – three modules that work together as a continuous flywheel.
-
RM Compare has grown up
What began as a powerful way to compare pieces of work is now a complete, production‑ready layer of judgement infrastructure that sits underneath your existing systems and makes subjective assessment fair, fast, and repeatable at any scale. We’ve reached the point Geoffrey Moore calls the “whole product”: not just the clever core technology, but everything wrapped around it that a mainstream organisation needs to trust it with real‑world stakes.
-
New! Create interactive reports with your LLM (Experiment)
All RM Compare sessions allow you to extract very detailed data sets (Advanced and Enterprise Plans). We already provide you with some great reporting, but the latest Generative AI tools might be able to give you even greater insights.
-
Preparing for a New Era of Trust Accountability: How RM Compare Supports MAT Leaders Through Statutory Inspections
The education landscape for multi-academy trusts is about to change fundamentally. With the government confirming statutory inspections of academy trusts from as early as 2027, MAT leaders now face a critical question: Is your assessment strategy inspection-ready?
-
📳Introducing the new RM Compare Companion App
The RM Compare Companion App transforms professional judgement into a mobile-first experience. Quickly grab and add Items to your RM Compare sessions.