Blog
Posts for category: Product
-
From pilots to products: how organisations can modernise without blowing everything up
Every exam body I talk to feels the same squeeze. Governments want innovation. Schools want recognition for richer work. Generative AI is crashing into the system from all sides. And yet, when results day comes, the only thing that really matters is whether the grades stand up in the media and in court.
-
New - Learning Progress Dashboard (Experiment)
At RM Compare, we believe that the true value of Comparative Judgement isn't just found in the final rank order (the product), but in the cognitive journey students take to get there (the process). Today, we are excited to share an experimental piece of work: the Learning Progress Dashboard.
-
New - Introducing the RM Compare Modular Ecosystem
For years, RM Compare has helped teachers, exam boards, universities and recruiters turn thousands of “which is better?” decisions into fair, reliable rankings of complex work. What began as a powerful way to compare pieces of work has now grown into something bigger: a complete ecosystem for creating, managing, and using gold‑standard judgements at scale. Today we’re introducing RM Compare | Live, Studio, Hub – three modules that work together as a continuous flywheel.
-
RM Compare has grown up
What began as a powerful way to compare pieces of work is now a complete, production‑ready layer of judgement infrastructure that sits underneath your existing systems and makes subjective assessment fair, fast, and repeatable at any scale. We’ve reached the point Geoffrey Moore calls the “whole product”: not just the clever core technology, but everything wrapped around it that a mainstream organisation needs to trust it with real‑world stakes.
-
New! Create interactive reports with your LLM (Experiment)
All RM Compare sessions allow you to extract very detailed data sets (Advanced and Enterprise plans). We already provide strong built-in reporting, but the latest generative AI tools may be able to give you even deeper insights.
-
Preparing for a New Era of Trust Accountability: How RM Compare Supports MAT Leaders Through Statutory Inspections
The education landscape for multi-academy trusts is about to change fundamentally. With the government confirming statutory inspections of academy trusts from as early as 2027, MAT leaders now face a critical question: Is your assessment strategy inspection-ready?
-
📳Introducing the new RM Compare Companion App
The RM Compare Companion App transforms professional judgement into a mobile-first experience. Quickly capture and add Items to your RM Compare sessions.
-
The 90% Problem: Why "Dip Sampling" Can No Longer Protect Your Provision
The 2025 apprenticeship assessment reforms have shifted responsibility for quality assurance decisively toward training providers. With the launch of Skills England and new flexibility in assessment delivery, providers are no longer just preparing learners; they are increasingly validating them.
-
Designing Healthy RM Compare Sessions: Build Reliability In, Don’t Inspect It In
The best way to avoid the “we worked hard and reliability is low” moment is to design sessions so that health is baked in from the start. Healthy sessions are not accidents: they result from clear purpose, good task design, well‑briefed judges, and enough comparisons to let the model discover a shared scale of quality. This post turns what you now know about rank order, judge misfit and item misfit into concrete design principles you can apply before, during and after a session.