Newsletter May 2026
There has never been a more uncertain time in education. There has never been a more exciting time to be in education.
We’re pleased to announce that ⏱️RM Compare | NOW is now in BETA. It is a lightweight RM Compare experience designed for quick judgements, allowing users to capture or upload an item, compare it to a ready-made standard, and review a score in just a few steps.
Just like everyone else in the UK, we are getting very excited about National Tea Day, which takes place on 21st April. The day is a great opportunity to share tea brewing preferences, and the strength of the perfect 'cuppa' is always hotly debated. So we thought it would be interesting to get AI's view.
In 1792, revolutionaries in Paris abolished a king, Americans calmly re‑elected a president, and Cambridge quietly invented something that still shapes millions of lives every year: exam marking. While politics and industry were being rebuilt in public, assessment was being rebuilt on paper. Two centuries later, we are still living inside that decision – and only now starting to see its limits.
RM Assessment, which provides RM Compare, is moving to a new legal entity from 1 June 2026.
Every few years, a body of evidence emerges from an unexpected direction that turns out to be exactly what education needed to hear. We think this might be one of those moments.
When important decisions, results or reputations depend on a piece of software, "quality" stops being a nice-to-have. It becomes a question of whether that system will be there when people need it, and whether it can adapt as your needs change.
AI assessment has a dignity problem. Not a technology problem, or even just a fairness problem, but a problem with how it treats people at precisely the moments they are most exposed and most human.
Assessment systems often talk about standards, but too often those standards remain abstract. Teachers, examiners and assessors are expected to align to a wider benchmark, yet in day-to-day practice they usually see only the work directly in front of them: their own class, their own cohort, their own centre. That gap matters.
If the first post in this series traced the industrial birth of marking, the second described the temptation to use AI as a faster horse, and the third argued for rediscovering human judgement, this final post asks the practical question: what would assessment look like if it were designed for the world now emerging rather than the one that produced marks and grades?
If the first post argued that marking is a child of the Industrial Revolution, and the second showed how AI is mostly being used to build faster horses, this third post is about something older and more fundamental: human judgement itself.