RM Compare Newsletter - July 2025
Welcome to the latest RM Compare newsletter, this time with 10 new posts to dig into.
Across industries and institutions, those responsible for recruitment and admissions are wrestling with a perfect storm: soaring application numbers, the rapid rise of AI-generated content, and the ever-present need for fairness and authenticity. The result? Recruiters and admissions teams are overwhelmed, and the risk of missing out on genuine talent has never been higher.
Change is a constant in today’s business landscape. Whether it’s adapting to new technologies, evolving customer expectations, or shifting internal processes, organisations need tools that help them navigate transformation smoothly and confidently. At RM Compare, we believe that effective change starts with better decision-making—and that’s exactly where our platform shines.
We have produced new guides to help with key RM Compare processes. You can view them online or download and print them if you prefer. We have started with three key processes and will add more as we progress.
In the 9th edition of the E-Assessment Awards, RM Compare was recognised again for its innovative approach and high quality.
In June 2025, Apple published a landmark study that has sent shockwaves through the AI and assessment communities. The research, titled The Illusion of Thinking, rigorously tested the reasoning abilities of the most advanced AI models—so-called large reasoning models (LRMs) from OpenAI, Google, Anthropic, and others—using a series of classic logic puzzles designed to scale in complexity. The findings have profound implications for how we understand AI’s capabilities, especially in the context of educational assessment.
A bumper edition with THIRTEEN new posts - that takes us to well over 200 blog posts in total. This month we are looking forward to attending the International E-Assessment Awards, where we are a finalist in the most innovative solution category. As part of this we have created a new Innovation blog series, which we will be adding to over the coming weeks.
In the rapidly evolving world of educational assessment, efficiency and innovation are no longer just buzzwords—they’re necessities. If you’re using RM Compare, you already know the power of Adaptive Comparative Judgement for delivering robust, scalable assessment. But what if you could take your productivity and creativity to the next level? Enter the Large Language Model (LLM).
We get asked a lot of questions about RM Compare and AI - here are some of the most common ones, together with our responses.
In today’s world of rapid technological change, the dominant narrative is that innovation should serve efficiency. “Get more done for less” is the rallying cry, especially in the age of AI, where speed, automation, and optimisation are often seen as the ultimate goals. But what if this relentless pursuit of efficiency is not just limiting, but counterproductive?
The faster we learn, the quicker we can produce valuable software while maintaining the very highest quality. This understanding underpins innovation.
There may be times when you want to work with your own IT support - here are some things to consider.
RM Compare is an online system - there are a number of options for getting content digitised and uploaded. Follow our step-by-step walkthroughs to learn more.