The Global Standard for Adaptive Comparative Judgement
Replace guesswork with
validated human consensus.
Whether assessing student portfolios, hiring talent, or awarding grants, RM Compare harnesses the power of tacit knowledge to produce reliability data that rubrics and AI cannot match.
For Schools
Assess Creativity & Oracy at scale
Move beyond "tick-box" rubrics. Reliably assess holistic skills and standardise grading.
Explore Education →
For Recruitment
Stop Screening, Start Ranking
CVs are broken by AI. Use ACJ to rank candidates based on authentic work samples.
Explore Hiring →
For Award Orgs
Defensible Standards
Solve the "Examiner Crunch." Secure grade boundaries and prevent inflation.
Explore Awarding →
For Universities
Admissions & Peer Learning
Empower students to learn by evaluating peers. Validated admissions assessment.
Explore Higher Ed →
See the Platform in Action
Assessment in the Age of AI
See why traditional assessment is struggling, and how Comparative Judgement provides the solution.
RM Compare 1 minute explainer
Brilliantly simple. Awesome power.
Backed by science and deep research
Professor Scott Bartholomew explains why ACJ is so powerful.
Trusted by world-leading organizations
The Science of Comparative Judgement
Why "Better/Worse" beats "7 out of 10"
Humans are notoriously bad at absolute judgement. We struggle to consistently score a piece of work against a complex rubric.
However, we are highly accurate at Relative Judgement. We can instantly spot which of two items is "better."
1
Upload Items
Portfolios, video interviews, essays, code, or audio. Any file type is accepted.
2
Compare Pairs
Judges see two items side-by-side and simply decide: "Which is better?"
3
Get the Truth
Our algorithm generates a highly reliable rank order (0.9+ reliability) and valid performance data.
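The three steps above can be sketched in code. The following is a minimal illustration only, not RM Compare's actual algorithm: it estimates a rank order from pairwise "which is better?" judgements using a simple Bradley-Terry-style iterative update, a standard statistical model for pairwise comparison data. RM Compare's adaptive pairing and reliability reporting are more sophisticated; the function and item names here are purely hypothetical.

```python
from collections import defaultdict

def bradley_terry_ranks(comparisons, iterations=100):
    """Estimate a rank order from pairwise judgements.

    comparisons: list of (winner, loser) tuples, one per judgement.
    Returns items ranked best-first by an estimated 'ability' score,
    using simple Bradley-Terry minorisation-maximisation updates.
    """
    wins = defaultdict(int)    # wins[i] = times item i was judged better
    pairs = defaultdict(int)   # pairs[(i, j)] = times i and j were compared
    items = set()
    for winner, loser in comparisons:
        wins[winner] += 1
        pairs[tuple(sorted((winner, loser)))] += 1
        items.update((winner, loser))

    score = {i: 1.0 for i in items}  # start all items at equal ability
    for _ in range(iterations):
        new = {}
        for i in items:
            # Denominator: comparisons involving i, weighted by how
            # close i's score is to each opponent's.
            denom = 0.0
            for (a, b), n in pairs.items():
                if i in (a, b):
                    j = b if i == a else a
                    denom += n / (score[i] + score[j])
            new[i] = wins[i] / denom if denom else score[i]
        # Normalise so scores keep a fixed total (fixes the model's scale).
        total = sum(new.values())
        score = {i: v * len(items) / total for i, v in new.items()}

    return sorted(items, key=score.get, reverse=True)

# Hypothetical judgements: A beats B twice, A beats C, B beats C.
ranked = bradley_terry_ranks([("A", "B"), ("A", "B"), ("A", "C"), ("B", "C")])
print(ranked)  # A ranks first, then B, then C
```

In a real ACJ session the pairs are chosen adaptively (close-scoring items are compared more often), which is what drives reliability above 0.9 with relatively few judgements per item.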
RM Compare is developing fast
Stay up to date with the very latest information
-
Restoring Trust in Meritocracy: How Organisations Are Fighting Back Against the Assessment Crisis (3/3)
In our first two blog posts, we documented the crisis: groundbreaking research from Princeton and Dartmouth showing how generative AI has destroyed the signalling value of written applications, and we explored why this threatens educational meritocracy and demands new assessment approaches. The evidence is overwhelming. The consequences are severe. Traditional written signals no longer work.
-
When Written Applications No Longer Signal Ability: "This is a crisis" (2/3)
The rise of generative AI has disrupted far more than how we write: it has fundamentally broken a mechanism that labour markets, universities, and employers have relied on for decades to identify talent. Groundbreaking new research from Princeton and Dartmouth reveals just how profound this disruption is, and why solutions like RM Compare's Adaptive Comparative Judgement are now essential.
-
Meritocracy in selection - the efficiency paradox of selecting and hiring in the age of AI (1/3)
The rise of generative AI in recruitment able to craft convincing CVs, cover letters, and even interview responses has triggered profound changes in how talent is identified and selected. With traditional signals of merit increasingly at risk of manipulation or obsolescence, forward-thinking organizations are searching for dependable, evidence-driven alternatives to ensure fairness and real quality in assessment.