Higher-Ed

For Universities & Research Institutions

Select for Potential.
Teach for Mastery.

Admissions are compromised by AI. Student feedback is slow. RM Compare addresses both: a valid way to assess applicant potential, and a pedagogical tool that empowers students to learn by evaluating their peers' work.

0.9+

Reliability Coefficient

Validated assessment data

+1 Grade

Learning Gain

For students who judge peers

100%

Feedback Agency

Develops metacognition

Admissions & Widening Participation

The Personal Statement is Dead.

Generative AI writes better personal statements than most 18-year-olds. Universities need a new, un-gameable way to assess student potential and "fit" beyond grades.

The Solution: Use ACJ to judge portfolios, video submissions, or "show your working" artefacts. Because ACJ relies on holistic expert consensus, it spots the difference between AI polish and authentic human potential.

Holistic Admissions

The Platform in Action

Pedagogy & Practice

Learning by Evaluating

Professor Scott Bartholomew introduces his groundbreaking research.

The Student Experience

Students explain the benefits of peer judgement

Teaching & Learning

Assessment *as* Learning

Providing detailed feedback on complex coursework for large cohorts is unsustainable. But peer assessment is often viewed as "low quality."

RM Compare changes the paradigm. By judging anonymized pairs of peer work, students are forced to engage critically with the success criteria. They internalize "what good looks like," improving their own subsequent performance by a full grade boundary (Purdue University).

Peer Learning Cycle

The "Flash" Review

Active Learning in 10 Minutes

Embed a short ACJ session into Canvas or Blackboard before a lecture.

Ask students to compare 3 anonymized pieces of work from previous years. In just 10 minutes, the entire cohort is calibrated on the standards before they even start writing. It turns passive listeners into active evaluators.

Active Learning

Research Offices & Funding

Democratise Grant Allocation

Distribute internal research funding fairly. Use ACJ to crowdsource the review of grant applications across the faculty.

⚡ Reduce Admin Burden

Stop relying on a small committee to read hundreds of applications. Distribute the load across the entire faculty: 20 judges spending 30 minutes each deliver better reliability than two judges spending days reading.

📊 Transparent Decisions

Remove the "Old Boys Club" bias from funding. The ACJ algorithm ensures every application is judged on merit, providing a mathematically defensible rank order for every grant awarded.

Grant Dashboard
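For readers who want to see the mechanics, here is a minimal sketch of how many small pairwise judgements can be combined into a single rank order. It uses a generic Bradley-Terry style estimate in Python; it is not RM Compare's own algorithm, and the function name, application IDs, and judgement data are hypothetical.

```python
# Illustration only: a generic Bradley-Terry style ranking from pairwise judgements.
# This is NOT RM Compare's algorithm; the item IDs and data below are hypothetical.

from collections import defaultdict

def rank_from_judgements(judgements, iterations=200):
    """judgements: list of (winner_id, loser_id) tuples collected from judges."""
    items = {i for pair in judgements for i in pair}
    strength = {i: 1.0 / len(items) for i in items}   # equal starting "ability"
    wins = defaultdict(int)
    for winner, _ in judgements:
        wins[winner] += 1

    for _ in range(iterations):                       # simple fixed-point (MM) update
        new_strength = {}
        for i in items:
            denom = sum(
                1.0 / (strength[i] + strength[other])
                for winner, loser in judgements
                for other in ([loser] if winner == i else [winner] if loser == i else [])
            )
            new_strength[i] = wins[i] / denom if denom else strength[i]
        total = sum(new_strength.values())
        strength = {i: s / total for i, s in new_strength.items()}  # normalise

    ranked = sorted(items, key=lambda i: strength[i], reverse=True)
    return ranked, strength

# Six pairwise decisions over four hypothetical grant applications.
decisions = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "D"), ("D", "C"), ("B", "D")]
order, scores = rank_from_judgements(decisions)
print(order)   # most- to least-preferred, e.g. ['A', 'B', 'D', 'C']
```

In practice, ACJ platforms typically also choose which pair each judge sees adaptively, so every judgement adds maximum information, and they report a reliability coefficient for the resulting scale.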

Assessment for the Modern University

Proven in disciplines where "right" and "wrong" are not enough.

Architecture & Design Law & Argumentation Engineering Solutions Creative Writing

"Comparative Judgement has transformed how we assess students. It allows us to measure constructs that were previously impossible to grade reliably, and the peer learning aspect is phenomenal."

Caine College of the Arts, Utah State University