Opinion
ChatGPT Three Years On: What Next for Assessment?
Three years on: what’s really changing?
Three years after the launch of ChatGPT, generative AI is reshaping assessment, but the picture is neither apocalyptic nor trivial. Students and teachers are using AI tools daily, and assessment providers like RM Assessment are working hard to turn this disruption into lasting improvements in fairness, speed, and insight.
[Figure: Assessment Timeline (1950–present), annotated “...and then ChatGPT was released...”]
The risks—and what matters most
AI brings opportunities, but it also surfaces new risks:
- Authenticity challenges: AI-written responses can undermine confidence in who did the work.
- Trap of text-only tasks: Relying exclusively on written responses invites generic outputs from AI.
- Opaque scoring: Black-box marking risks damaging trust in outcomes.
- Unequal access: Not everyone has the same tools or skills to use AI, raising fairness questions.
RM Assessment approaches these risks as practical, designable challenges, not insurmountable threats.
Where AI is making assessment better
By researching and building a modern AI marking capability, RM Assessment is aiming for smarter, faster, and more consistent feedback at scale. RM’s ongoing investment in AI shows not only in advanced proof-of-concept deployments but also in how the technology is rolled out safely and incrementally under real-world conditions, always with robust human oversight.
- AI helps teachers and examiners provide faster, richer feedback.
- AI supports more varied and inclusive task types, not just essays or multiple choice.
- RM’s proof-of-concept AI marking pilots show that reliability and speed can increase while fairness is maintained, especially when the AI is aligned with human consensus.
RM Ava: A unified platform for assessment
These innovations come together in RM Ava, a next-generation digital assessment platform that brings RM’s world-class tools, including RM Compare, onto a single, cloud-based system. RM has invested heavily to ensure Ava supports the full assessment lifecycle, from creation to delivery, AI-supported marking, and actionable feedback. Ava’s design is modular, scalable, and ready to grow as technologies and customer expectations evolve.
- RM Ava unifies the assessment process, from building tests through secure digital delivery to AI-enhanced marking, in one place.
- RM partners with institutions worldwide, including through recent contracts with the International Baccalaureate and Cambridge University Press and Assessment, to co-develop new best practices for digital and AI-powered assessment.
A partnership built on trust, evidence, and transparency
At the centre of RM Assessment’s approach is a commitment to “AI as enhancement, not replacement.” Key principles guide this:
- Human consensus remains essential: Even as AI scales up, RM Compare continually calibrates and audits the system against what expert markers agree is good work.
- Radical transparency: RM publishes evidence about its AI’s strengths and weaknesses, offers clear ways for partners to challenge or override decisions, and works with regulators to ensure accountability.
- Co-design with the sector: RM works hand-in-hand with schools, exam boards, and professional bodies, piloting technology, sharing research, and learning together.
RM Assessment’s vision is neither hype nor denial. It is a partnership model underpinned by clear investment in research, rigorous validation, and an openness to sector oversight. Through RM Ava, RM Compare, and a growing AI marking capability, RM Assessment is helping to shape digital assessment responsibly for a new era.