Posts for category: Opinion
-
Post‑16 pathways reform: three assessment questions AOs and providers need to answer now
The government’s post‑16 Level 3 and below reforms are no longer abstract policy; they are now a concrete redesign of the 16–19 landscape. A Levels, T Levels, new V Levels and two reformed Level 2 pathways will replace a crowded field of overlapping qualifications. In that world, assessment quality, standards and progression evidence stop being technical details and become existential questions for awarding organisations and providers.
-
What Might an Appeals Process Look Like with RM Compare?
This post sketches what a fair, defensible appeals process can look like when RM Compare sits at the heart of grading. The short version is that appeals do not disappear; they become more transparent and more evidence‑rich, because every judgement and every decision point is logged.
-
Workload, Grades and Transparency: Why ACJ Needs a Different Starting Point
When ACJ is judged by rubric‑first assumptions, it will always look like a poor imitation of traditional marking: more judgements, awkward grade mapping, fewer boxes ticked. If we instead start from its own epistemology - holistic, relative, expert judgement as the primary evidence of quality - then workload becomes a design and scheduling question, not an intrinsic flaw.
-
What does a four‑year‑old Clumber Spaniel named Bruin have in common with Olympic breakdancing and modern examinations?
On the surface, nothing at all. Bruin is a gentle, long‑backed gundog who has just trotted his way to Best in Show at Crufts, padding calmly down the famous green carpet while the NEC holds its breath. Breakers will spin and freeze their way across an Olympic floor to pounding music. Examiners sit alone with stacks of scripts and detailed mark schemes. Three very different worlds, three very different kinds of performance.
-
If AI Is Serious About Learning Outcomes, ‘Ground Truth’ Has to Mean More Than Last Year’s Exam Scores (Part 2/2)
In Part 1 of this series, we asked who gets to define “learning” in an AI world and argued for a human‑grounded validity layer alongside AI‑native analytics. That conversation becomes very concrete when you look at one small, easy‑to‑miss element in OpenAI’s Learning Outcomes Measurement Suite diagram.
-
Who Gets to Define “Learning” in an AI World? (Part 1/2)
OpenAI’s new “Learning Outcomes Measurement Suite” is more than a product announcement; it is a bid to define how AI‑mediated learning will be measured – and, by implication, what will count as learning in the years ahead.
-
Newsletter March 2026
For this edition we have produced two series focusing on critical considerations in the world of assessment, and in education more generally. The 'AI world' is moving fast, and finding time for the necessary thinking and reflection has never been more important.
-
Roadmap Update - Compass and maps
In the world of education and assessment, the terrain shifts daily. A “map” - whether it’s a static dashboard or a rigid three-year product plan - is only truly useful if the world stands still. But when the environment moves faster than your plan, a map quickly becomes a liability.
-
The Recruitment Arms Race: Why We Are Losing the "Signal in the Noise"
The latest edition of the Inside your Ed podcast, titled "Why are so many graduates struggling to find a job?", highlights a growing crisis in the graduate labour market. While much of the conversation focuses on economic cooling, a key section reveals a more systemic failure: the collapse of the traditional hiring process.