- Opinion
Skills England's New Framework – Why human judgement matters more than ever
In November 2025, Skills England released the UK Standard Skills Classification (UK‑SSC): a comprehensive mapping of 3,343 occupational skills, 4,926 knowledge concepts, 21,963 occupational tasks and 13 core skills, linked across occupations, qualifications and sectors. It is, on one level, a triumph of infrastructure thinking. For the first time, employers, course designers, policy makers and learners have a shared vocabulary to talk about what "skills" actually are.
But the release also exposes a tension that has been at the heart of UK education and skills policy for years: the tension between a complicated, reductionist representation of skills and the complex, emergent reality of how capabilities are actually developed and demonstrated.
This is where adaptive comparative judgement – and platforms like RM Compare – become strategically crucial.
The Problem with Ingredient Lists
The tension surfaced quickly. One education commentator captured it perfectly with an analogy:
"UK‑SSC is like going to a restaurant with a list of individual ingredients ('2 buns, 2 lettuce leaves, a slice of cheese, a vegetarian patty...') instead of ordering a meal. You get every component specified – but you lose any sense of how they fit together, what the whole thing tastes like, or why it matters. More problematically, you force the kitchen to become a short‑order operation with no sense of coherence or craft."
This is not a failure of Skills England's design. It is a structural inevitability. Skills England exists to serve labour‑market intelligence, workforce planning and occupational standards development. So its definitions are necessarily terse, task‑linked and job‑oriented. Even the core skills – "learning and investigating", "communication", "problem solving" – are profiled primarily through how they appear across occupations and jobs, not through their intrinsic value to learning, citizenship or human flourishing.
Take "communication": the SSC defines it as "the process of exchanging information through various means such as speech, writing, or other mediums." That is a throwback to mid‑20th‑century sender–receiver models. It ignores 70 years of communication theory, which emphasises co‑construction of meaning, relational and cultural dimensions, and power dynamics. It misses most of what communication scholars and educators mean by the term.
Why? Because that definition was produced by a network of policymakers, economists, standards bodies and employer representatives who talk primarily in terms of productivity, occupational fit and labour‑market signals. This is Conway's Law applied to skills frameworks: the structure of the classification mirrors the communication structure and blind spots of the organisations that built it. The resulting ontology is complicated, decomposable and manageable – but not complex, nuanced or emergent.
Why That Matters – and What Gets Lost
The consequence is predictable: as LSIPs, awarding bodies, universities and employers start to align curricula, assessments and workforce development to the UK‑SSC, there is real risk that skills get flattened into checklist compliance. Did the learner tick off "communication (K.0831)"? Did the programme map to these five SSC domains? Have we covered the "core skills"? The framework optimises for measurability and interoperability – not for depth, imagination or the kind of tacit, embodied knowing that underpins real expertise.
RM Compare's core strength – adaptive comparative judgement – exists precisely in the space that SSC cannot reach. ACJ asks expert judges to take a holistic view: "which piece of work is better overall?" This is not a reductive process. Research consistently shows it captures complex capabilities that rubrics and tick‑boxes miss: judgement, creativity, disciplinary fluency, the ability to integrate multiple dimensions of knowledge and skill in ways that feel natural and expert.
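To make the mechanics concrete: ACJ platforms turn those holistic "better overall" decisions into a rank order by fitting a pairwise‑comparison model. The sketch below is a minimal classical Bradley–Terry fit – an illustration of the general technique, not RM Compare's actual algorithm (which uses adaptive pairing and Rasch‑style scaling).

```python
from collections import defaultdict

def bradley_terry(judgements, n_iter=100):
    """Estimate a quality score per item from pairwise judgements.

    judgements: list of (winner, loser) pairs, each recording one
    holistic "which piece of work is better?" decision.
    Returns {item: strength}; higher means judged better overall.
    """
    wins = defaultdict(int)
    pair_counts = defaultdict(int)   # comparisons per unordered pair
    items = set()
    for winner, loser in judgements:
        wins[winner] += 1
        pair_counts[frozenset((winner, loser))] += 1
        items.update((winner, loser))

    strength = {i: 1.0 for i in items}
    for _ in range(n_iter):
        # Standard minorise-maximise update for the Bradley-Terry model.
        new_strength = {}
        for i in items:
            denom = sum(
                n / (strength[i] + strength[j])
                for pair, n in pair_counts.items() if i in pair
                for j in pair - {i}
            )
            new_strength[i] = wins[i] / denom if denom else strength[i]
        # Normalise so strengths stay on a comparable scale.
        total = sum(new_strength.values()) or 1.0
        strength = {i: s * len(items) / total
                    for i, s in new_strength.items()}
    return strength

# Toy judging session: portfolio A beats B and C; B beats C.
scores = bradley_terry([("A", "B"), ("A", "C"), ("B", "C")])
ranked = sorted(scores, key=scores.get, reverse=True)
```

Note one caveat of this bare-bones sketch: an item that never wins a comparison is driven towards a strength of zero; production systems add priors or regularisation, and schedule pairings adaptively so every item gets informative comparisons.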
The Opportunity: Being the Bridge
Here is where the new environment creates unprecedented opportunity. Institutions and system‑level actors need to align to SSC – for funding, for regulatory credibility, for interoperability with employers and other providers. But they also know that SSC definitions are thin and that real assessment has to preserve the texture of disciplinary knowledge and authentic capability.
RM Compare can become the bridge between those worlds. Specifically:
- Map complex judgement to standardised language. Use ACJ to judge authentic work – projects, portfolios, oracy, clinical or design performances – without imposing a rubric. Then, map the resulting rank orders and exemplars to SSC skills and core skills. This gives policymakers, LSIPs and employers the standardised data they need, while preserving the holistic, expert judgement that actually validates performance.
- Provide "translation" capabilities. Let institutions define rich, discipline‑specific learning outcomes and graduate attributes that matter deeply to their academic identity. Then generate SSC mappings as a separate layer for reporting and funding, not as the primary language of practice. This resists the reductionism while remaining interoperable.
- Supply trusted evidence to SSC‑driven processes. Skills England's roadmap and local skills improvement plans emphasise robust evidence of skills development, not just planning documents. RM Compare's research on reliability and scalability (e.g. combined ranks, on‑demand "rulers") positions it as a credible way to generate that evidence at scale – especially for complex, portfolio‑based assessment where traditional test psychometrics do not apply.
- Resist gaming and compliance creep. There is genuine risk that SSC‑linked funding and accountability encourage surface‑level compliance: design tasks that merely name SSC skills, optimise for coverage rather than depth. ACJ, by foregrounding human judgement and expert disagreement, makes it harder to game. Expert judges cannot easily be fooled into ranking shallow work highly if the task is genuinely complex and the judges are knowledgeable.
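The "translation layer" idea in the list above can be sketched as a thin post‑processing step: judge holistically first, then attach SSC codes purely for reporting. In the sketch below, "communication (K.0831)" is the code quoted earlier in this piece; every other outcome name and code is an invented placeholder, not a verified UK‑SSC entry.

```python
def to_ssc_layer(rank_order, outcome_to_ssc, portfolio_outcomes):
    """Attach SSC codes to a holistic ACJ rank order for reporting.

    rank_order:         portfolio ids, best first, from holistic judging
    outcome_to_ssc:     local outcome -> SSC skill codes it maps onto
    portfolio_outcomes: portfolio id -> local outcomes it evidences
    """
    report = []
    for position, pid in enumerate(rank_order, start=1):
        codes = sorted({
            code
            for outcome in portfolio_outcomes.get(pid, [])
            for code in outcome_to_ssc.get(outcome, [])
        })
        report.append({
            "portfolio": pid,
            "acj_position": position,   # holistic judgement stays primary
            "ssc_skills": codes,        # standardised layer, reporting only
        })
    return report

# Local, discipline-specific outcomes remain the design language;
# the SSC mapping is generated afterwards as a separate layer.
mapping = {
    "persuasive oral pitch": ["communication (K.0831)"],
    "iterative prototyping": ["problem solving (placeholder code)"],
}
evidence = {"A": ["persuasive oral pitch", "iterative prototyping"],
            "B": ["persuasive oral pitch"]}
report = to_ssc_layer(["A", "B"], mapping, evidence)
```

The design point is the direction of dependency: the rank order never consults the SSC mapping, so the standardised vocabulary can evolve (or be swapped out) without touching the judging process itself.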
For Practitioners: What This Means Now
If you are a university programme leader, an awarding organisation, an employer or an LSIP convenor, the playbook is clear:
- Use UK‑SSC as a translation layer, not a design constraint. Design your curriculum, assessment and professional standards around what you believe learners and professionals need to know and be able to do. Then map that richly to SSC for alignment and reporting purposes.
- Invest in holistic judgement. Where performance is complex and multidimensional – capstones, portfolios, applied projects, clinical or design work – use adaptive comparative judgement. It is faster, more reliable and more defensible than point‑in‑time rubrics, and it captures dimensions that reductive checklists miss.
- Make the mapping transparent. Show learners, employers and regulators where your local conceptions of capability extend or even challenge the SSC. That is not a bug; it is a feature. It signals that you are not blindly following a formula, but thoughtfully integrating external frameworks into your own standards.
- Participate in SSC evolution. The classification is designed to iterate (next update in 2026, then on a 5‑year cycle) based on job vacancy data, expert feedback and community input. If you find that SSC definitions miss important dimensions of capability – as many educators will, especially around communication, creativity and critical inquiry – feed that back. The framework is meant to improve, not ossify.
The Larger Argument
UK‑SSC is a significant infrastructure intervention and it will drive alignment across the sector. But infrastructure is not the same as pedagogy, and a standardised classification is not the same as a rich curriculum or a meaningful assessment. The real value lies in using SSC to create common language and enable system‑level coordination, while protecting the spaces where educators, professionals and learners develop and demonstrate the complex, emerging capabilities that matter most.
RM Compare – and adaptive comparative judgement more broadly – is one of the few assessment approaches built explicitly for that tension. It works at the human, judgement‑based level while still speaking the standardised dialect. In a world where skills frameworks are getting more powerful (and more reductive), that bridging capacity is not a niche feature. It is essential infrastructure.
The assessment crisis in the age of AI is real. But so is the risk that we over‑correct by retreating into simplified, tickable, machine‑readable frameworks. The answer is not to reject standardisation. It is to ensure that human judgement, disciplinary expertise and holistic assessment sit at the heart of how we validate and develop capability – and that standardised frameworks serve that judgement, rather than replace it.
References
- https://www.gov.uk/government/publications/uk-standard-skills-classification-interim-development-report/the-uk-standard-skills-classification
- https://feweek.co.uk/skills-england-launches-beta-version-of-skills-classification-tool/
- https://compare.rm.com/blog/
- https://wonkhe.com/blogs/skills-england-has-a-new-way-to-talk-about-skills-and-the-sector-needs-to-listen/
- https://assets.publishing.service.gov.uk/media/652fdb9d92895c0010dcb9a5/A_skills_classification_for_the_UK.pdf
- https://skillsengland.blog.gov.uk/2025/11/27/introducing-the-uk-standard-skills-classification-a-new-way-to-explore-skills-and-occupations-by-frank-bowley/
- https://www.gov.uk/government/publications/uk-standard-skills-classification-interim-development-report/summary-of-methodology
- https://www.structural-learning.com/post/communication-theories
- https://assets.publishing.service.gov.uk/media/5d71187ce5274a097c07b985/21st_century.pdf
- https://en.wikipedia.org/wiki/Conway's_law
- https://www.niskanencenter.org/conways-law-at-government-scale/
- https://compare.rm.com/blog/2022/01/an-introduction-to-adaptive-comparative-judgement-acj/
- https://compare.rm.com/blog/2024/06/new-research-validates-rm-compare-ranks-as-effective-on-demand-rulers/
- https://rethinkingassessment.com/rethinking-blogs/comparative-judgement-and-the-transformative-power-of-holistic-assessment/
- https://en.wikipedia.org/wiki/Adaptive_comparative_judgement
- https://compare.rm.com/blog/2025/09/welcoming-the-new-direction-rm-compare-at-the-heart-of-skills-based-assessment/
- https://compare.rm.com/education/
- https://compare.rm.com/blog/2025/04/rm-compare-assessment-as-learning-developing-tacit-knowledge-of-what-good-looks-like/
- https://www.gov.uk/government/news/skills-england-unveils-roadmap-for-local-skills-to-drive-growth
- https://www.gov.uk/government/publications/local-skills-improvement-plans/guidance-for-developing-a-local-skills-improvement-plan-lsip
- https://compare.rm.com/blog/2025/06/tackling-reliability-in-adaptive-comparative-judgement-what-rm-compare-users-need-to-know/
- https://www.rm.com/assessment/services/adaptive-comparative-judgement
- https://www.gov.uk/government/publications/uk-standard-skills-classification-interim-development-report/extending-the-standard-skills-classification
- https://www.gov.uk/government/publications/uk-standard-skills-classification-interim-development-report/updating-the-standard-skills-classification
- https://www.gov.uk/government/publications/uk-standard-skills-classification-interim-development-report/use-cases