Beyond GDPR: Using the UK AI Product Safety Standards to Judge Your Assessment Tools
In this series, we’ve been arguing that sovereignty is the only sustainable path for professional judgment in an agentic world. We started by looking at the infrastructure we’ll need for sovereign intelligence, then at the difference between sovereign systems and AI wrappers, and most recently at dark patterns that steer people into feeding someone else’s models.
The UK government has now quietly handed education a new lever: its Generative AI: product safety standards. These don’t just repeat GDPR. They spell out what “safe” looks like when AI is in the loop, especially around privacy, intellectual property, and manipulative design.
This post is about how those standards give schools, multi‑academy trusts (MATs) and ministries a way to operationalise the ideas we’ve been exploring so far.
From “Are we compliant?” to “Whose intelligence are we building?”
If you work in education, you’ve probably heard the same reassurance many times: “Don’t worry, we’re fully GDPR compliant.”
In practice, that often means: the paperwork exists, the DPO is satisfied, and the checkbox is ticked.
But the questions we’ve been raising in this series are different:
- Who controls the infrastructure where professional judgment is turned into data?
- Does your students’ work power your agents, or someone else’s?
- Are you choosing AI, or being nudged into it?
The new UK standards sit squarely in that space. They don’t just ask whether data is processed lawfully; they ask whether learners’ work is being used to build somebody else’s product, and whether users are being manipulated into saying yes.
In other words, they take the sovereignty concerns we’ve been discussing and give them regulatory teeth.
Three conversations the standards force you to have
You don’t need to quote the guidance at vendors. Instead, you can use it to frame three very human conversations about AI assessment tools.
1. “What are you doing with our learners’ work?”
In our post on AI wrappers, we showed how easy it is for a product to route your students’ scripts through a third‑party model and then reuse that content – even “anonymised” – to improve its own AI and publish impressive results.
The UK standards cut through that by saying: you must not use learners’ or teachers’ intellectual property for commercial purposes, including model training and product development, without proper consent from the people who own that IP.
So the conversation becomes:
- Are you using our scripts, portfolios or audio beyond the immediate purpose of our project?
- If so, who has actually agreed to that – and where?
A sovereignty‑first answer sounds like: “No, by default we don’t treat your learners’ work as fuel for our roadmap. If you ever want to take part in research or co‑development, that’s a separate and explicit choice.”
That is very different from the familiar “we may use anonymised extracts for research and improvement” buried deep in a privacy page.
2. “Can you show us, simply, where our data goes?”
When we wrote about building global infrastructure for professional judgment, we argued that organisations need to treat their judgment streams like a strategic asset: they should know where that data lives, where it flows, and which agents can act on it.
The standards take a similar line for safety. They expect products to explain, in accessible language:
- what data they collect,
- where it is processed and stored,
- which external services are involved, and
- how long it is kept.
If a vendor needs three meetings and a stack of PDFs to answer “where does my student’s script go when I turn AI on?”, that’s a problem. Sovereign organisations should be able to point to a single page or diagram and say: “Here is the path. Here are the jurisdictions. Here are the sub‑processors. Here is where it stops.”
That isn’t just about legal comfort; it’s about being able to plug that judgment stream into your own graph and your own agents in future, without surprises.
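One way to make that “single page” concrete is to hold the data path as a small, reviewable artefact rather than prose buried in a contract. The sketch below is purely illustrative and describes no specific product: the class names, fields and example hops (ProcessingStep, DataPath, the “third‑party model host”) are hypothetical placeholders for whatever your vendor actually tells you.

```python
from dataclasses import dataclass, field

@dataclass
class ProcessingStep:
    """One hop in the path a piece of learner work takes."""
    processor: str      # who handles the data (vendor, sub-processor, model host)
    purpose: str        # why they touch it
    jurisdiction: str   # where it is processed or stored
    retention: str      # how long it is kept at this hop

@dataclass
class DataPath:
    """A one-page answer to 'where does a student's script go when AI is on?'"""
    data_type: str
    steps: list[ProcessingStep] = field(default_factory=list)

    def summary(self) -> str:
        lines = [f"Data: {self.data_type}"]
        for i, step in enumerate(self.steps, 1):
            lines.append(
                f"{i}. {step.processor} ({step.jurisdiction}) - "
                f"{step.purpose}; retained {step.retention}"
            )
        return "\n".join(lines)

# Illustrative values only - replace with your vendor's actual answers.
script_path = DataPath(
    data_type="Student script (assessment upload)",
    steps=[
        ProcessingStep("Assessment platform", "Pairwise judging workflow", "UK", "life of project"),
        ProcessingStep("Third-party model host", "AI-assisted feedback", "EU", "deleted after response"),
    ],
)

print(script_path.summary())
```

If a vendor can fill in something this simple, and keep it up to date, you have the transparency the standards are asking for; if they can’t, that tells you something too.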
3. “Is AI a genuine choice, or the only ‘normal’ option?”
In the dark‑patterns post, we showed how consent screens can make AI feel like the only grown‑up option: big, colourful “Enable AI (recommended)” buttons, tiny “continue without AI” links, and benefit‑heavy language with the hard questions buried elsewhere.
The UK standards call this out explicitly. They warn against designs that manipulate users into actions they did not intend.
So the conversation shifts from “Do you have a consent screen?” to “How does that screen behave?”:
- Do staff see AI and human‑only options presented with equal weight?
- Do they see, on that same screen, a short explanation of what changes when they say yes?
- Can they reverse the decision later without losing access or data?
- Can they run high‑stakes projects without AI, without being treated as second‑class users?
If the honest answer to those questions is “not really”, then whatever the paperwork says, the product is not treating your sovereignty – or your users’ autonomy – with much respect.
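If you want to turn those questions into something you can run against a vendor demo, a plain checklist is enough. The sketch below is a hypothetical illustration, not anything taken from the UK guidance or from any specific product: the ConsentScreen fields and the audit rules simply encode the four questions above.

```python
from dataclasses import dataclass

@dataclass
class ConsentScreen:
    """Hypothetical description of an AI consent screen, as observed in a demo."""
    options_equal_weight: bool      # AI and human-only choices shown with equal visual weight
    explains_changes_inline: bool   # the same screen explains what changes when you say yes
    reversible_without_loss: bool   # the decision can be undone without losing access or data
    full_features_without_ai: bool  # high-stakes projects can run without AI, no second-class mode

def audit(screen: ConsentScreen) -> list[str]:
    """Return the checks a consent screen fails; an empty list means it passes."""
    failures = []
    if not screen.options_equal_weight:
        failures.append("AI is visually privileged over the human-only option")
    if not screen.explains_changes_inline:
        failures.append("No same-screen explanation of what saying yes changes")
    if not screen.reversible_without_loss:
        failures.append("The choice cannot be reversed without losing access or data")
    if not screen.full_features_without_ai:
        failures.append("Opting out degrades core functionality")
    return failures

# Example: a screen that only gets the inline explanation right.
print(audit(ConsentScreen(False, True, False, False)))
```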
What “good” looks like if you care about sovereignty
Seen through the lens of this series, the UK standards define a floor. Sovereignty defines a ceiling.
A product that is merely compliant might still:
- host all judgment data in someone else’s cloud, under someone else’s ultimate control;
- reuse “anonymised” student work to build its own AI advantage;
- present AI as the only realistic option in practice.
A sovereignty‑aligned product, by contrast, will try to guarantee three things:
- Your work builds your intelligence. Scripts, judgments and feedback are not quietly diverted into training a vendor’s proprietary models or benchmarks. The compound value stays with you.
- You can see – and change – the data path. It is easy to understand where data goes today, and realistic to move more processing into your own cloud or onto your own devices tomorrow.
- Consent is an honest moment, not a funnel. People can say yes or no to AI without being shamed, tricked, or locked out of core capabilities.
Those are design principles as much as legal ones.
Where RM Compare is trying to stand
As we move further into an agentic world, we’ve had to decide what kind of AI product we are willing to become.
So far, that has led us to a few non‑negotiables:
- We do not use candidates’ work to train RM‑owned models. Your judgments are not our training data.
- We design for a future where your judgment streams live in your graph, under your agents, not trapped in our portal.
- As we experiment with AI‑adjacent features, we are deliberately avoiding the dark patterns we’ve described: AI is never presented as the only “real” option, and the data story has to be visible at the same moment as the choice.
Will we get every detail right first time? Almost certainly not. But the standards are helpful here: they give us a public reference point against which you can hold us to account.
A simple way to use this with vendors
If you are talking to any AI assessment provider – including us – you don’t have to bring a policy binder. You can just say:
- “Show us, on one page, what happens to a student’s work when we turn AI on.”
- “Tell us, in one paragraph, what you are allowed to do with that work beyond our immediate project.”
- “Open your AI consent screen and walk us through every design choice on it.”
If the answers feel clear, bounded and reversible, you are probably in safe territory. If they feel hedged, complicated, or strangely one‑sided, the standards – and the sovereignty concerns we’ve been exploring – give you permission to say: “Not like this.”
In our piece on the sovereign exit, we argued that portability is the ultimate trust test: if you can’t take your judgment data and use it elsewhere, you never really owned it. The UK product safety standards give you another way to ask the same question in the present tense: who is allowed to use your learners’ work, where, and for whose benefit?
Because in an agentic world, the question is no longer just “Are we compliant?” It’s “Whose intelligence are we actually building?”