Who Owns Student Work When AI Is in the Loop? (2/4)

When people talk about AI in assessment, the conversation usually goes straight to marking accuracy, bias or workload. Underneath all of that sits a quieter question that is just as important: who actually owns the work being processed – and what does that ownership mean once AI is involved?

You can read the Government report (March 2026) here.

This post looks at student work as creative property, not just “data”; explores what AI changes (and doesn’t change) about the picture; and sets out how we’re trying to respect that ownership in RM Compare.

Students as creators, not raw material

Before AI enters the picture, the starting point is simple. When a student writes an essay, produces an artwork, records a performance or assembles a portfolio, they are usually the author of that work. In most UK school and college settings, that means the student is the first owner of copyright. The school or trust may hold copies for teaching, assessment and administration, but that doesn’t magically transfer ownership of the work itself.

In other words, student work is not just “input to a system”. It is creative work, belonging to a person, being handled for particular purposes. Copyright law is built on the idea that creators should have some say over how their work is used – and that basic idea does not vanish because the work happens to travel through an AI‑enabled platform.

What AI changes – and what it doesn’t

Putting AI into the assessment workflow adds new steps, but it doesn’t overturn those fundamentals.

Consider a student who drafts an essay, uses an AI tool for grammar suggestions or idea prompts, and then edits and submits a final piece. That final piece is still a human‑authored work. The student remains the author and rights holder. AI assistance does not somehow make the work “belong” to the tool, nor does it dissolve the student’s status as creator.

Where AI does make a difference is in the range of things that can be done with student work once it has been digitised. There is a crucial distinction between using work to assess learning here and now, and using the same work to create models that will outlive the original assessment.

On one side, there is what you might call assessment use: presenting work to markers or judges, running comparative judgement, calculating scores and reliability statistics, generating reports, and archiving or exporting results. All of that is directly tied to the purpose the work was collected for: assessing what a student knows and can do.
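To make that concrete, here is a minimal sketch of the kind of processing “assessment use” covers: turning one session’s pairwise judgements into a rank order and a reliability estimate, then stopping. It assumes a standard Bradley–Terry model fitted with the classic MM iteration; the data, script names and statistics are invented for illustration and this is not RM Compare’s actual algorithm.

```python
# A minimal sketch of "assessment use": pairwise judgements in,
# rank order and reliability out, nothing retained for future models.
# Assumes a Bradley-Terry model; illustrative only.
import math
from collections import defaultdict

# Invented data: each tuple is one judgement, (winner, loser).
judgements = [
    ("script_A", "script_B"), ("script_A", "script_C"),
    ("script_B", "script_C"), ("script_C", "script_D"),
    ("script_A", "script_D"), ("script_B", "script_D"),
    ("script_B", "script_A"), ("script_C", "script_B"),
    ("script_D", "script_C"),
]

items = sorted({s for pair in judgements for s in pair})
wins = defaultdict(int)        # judgements won, per script
pairings = defaultdict(int)    # comparisons made, per unordered pair
for winner, loser in judgements:
    wins[winner] += 1
    pairings[frozenset((winner, loser))] += 1

# Fit Bradley-Terry strengths p_i with the classic MM update:
#   p_i <- wins_i / sum_j( n_ij / (p_i + p_j) )
strengths = {i: 1.0 for i in items}
for _ in range(200):
    updated = {}
    for i in items:
        denom = 0.0
        for j in items:
            n = pairings.get(frozenset((i, j)), 0)
            if n:
                denom += n / (strengths[i] + strengths[j])
        updated[i] = wins[i] / denom
    total = sum(updated.values())
    strengths = {i: p / total for i, p in updated.items()}

# Put scores on a logit scale and estimate each script's standard
# error from the Fisher information of its comparisons.
measures = {i: math.log(p) for i, p in strengths.items()}
se_sq = {}
for i in items:
    info = 0.0
    for j in items:
        n = pairings.get(frozenset((i, j)), 0)
        if n:
            p_win = strengths[i] / (strengths[i] + strengths[j])
            info += n * p_win * (1.0 - p_win)
    se_sq[i] = 1.0 / info

# Scale separation reliability: the share of score variance not
# explained by measurement error. Tiny demos like this score poorly;
# real sessions use many judgements per script.
mean_m = sum(measures.values()) / len(items)
var_m = sum((m - mean_m) ** 2 for m in measures.values()) / len(items)
ssr = 1.0 - (sum(se_sq.values()) / len(items)) / var_m

for i in sorted(items, key=measures.get, reverse=True):
    print(f"{i}: measure {measures[i]:+.2f} (SE {math.sqrt(se_sq[i]):.2f})")
print(f"scale separation reliability: {ssr:.2f}")
```

The point of the sketch is where it ends: the scripts go in, a rank order and some statistics come out, and nothing persists to shape future systems. Training use begins exactly where this sketch stops.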

On the other side, there is training use: feeding scripts into a model so that it can provide feedback to future cohorts; fine‑tuning a grading model that will continue to be used after this particular series is over; or using past work to build a commercial product that other customers can buy. These activities involve turning student work into a resource for future systems, not just finishing the job you started.

From a copyright and trust perspective, those two categories are not the same. The first is about delivering the service you signed up for. The second is about creating new assets and capabilities – and that should be treated as a separate decision, with its own justification.

It is also worth tackling a common assumption head-on: the idea that once work is anonymised, ownership somehow evaporates. Removing names and identifiers is vital for privacy and data protection, but it does not change who created the piece of writing or the artwork. If an anonymised essay is still a recognisable, original text, it remains a copyrighted work. Using it to train a model is still using someone’s creative output, even if you cannot easily tell whose.

How we’re thinking about ownership in RM Compare

Given that backdrop, we’ve tried to keep our stance on student work straightforward and conservative.

Inside RM Compare, we start from the assumption that the scripts and artefacts passing through the system are creative works belonging to learners. The judgements, rankings and analytics generated around them are professional outputs belonging to the institutions and professionals who contribute them. Our role is to provide the infrastructure and algorithms that let that interaction happen at scale, not to accumulate a proprietary library of student work that we can quietly repurpose.

In practical terms, institutions remain the owners and controllers of their assessment assets. They decide who can see what, how long different kinds of data are retained, and when and how they want to export it. If they leave, they can take their data – including judgement graphs and results – with them. We see that as a basic part of treating assessment outputs as institutional property rather than vendor collateral.

Most importantly for this series, our default position is that student work in RM Compare is used to run assessment and moderation, not as a standing dataset for training general‑purpose models. Any move beyond assessment operations – for example, to help a trust train its own feedback model or to contribute to a sector‑wide project – would need a separate conversation and explicit agreement with the institutions involved, with a clear explanation of benefits and boundaries. We are not opposed to those possibilities; we simply do not assume we have the right to pursue them unilaterally.

The same logic applies beyond RM Compare. Across RM Assessment, where colleagues are developing AI‑supported marking and analytics, we’re trying to treat scripts and responses as creative works entrusted to us, not as raw material we automatically own. Any use of that material for model training has to start from the same place: students as creators, institutions as custodians, and RM as a service provider that does not help itself to broader rights without a clear, explicit mandate.

Where other patterns may run into trouble

Not every AI‑enabled tool takes this route. A different pattern has already emerged in some parts of the market: vendors reserve the right to reuse anonymised student work for “research”, “benchmarking” or “product improvement”, and student scripts – or images of them – are sent to large external models for automated judging or feedback under broad contractual language. Over time, those systems come to depend on models that have been trained, in part, on student work drawn from many institutions, with little visibility for learners or teachers.

Today, that may well sit within the letter of privacy and contract law, especially where institutions have clicked through the required agreements. But as copyright policy evolves – and as education‑specific AI guidance matures – these designs may face increasingly sharp questions.

Students and parents may reasonably ask who gave permission for their work to be used in that way, and whether they can opt out. Institutions may want to know whether they have any claim over models that have been shaped by their students’ work and their staff’s judgements. Regulators may push for clearer separation between service use and training use, and for more explicit licensing of any training that does happen.

We do not think a single vendor, including us, should decide those questions on everyone’s behalf. They are sector questions, and in some cases national ones.

A shared problem, not a solved one

It’s worth ending where we began, with humility. We haven’t solved this problem, and neither has anyone else. The law is still moving. School and university policies differ. There are awkward edge cases everywhere, from vocational assessments that touch employer IP through to international programmes that cross legal regimes.

For now, our aim is to hold to a few basic commitments that already feel safe: treat student work as creative property, not free input; treat professional judgments and assessment graphs as institutional assets, not vendor collateral; keep assessment and training uses clearly distinguished; and make any move into training use a deliberate, explicit choice with clear benefit back to the sector.

In the next post, we’ll take on the hard question directly: when, if ever, is it legitimate to train models on student work – and what conditions would need to be in place to make that acceptable to learners, institutions and regulators?