Privacy by Design: Why RM Compare Doesn't Import Student Rosters

Over the past 18 months, we've watched the EdTech market split into two camps.

One camp asks: "How can we extract maximum value from assessment data? What can we train our AI on? Where's the cross-customer benchmarking opportunity?" In many cases, these vendors are optimising for their business model, and your student data is the product.

The other camp - where RM Compare sits - asks a different question: "What data do we actually need to deliver world-class assessment?" And the honest answer is: far less than most vendors collect.

We don't import your student rosters. We don't ingest names, IDs, or demographic data linked to assessment work. We use email addresses only where operationally necessary. This isn't a constraint we're managing around.

It's a deliberate design choice. Here's why.

The Roster Problem

For many assessment vendors, importing your student roster is step one. It seems reasonable: "We'll pull in your MIS, match submissions to students, make it easy for you." But what you're actually doing is exposing your student data to a third party's infrastructure, backups, disaster-recovery copies, and whatever future features that vendor dreams up.

Once your roster is in, the scope creep is inevitable:

  • "Let's add student progress dashboards" (now students are identifiable in our system)
  • "Let's benchmark across customers" (now your students' data is being compared to other schools')
  • "Let's train our AI on this work" (now your students are part of someone else's training set)

None of these happen overnight. They happen as new features get added, new business opportunities emerge, and the vendor's incentives slowly shift away from your privacy and toward their growth.

The ICO has been clear about this. Since 2023, EdTech vendors have faced investigations for using student data to train large language models without explicit consent. Schools have discovered that rosters were exported and sold. The regulator's message is blunt: if you're storing student-identifiable information, you'd better have a damn good reason, and you'd better own that risk.

Many vendors don't have a good reason. They just have a convenient architecture.

What We Do Instead

RM Compare is built on a different model. We don't need your roster to deliver everything you ask of us.

Here's what actually happens:

  • Teachers upload anonymised work. No names. No IDs. Maybe a code only they understand. The essay or portfolio is rich and assessable; the identity is absent.
  • Judges are invited by email to compare that work. They judge pairwise ("Which is stronger?") and RM Compare builds a rank order from their judgements. Judges never know whose work they're assessing (except in peer-learning scenarios, where students explicitly judge anonymised peer work alongside their own).
  • Results come back anonymised. You get a reliable rank order, consistency metrics, and feedback. You map it back to your students in your own system, where you control that link.
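
For readers who want the mechanics made concrete, here's a minimal sketch in Python of one standard way to turn pairwise judgements into a rank order - a basic Bradley-Terry fit. It's illustrative only: the item codes are invented, and this is not RM Compare's actual scaling algorithm. Notice what's absent - the only identifiers in play are anonymous codes.

    from collections import defaultdict

    def bradley_terry(judgements, iterations=100):
        """Estimate a strength score per item from pairwise judgements.

        judgements: list of (winner, loser) tuples of anonymous item codes.
        Returns {code: strength}; higher means stronger. Uses the classic
        Bradley-Terry MM update - illustrative, not RM Compare's algorithm.
        """
        wins = defaultdict(int)    # judgements won, per item
        pairs = defaultdict(int)   # comparisons made, per unordered pair
        for winner, loser in judgements:
            wins[winner] += 1
            pairs[frozenset((winner, loser))] += 1
        items = {code for pair in pairs for code in pair}

        strength = {code: 1.0 for code in items}
        for _ in range(iterations):
            updated = {}
            for code in items:
                denom = sum(
                    count / (strength[code] + strength[other])
                    for pair, count in pairs.items() if code in pair
                    for other in pair if other != code
                )
                updated[code] = wins[code] / denom if denom else strength[code]
            mean = sum(updated.values()) / len(updated)
            strength = {code: s / mean for code, s in updated.items()}  # keep the scale stable
        return strength

    # Hypothetical anonymous codes - only the school can map them to students.
    judgements = [("A7", "K2"), ("A7", "M9"), ("K2", "M9"), ("M9", "A7"), ("K2", "A7")]
    scores = bradley_terry(judgements)
    print(sorted(scores, key=scores.get, reverse=True))  # strongest first, codes only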

This works for moderation, marking standardisation, AI validation, formative feedback, creative assessment, and peer learning. The only scenarios where we use student email addresses are those where students are actively participating as judges - and even then, the work they judge stays anonymous.

Is this slightly more manual than "hand us your roster and we'll figure it out"? Yes. Does it mean you can't build named student dashboards inside RM Compare? Correct. Does it matter? Not remotely, because you can build those in your LMS or analytics platform, which is where that logic belongs anyway.

What it does mean: if RM Compare is ever breached, there's no student roster for an attacker to find.

Why This Matters Right Now

In 2025, privacy-first design has become a market signal.

Schools are asking harder questions about EdTech. Parents and governors are asking harder questions. Regulators are asking harder questions. And procurement teams are realising that vendors who say "we design for maximum data collection" are the ones who also want to monetise that data later.

We're betting that schools are ready to move past the "just give us your data and trust us" model. We're betting that transparency about what you do and don't collect will become a competitive moat. We're betting that in a market full of data-hungry vendors, the vendor that says "we only take what we need" will be the one schools actually trust.

The evidence is there. Schools are increasingly evaluating vendors on data minimisation, not feature maximisation. Procurement teams are asking "What student data do you hold?" before they ask "What can your platform do?" Teachers are getting savvier about vendor practices. Parents expect their children's data to be handled with care.

This isn't a niche concern. It's becoming table stakes. In our opinion, this is a good thing.

The Integration Angle

One more thing: RM Compare's approach makes integrations better, not worse.

If every vendor demands a full roster import, your data proliferates. Your governance burden grows. The risk of mismatches and linking errors multiplies. And because copies now sit in multiple third-party systems, a breach or sale at any one of those vendors exposes your student data.

By design, RM Compare integrates on results and artefacts, not identifiers. You manage the linking in your own system, where you control it. This keeps your data architecture simpler, your compliance burden lighter, and your risk lower.
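
What does "you manage the linking" look like in practice? Here's a minimal sketch, assuming a hypothetical CSV export of anonymised results plus a code-to-student mapping held only in the school's own systems. The file and column names are invented for illustration; they're not an RM Compare export format.

    import csv

    RESULTS_CSV = "rm_compare_results.csv"  # hypothetical export: code, rank, score
    ROSTER_CSV = "local_roster.csv"         # never leaves the school: code, student_name, class

    def link_results(results_path, roster_path):
        """Join anonymised results back to named students - locally."""
        with open(roster_path, newline="") as f:
            roster = {row["code"]: row for row in csv.DictReader(f)}
        linked = []
        with open(results_path, newline="") as f:
            for row in csv.DictReader(f):
                student = roster.get(row["code"])
                if student:  # unmatched codes simply stay anonymous
                    linked.append({**student, "rank": row["rank"], "score": row["score"]})
        return linked

    # The vendor never sees local_roster.csv; the identity link exists only here.
    for record in link_results(RESULTS_CSV, ROSTER_CSV):
        print(record["student_name"], record["rank"], record["score"])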

We're in active conversations with LMS vendors, MIS providers, and analytics platforms about how to build privacy-first connectors that share insights without sharing rosters. Those conversations are easier when you can say: "RM Compare doesn't ingest student rosters by design."

What We Actually Collect

To be absolutely clear: we do collect some data. Judge email addresses, for invitations and notifications. Student email addresses in peer-learning scenarios, so students can receive feedback on their own work. System activity logs, for security. Anonymised usage data, to improve the platform.

We're not claiming to be privacy-perfect. We're claiming to be purposeful about what data we collect and why.

The difference is that we can explain every byte. We don't have rosters sitting in our systems. We don't train general-purpose AI on your student work. We don't benchmark student data across customers. And we couldn't do any of those things even if we wanted to, because we don't hold the raw material.

This isn't a limitation. It's a choice.

For Procurement and Compliance Teams

If you're evaluating RM Compare and need to understand our privacy model in detail - what data we do and don't ingest, how contributing-judge scenarios work, FAQs on GDPR and FERPA compliance - we've published a comprehensive guide in our help centre:

Our Approach to Privacy covers the operational detail, includes four real-world scenarios, and answers the questions we hear most from schools and awarding bodies.

You'll also want our Data Protection Agreement; processor terms are available on request from our team.

The Bet

We're betting that in 2025 and beyond, privacy-first design matters. We're betting that schools are tired of vendors collecting data "just in case." We're betting that transparency about what you do and what you don't do with student data will become a competitive advantage, not a limitation.

We're betting that the vendors who respect student privacy will be the ones schools actually choose to work with.

If you're looking for an assessment platform that's transparent, thoughtful, and genuinely aligned with your role as a data controller - one that uses email for what email is for and keeps assessment work where it belongs - let's talk.

Questions? Start with Our Approach to Privacy in our help centre, or reach out directly. We're happy to walk through how this works in your specific context.

Privacy isn't a feature. It's a choice. We've made ours.