Our Approach to Privacy

Why you can trust us


At RM Compare, privacy isn't a feature we've bolted on; it's built into how we operate. We're committed to transparency about what data we process, how we use it, and what we don't do with it. This page explains our approach in detail.

Our Privacy Principles

We believe assessment data is sensitive and belongs to you. That belief shapes everything that follows, starting with what we do and don't ingest.

Data Minimisation: What We Do and Don't Ingest

What Data Does RM Compare NOT Ingest?

RM Compare deliberately does not import:

- Student names
- Student IDs or candidate numbers
- Class or school rosters
- Any link between a student's identity and the work being judged

What Data Does RM Compare Ingest?

RM Compare processes only what's necessary:

For all assessment scenarios:

- Judge email addresses (to send invitations and manage sessions)
- Anonymised work samples, labelled however you choose
- The judgements made during comparison

For contributing judge scenarios only (peer assessment, "learning by evaluating"):

- Student email addresses (to send invitations and return personalised feedback)
- Student submissions, linked to the submitting student's email

Important: Even in contributing judge scenarios, when students judge peer work, that work remains anonymised: they never know whose work they're assessing; the only submission they can identify is their own.
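To make the distinction concrete, here is a minimal, hypothetical sketch of the records described above. It is illustrative only, not RM Compare's actual schema; note what is absent: no student names, no IDs, and no identity-to-work link.

```python
from dataclasses import dataclass

# Hypothetical sketch of the records described above -- not RM Compare's
# actual schema. Note what is absent: no student names, no student IDs,
# and no link between a person and the work being judged.

@dataclass
class AnonymisedWork:
    label: str        # e.g. "Q2-A": chosen by your organisation, meaningless to RM Compare
    content_ref: str  # reference to the uploaded file

@dataclass
class Judgement:
    judge_email: str  # the only personal data held in standard scenarios
    preferred: str    # label of the work judged better
    other: str        # label of the work it was compared against

@dataclass
class ContributingJudgeSubmission:
    # Contributing-judge scenarios only: the submission is linked to an email
    student_email: str
    work: AnonymisedWork
```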

How This Works in Practice

Scenario 1: Teacher-Led Moderation

A history teacher uploads 25 student essays for moderation using her own internal labelling scheme ("Q2-A", "Q2-B", etc.). She invites three colleagues via email to judge. They compare the essays and RM Compare returns a rank order and consistency feedback. The teacher then maps results back to named students using her own key; that step happens entirely outside RM Compare.

Data we hold: Judge email addresses, anonymised essays, judgements
Data we don't hold: Student names, IDs, or who wrote what
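Because the identity-to-work key never leaves the school, re-identification is a purely local step. Here is a minimal sketch of what it might look like, assuming a hypothetical CSV key file and results export (the file and column names are illustrative, not an RM Compare format):

```python
import csv

# Hypothetical local re-identification step, run by the teacher outside
# RM Compare. Assumes two illustrative CSV files:
#   key.csv     -- columns: label,student_name  (never uploaded anywhere)
#   results.csv -- columns: label,rank          (exported from the session)

def map_results_to_students(key_path: str, results_path: str) -> list[dict]:
    with open(key_path, newline="") as f:
        key = {row["label"]: row["student_name"] for row in csv.DictReader(f)}
    with open(results_path, newline="") as f:
        return [
            {"student": key[row["label"]], "rank": int(row["rank"])}
            for row in csv.DictReader(f)
        ]

# e.g. map_results_to_students("key.csv", "results.csv")
# -> [{"student": "A. Patel", "rank": 1}, ...]
```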

Scenario 2: Multi-School Moderation

Three schools in a MAT upload anonymised GCSE work samples ("School A - Sample 1," etc.). Judges from all three schools compare across the samples. RM Compare identifies alignment issues. Each school maps results back to their students.

Data we hold: Judge email addresses, anonymised work, judgements
Data we don't hold: Student rosters or identity-to-work links

Scenario 3: AI Validation

An awarding body tests an AI marking system against human assessors. They upload anonymised exam papers and AI-generated scores. Expert assessors (invited via email) compare AI scores to human benchmarks. RM Compare shows where they diverge, helping validate the system.

Data we hold: Assessor email addresses, anonymised papers, AI scores, judgements
Data we don't hold: Student names, IDs, or links to individual candidates
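One plausible way to quantify "where they diverge" is to compare the AI's scores against the human-judged rank order, for example with a Spearman-style rank correlation. The sketch below is an assumption for illustration; it is not how RM Compare's consistency feedback is actually computed:

```python
# Hypothetical divergence check between AI scores and a human-judged rank
# order. Both the data and the Spearman-style statistic are illustrative.

def ranks_from_scores(scores):
    """Rank papers from 1 (highest score) to n (lowest score)."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    ranks = [0] * len(scores)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

def spearman(rank_a, rank_b):
    """Spearman rank correlation (no tie correction, for brevity)."""
    n = len(rank_a)
    d2 = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1 - 6 * d2 / (n * (n**2 - 1))

ai_scores   = [72, 65, 80, 58, 90]  # AI marks per anonymised paper
human_ranks = [2, 4, 3, 5, 1]       # 1 = judged best by human assessors

ai_ranks = ranks_from_scores(ai_scores)  # -> [3, 4, 2, 5, 1]
print(spearman(ai_ranks, human_ranks))   # -> 0.9; values well below 1 flag divergence
```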

Scenario 4: Peer Assessment / Contributing Judge Workflows

A lecturer runs a "learning by evaluating" session. Thirty students submit essays and judge five peer essays each. Students receive email invitations, upload their own essay (linked to their email), judge five anonymised peer essays, and receive personalised feedback on their submission.

Data we hold: Student email addresses, student submissions linked to email, anonymised peer judgements
Data we don't hold: Student names, IDs, or which student wrote which peer essay that others judged
Key difference: In this scenario, we know which student submitted which essay, but peer work remains anonymous during judging.
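A minimal sketch of this flow, with hypothetical names and structure, shows how a submission can be linked to its author's email while the essays presented for judging carry only opaque labels:

```python
import random

# Hypothetical sketch of the contributing-judge flow. Each student's own
# essay is linked to their email (so they can receive personalised
# feedback), but the essays shown to them for judging carry only opaque
# labels -- the identity-to-work link is never exposed to peers.

submissions = {
    "alice@example.ac.uk": "essay-001",
    "ben@example.ac.uk":   "essay-002",
    "cara@example.ac.uk":  "essay-003",
    # ... 30 students in total
}

def peer_essays_for(student_email: str, k: int = 5) -> list[str]:
    """Pick k anonymised peer essays, excluding the student's own."""
    pool = [essay for email, essay in submissions.items() if email != student_email]
    return random.sample(pool, min(k, len(pool)))

# A student sees only opaque essay labels, never peer emails or names.
print(peer_essays_for("alice@example.ac.uk"))
```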

Why This Matters

Compliance is Straightforward

Because RM Compare holds so little personal data (typically only email addresses), your GDPR footprint for any session stays small and easy to document.

Operational Simplicity

There are no rosters to sync and no student accounts to provision: you label work however suits you and keep the key yourself.

Data Ownership and Portability

The identity-to-work key never leaves your organisation, so mapping, exporting, and reusing your assessment results remains entirely in your hands.

How We Protect Your Data

Security and Compliance

In-Product Privacy Controls

RM Compare gives you granular control over how session data is shared.

Your Responsibilities

As data controller, you remain responsible for the personal data you handle outside RM Compare, including the identity-to-work key that links results back to your students.

We'll support you with templates, guidance, and our privacy statement to help you meet these obligations.

Our Commitment

RM Compare commits to maintaining the data minimisation, anonymisation, and transparency practices described on this page.

Continuous Improvement

We review our privacy practices regularly to ensure they reflect evolving regulations, best practices, and customer needs. If you have feedback or questions about our approach, please contact us.
