Is RM Compare an Assessment Software Application or Infrastructure?

When people first encounter RM Compare, they usually see it as what our homepage says it is: the world‑leading Adaptive Comparative Judgement system. That description is accurate, but as customers start integrating RM Compare more deeply, a different question often emerges: is this just another application we use, or is it actually part of our assessment infrastructure? In this post I want to explore that distinction and show why thinking of RM Compare as “assessment infrastructure delivered as SaaS” can help some customers get more value from it.

Where RM Compare sits in your stack

In cloud terms, RM Compare is clearly Software as a Service. You access it over the web, we host and manage the platform, and your teams concentrate on configuring assessments and using the results, rather than worrying about servers or databases. However, once it is deployed inside an organisation, RM Compare rarely exists in isolation. It connects to candidate or learner‑facing portals, content and item banks, management information systems, and reporting tools. Over time, what began as “the place we run ACJ” often becomes a shared service that multiple teams and programmes rely on. At that point it is functioning less like a standalone tool and more like a central assessment service in your architecture.

Why it is still firmly SaaS

You might have seen terms like IaaS, PaaS and SaaS used to categorise cloud services. In simple terms, infrastructure services provide raw compute and storage, platforms provide a managed environment for your own code, and software services deliver complete applications you consume over the internet. RM Compare belongs firmly in that last category. You do not provision virtual machines, choose an operating system, or deploy the core application yourself. Instead, you configure assessments, manage judges, integrate via defined interfaces, and interpret the outcomes. That is the essence of SaaS: we own and operate the technical stack; you consume a domain‑specific capability.

How RM Compare behaves like infrastructure

For more advanced customers, RM Compare starts to behave much more like infrastructure. A single instance may underpin multiple subjects, qualifications or programmes, acting as a common engine for standard setting and portfolio assessment. In some cases, assessors and candidates never see RM Compare as a separate destination at all; it operates behind the scenes, invoked by your own systems and surfaced under your own branding. Over time, it becomes a long‑lived backbone component: new initiatives plug into it, rather than standing up fresh, isolated solutions. Thinking of RM Compare in this way – as an assessment backbone rather than a one‑off tool – changes the kinds of decisions you make about integration, reuse and scale.
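To make the backbone idea concrete, here is a minimal sketch of the pattern in Python. It is purely illustrative: the class name, endpoint path, field names and session vocabulary are assumptions invented for this example, not RM Compare's actual API. The point is the shape of the integration – several programmes sharing one wrapped service rather than each standing up its own tool.

```python
# Hypothetical sketch of the "assessment backbone" pattern. The endpoint
# path and payload fields below are illustrative assumptions, NOT RM
# Compare's real API.
from dataclasses import dataclass, field


@dataclass
class AssessmentBackbone:
    """Thin facade over one shared comparative judgement instance."""
    base_url: str
    sessions: dict = field(default_factory=dict)

    def create_session(self, programme: str, items: list[str]) -> dict:
        # A real integration would POST this to the service; here we only
        # build the request we would send, so the sketch runs offline.
        request = {
            "url": f"{self.base_url}/sessions",  # assumed endpoint
            "body": {"programme": programme, "items": items},
        }
        self.sessions[programme] = request
        return request


# Several programmes plug into the same backbone instead of standing up
# fresh, isolated solutions – the reuse described above.
backbone = AssessmentBackbone(base_url="https://compare.example.org/api")
backbone.create_session("GCSE English", ["essay-001", "essay-002"])
backbone.create_session("Design Portfolio", ["folio-101", "folio-102"])
print(len(backbone.sessions))  # two programmes, one shared service
```

In this shape, each new initiative adds a `create_session` call against the existing wrapper rather than a new deployment – which is exactly the integration and reuse decision the backbone framing encourages.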

What we own and what you own

Seeing RM Compare as assessment infrastructure delivered as SaaS also clarifies who is responsible for what. We take care of the comparative judgement engine, the workflows and interfaces needed to run sessions, and the underlying hosting, performance and security. You stay in control of how candidates and teachers access assessment, how your qualifications and tasks are designed, and how RM Compare feeds into your data flows and downstream decisions. You do not need to build and maintain your own ACJ engine on top of raw cloud infrastructure. Instead, you plug into a service that already exists and is proven at scale, and you focus your energy on pedagogy, candidate experience and the high‑stakes uses of the resulting data.

Why this framing matters

Most day‑to‑day users do not need to think about service models or architecture diagrams. For them, RM Compare is simply the environment in which they compare work and see results. But if you are responsible for your organisation’s architecture or long‑term assessment strategy, it helps to have a clear mental model. RM Compare is delivered as a SaaS product, but inside your ecosystem it can function as shared assessment infrastructure. Treating it that way encourages you to standardise on a single ACJ backbone, reuse integrations, and extend into new use cases more easily. That is how many of our most successful customers have been able to grow from initial pilots to large‑scale, multi‑programme use of comparative judgement.