Get involved!
We continue to work with our partners around the world to validate our initial findings and run further tests as we move toward our first trial launch. Join us on this journey!
CJ On Demand is an entirely new approach to Comparative Judgement, built on years of innovation and research. We now know enough to piece the experience together as a pilot, and what we learn from it will let us move carefully toward a production-ready version. You are invited to join us.
Standard RM Compare sessions employ an adaptivity algorithm to intelligently surface pairs of items for judgement; this is called Adaptive Comparative Judgement (ACJ). A Simplified Pairs session removes the adaptivity: the session itself takes control of both the pairing process and judge allocation. The judge experience remains exactly the same as in a standard RM Compare session.
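RM Compare doesn't publish the internals of Simplified Pairs, but the core idea of fixing pairs up front and dealing them out to judges can be sketched in a few lines. The sketch below is illustrative only: the function name, round-robin allocation, and parameters are our assumptions, not the product's actual pairing logic.

```python
import itertools
import random

def simplified_pairs(items, judges, judgements_per_judge):
    """Minimal sketch of a non-adaptive 'Simplified Pairs' scheme.

    Unlike ACJ, pairs are fixed up front rather than chosen
    adaptively from interim results. All names here are
    illustrative, not RM Compare's actual implementation.
    """
    # Enumerate every distinct pair once, then shuffle so no
    # judge sees items in a predictable order.
    pairs = list(itertools.combinations(items, 2))
    random.shuffle(pairs)

    # Deal pairs out round-robin until each judge has their
    # allocation (pairs repeat across judges if needed).
    allocation = {judge: [] for judge in judges}
    pair_cycle = itertools.cycle(pairs)
    for judge in judges:
        for _ in range(judgements_per_judge):
            allocation[judge].append(next(pair_cycle))
    return allocation

# Example: 6 scripts, 3 judges, 5 judgements each.
scripts = [f"script_{i}" for i in range(1, 7)]
plan = simplified_pairs(scripts, ["judge_A", "judge_B", "judge_C"], 5)
for judge, allocated in plan.items():
    print(judge, allocated)
```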
Meet the RM Compare team! This is an opportunity for us to discuss your goals for the pilot. We'll ask you to share your initial ideas, including the items you'd like to use for the session(s), and give you some guidance on next steps. We'll agree on general timelines and the level of support and involvement you'd like from the team.
We’ll also discuss any licensing and payment requirements necessary to continue to the next stage.
Book your initial consultation
At RM Compare we’re committed to building products driven by teacher feedback and insight. As part of our efforts to understand how teachers and teams standardise students’ work in schools (their goals, problems and needs), we've designed a survey using the On-Demand for mobile functionality. With this survey we aim to identify priorities and key pain points, and to check whether these vary between groups (for example, primary and secondary schools). This will help you and the RM Compare team understand where we can add the most value to our users.
The RM Compare team will run some tests to validate the quality of your rank. We’ll use this session to review the test results and decide whether we are ready to proceed to the next stage, or give you recommendations to improve the rank. These might include adding more judges to the pool, making more judgements, or re-running the session with a new (or updated) set of items.
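The internals of these quality tests aren't documented here, but in the comparative judgement literature the quality of a rank is commonly summarised by the Scale Separation Reliability (SSR). The sketch below implements that published formula, assuming you already have fitted item parameters (e.g. Bradley–Terry logits) and their standard errors; the function name and example numbers are purely illustrative, not RM Compare's actual tests.

```python
import math

def scale_separation_reliability(estimates, standard_errors):
    """Sketch of the Scale Separation Reliability (SSR) check
    widely used in comparative judgement. This illustrates the
    published formula, not RM Compare's internal tests.

    estimates: fitted item parameters from the judgement data
    standard_errors: their standard errors from the same fit
    """
    n = len(estimates)
    mean = sum(estimates) / n
    observed_var = sum((e - mean) ** 2 for e in estimates) / (n - 1)
    # Mean squared error of the estimates.
    mse = sum(se ** 2 for se in standard_errors) / n
    # "True" variance = observed variance minus error variance.
    true_var = max(observed_var - mse, 0.0)
    g_squared = true_var / mse        # squared separation index
    return g_squared / (1 + g_squared)  # SSR in [0, 1)

# Illustrative values only: an SSR above roughly 0.8 is often
# treated as acceptable; below that, more judgements (or more
# judges) may be advised.
params = [-1.8, -0.9, -0.2, 0.4, 1.1, 1.7]
ses = [0.35, 0.30, 0.28, 0.29, 0.31, 0.36]
print(round(scale_separation_reliability(params, ses), 2))
```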
When ready, we’ll help you set up the simulation.
Share your results from the session and your overall pilot experience with the team: what went well, what we could have done differently, and so on. We'll then agree on next steps (pilot 2?).
Jump into RM Compare faster by learning some basics. You can sign up for a FREE trial from our product portal and follow our quick guides to learn how RM Compare works.