7.2.3: Analysing Jane’s Peer Assessment Session
- Understand how RM Compare supports peer assessment, and what makes this session type unique.
- Learn to extract, visualise, and interpret peer-generated data, including comparing student rankings and detecting consensus or disagreement.
- Develop strategies for using LLM-powered insights to enhance formative feedback, student agency, and reflection.
 
Introduction & Context
Scenario: Jane runs a peer assessment session with students acting as judges. Watch the video below to learn more.
Data Familiarisation and Inspection
You can find the data set here to download or copy. The data comes from a Peer Learning session that uses contributing judges, which means there are a few additional data fields, specifically around Judgement Feedback.
We have also added some metadata to the session data.
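Before prompting an LLM, it is worth inspecting the export directly. Below is a minimal pandas sketch; the file name `peer_session.csv` and the column names are assumptions, so adapt them to the file you actually download:

```python
import pandas as pd

# Load the session export; the file name is an assumption, so point
# this at the file you actually downloaded.
df = pd.read_csv("peer_session.csv")

# Inspect the structure: column names, types, and a few sample rows.
print(df.columns.tolist())
print(df.dtypes)
print(df.head())

# Peer Learning sessions with contributing judges carry extra
# Judgement Feedback fields; surface any column that looks related.
feedback_cols = [c for c in df.columns if "feedback" in c.lower()]
print("Feedback-related columns:", feedback_cols)
```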
LLM-Supported Peer Insight Analysis
Prompt: Visualise score distributions for each item/author
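To preview what this prompt asks the LLM to produce, here is a hedged matplotlib sketch of per-item score distributions; the `item` and `score` column names are assumptions:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Assumed file and column names ("item", "score"); adjust to the export.
df = pd.read_csv("peer_session.csv")

items = sorted(df["item"].unique())
scores_by_item = [df.loc[df["item"] == i, "score"] for i in items]

# One box per item/author: a wide box or long whiskers suggests judge
# disagreement, a narrow box suggests consensus.
fig, ax = plt.subplots(figsize=(10, 5))
ax.boxplot(scores_by_item)
ax.set_xticks(range(1, len(items) + 1))
ax.set_xticklabels(items, rotation=45, ha="right")
ax.set_xlabel("Item / author")
ax.set_ylabel("Score")
ax.set_title("Score distributions per item")
plt.tight_layout()
plt.show()
```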
Prompt: Create a heat map based on the sentiment of judge feedback
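A sketch of what this prompt could yield, using NLTK's VADER analyser for quick sentiment scoring (any sentiment model would do); the `judge`, `item`, and `feedback` column names are assumptions:

```python
import matplotlib.pyplot as plt
import nltk
import pandas as pd
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-off lexicon download
sia = SentimentIntensityAnalyzer()

# Assumed file and column names ("judge", "item", "feedback").
df = pd.read_csv("peer_session.csv")

# Score each feedback comment from -1 (negative) to +1 (positive).
df["sentiment"] = df["feedback"].fillna("").map(
    lambda text: sia.polarity_scores(str(text))["compound"]
)

# Pivot into a judge-by-item matrix of mean sentiment.
matrix = df.pivot_table(index="judge", columns="item",
                        values="sentiment", aggfunc="mean")

# Red cells flag unusually negative feedback, green unusually positive.
fig, ax = plt.subplots(figsize=(10, 6))
im = ax.imshow(matrix.values, cmap="RdYlGn", vmin=-1, vmax=1)
ax.set_xticks(range(len(matrix.columns)))
ax.set_xticklabels(matrix.columns, rotation=45, ha="right")
ax.set_yticks(range(len(matrix.index)))
ax.set_yticklabels(matrix.index)
ax.set_xlabel("Item")
ax.set_ylabel("Judge")
fig.colorbar(im, label="Mean feedback sentiment (VADER compound)")
plt.tight_layout()
plt.show()
```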
Individual Student Report Example
Prompt: Produce an individual report for a selected student. This should provide detailed feedback about their Item, how it performed, what feedback it received, and what actions should be taken to improve.
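Before running this prompt, you could gather one student's raw results and peer feedback to paste into the prompt context. A minimal sketch, assuming `author`, `score`, and `feedback` columns (the helper `student_report_inputs` is hypothetical):

```python
import pandas as pd

# Assumed file and column names ("author", "score", "feedback").
df = pd.read_csv("peer_session.csv")

def student_report_inputs(df: pd.DataFrame, author: str) -> dict:
    """Collect one student's results and peer feedback for the prompt."""
    rows = df[df["author"] == author]
    mean_scores = df.groupby("author")["score"].mean()
    return {
        "author": author,
        "mean_score": rows["score"].mean(),
        "rank": int(mean_scores.rank(ascending=False)[author]),
        "feedback": rows["feedback"].dropna().tolist(),
    }

# Example: gather the raw inputs behind the report shown below.
print(student_report_inputs(df, "Ashley White"))
```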
Report for Ashley White
From a Peer Learning (Contributing Judge) session