- Opinion
Response to the DfE Policy Paper - Generative artificial intelligence (AI) in education (22 Jan 2025)
The UK Department for Education's guidance on generative AI in education, released today, emphasises the importance of human expertise and judgement in the educational process. Timed to coincide with the first day of BETT - a show dominated by AI-driven products and propositions - this is a welcome publication.
This stance aligns well with the principles and capabilities of RM Compare, making it an attractive solution for educators and policymakers in light of this guidance.
Alignment with DfE Guidance
RM Compare's Adaptive Comparative Judgement (ACJ) system complements the DfE's position on generative AI in several key ways:
- Valuing Human Expertise: The DfE guidance stresses that AI cannot replace human judgement and subject knowledge. RM Compare's system is designed to leverage human expertise in assessment, rather than attempting to automate it entirely.
- Holistic Assessment: The guidance emphasizes the importance of acquiring knowledge, expertise, and intellectual capability. RM Compare's "Measure What You Treasure" approach allows for a more comprehensive evaluation of student work, aligning with this holistic view of education.
- Reducing Workload: The DfE recognises the potential of AI to reduce teacher workload. RM Compare's efficient comparative judgement method can streamline assessment processes, freeing up teacher time for more impactful activities.
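For readers unfamiliar with how comparative judgement produces a result, the core idea is that judges make many simple pairwise decisions ("which of these two pieces of work is better?"), and a statistical model turns those decisions into a rank order. The sketch below is purely illustrative - it is not RM Compare's implementation - and uses a simple Bradley-Terry style estimate fitted by gradient ascent; the function name and data are made up for the example.

```python
import math

def rank_from_comparisons(items, comparisons, iters=500, lr=0.1):
    """Illustrative Bradley-Terry style ranking (not RM Compare's algorithm).

    items: list of item ids.
    comparisons: list of (winner, loser) pairs from pairwise judgements.
    Returns items sorted from strongest to weakest estimated quality.
    """
    score = {item: 0.0 for item in items}
    for _ in range(iters):
        grad = {item: 0.0 for item in items}
        for winner, loser in comparisons:
            # Modelled probability that the winner beats the loser,
            # given the current score estimates.
            p = 1.0 / (1.0 + math.exp(score[loser] - score[winner]))
            # Nudge scores toward explaining the observed outcome.
            grad[winner] += 1.0 - p
            grad[loser] -= 1.0 - p
        for item in items:
            score[item] += lr * grad[item]
    return sorted(items, key=lambda i: score[i], reverse=True)

# Hypothetical example: three pieces of student work, judged pairwise.
judgements = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "B")]
print(rank_from_comparisons(["A", "B", "C"], judgements))
```

The "adaptive" part of ACJ lies in choosing which pairs to show judges next so that fewer comparisons are needed for a reliable rank order - a scheduling concern this sketch deliberately omits.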
Benefits for Educators and Policymakers
Educators and policymakers can view RM Compare positively for several reasons:
- Innovative Assessment: RM Compare offers a novel approach to assessment that goes beyond traditional methods, potentially addressing some of the limitations of AI-generated content in educational evaluation.
- Curriculum Alignment: The system allows for assessment that can be tailored to specific curriculum goals, addressing the DfE's concern about AI tools not being trained on specific curricula.
- Encouraging Critical Thinking: By involving human judgement in the assessment process, RM Compare supports the development of critical thinking skills, which the DfE guidance identifies as crucial in the age of AI.
- Adaptability to Future Skills: As the guidance notes the importance of preparing students for changing workplaces, RM Compare's flexible assessment approach can be adapted to evaluate emerging skills and competencies.
- Mitigating AI Limitations: The DfE warns about potential inaccuracies and biases in AI-generated content. RM Compare's human-centered approach helps mitigate these risks in assessment.
Conclusion
RM Compare's approach to assessment aligns well with the DfE's guidance on generative AI in education, while offering a distinct advantage. By emphasising human expertise and holistic evaluation through its adaptive comparative judgement system, RM Compare provides a compelling solution for educators and policymakers looking to enhance assessment practices. It addresses the challenges and opportunities presented by AI in education without relying on AI-driven automation of the judgements themselves, thus maintaining the critical role of human judgement in the assessment process.