Research: Harnessing Peer Evaluation in Design Thinking - A Dive into Adaptive Comparative Judgement in HE

In the realm of design education, where creativity and innovation are paramount, assessing student work poses unique challenges. Traditional grading methods often struggle to capture the essence of students' creative output. Enter Adaptive Comparative Judgement (ACJ), a dynamic assessment method that promises a more nuanced approach. A recent study by Bartholomew, Mentzer, and Lee, published in Frontiers in Education, sheds light on the validity of ACJ for peer evaluation in a design thinking course.

The Study: Exploring ACJ in Design Thinking

The study focuses on the application of ACJ within a design thinking course, a domain where problem-solving and innovation are key. Unlike conventional assessment methods that rely on fixed criteria, ACJ involves students evaluating their peers' work by comparing pairs of projects and deciding which one better meets the course objectives. This process not only engages students more deeply with the material but also generates a collective ranking of projects based on the group's judgements.

Findings: Validating ACJ's Effectiveness

The findings from Bartholomew and colleagues' research are promising. They indicate that ACJ is not just a reliable way to assess student work in design thinking courses but also enhances the learning experience. By involving students in the assessment process, ACJ encourages critical thinking and a deeper understanding of the course content. Moreover, it empowers students by giving them a say in the evaluation process, fostering a more reflective and self-regulated learning environment.

Implications for Educators

For educators exploring innovative assessment methods that align with the collaborative and iterative nature of design thinking, ACJ offers a compelling solution. It provides a way to assess student projects that is both rigorous and adaptable, accommodating the diverse range of creative work students undertake. The study's findings suggest that ACJ can not only assess but also enhance the educational journey, making it a valuable tool for educators seeking to foster creativity and innovation in their students.

Conclusion: The Future of Assessment in Design Education

As we continue to push the boundaries of education, especially in creative disciplines like design thinking, ACJ stands out as a method that enriches both assessment and learning. Its ability to engage students in the evaluation process and to provide a nuanced understanding of their work makes ACJ a promising avenue for educators. The study by Bartholomew, Mentzer, and Lee underscores the potential of ACJ to transform assessment in design education, paving the way for a more engaging and reflective approach to learning.

Stay tuned to our blog at RM Compare for more insights into innovative assessment methods and the latest trends in educational technology.

FAQ

1. What is Adaptive Comparative Judgement (ACJ)?

ACJ is a method of evaluating work by comparing two items at a time and deciding which is better. This process repeats until a reliable ranking of all items is achieved. Unlike rubrics, ACJ doesn't use predetermined criteria; instead, judges make holistic judgements based on their understanding of quality.
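For readers curious about the mechanics, the sketch below shows one simple way a set of pairwise judgements can be turned into a ranking. It is an illustrative Bradley-Terry-style estimate only, not the adaptive algorithm used in the study or by any particular ACJ engine, and the item labels and judgement data are hypothetical.

```python
# Illustrative only: a simple Bradley-Terry-style estimate of item "quality"
# from pairwise judgements. Real ACJ systems also choose which pairs to show
# adaptively; this sketch just shows how comparisons can yield a ranking.
from collections import defaultdict

def rank_from_judgements(judgements, n_iters=100):
    """judgements: list of (winner, loser) pairs of item ids."""
    items = {i for pair in judgements for i in pair}
    strength = {i: 1.0 for i in items}          # start all items equal
    wins = defaultdict(int)
    for winner, _ in judgements:
        wins[winner] += 1

    for _ in range(n_iters):                     # iterative strength update
        new_strength = {}
        for i in items:
            denom = 0.0
            for winner, loser in judgements:
                if i in (winner, loser):
                    other = loser if i == winner else winner
                    denom += 1.0 / (strength[i] + strength[other])
            new_strength[i] = wins[i] / denom if denom else strength[i]
        total = sum(new_strength.values())       # rescale to keep values stable
        strength = {i: s * len(items) / total for i, s in new_strength.items()}

    ranking = sorted(items, key=lambda i: strength[i], reverse=True)
    return ranking, strength

# Hypothetical example: four POV statements judged in a handful of pairings
judgements = [("A", "B"), ("A", "C"), ("B", "C"), ("D", "A"), ("D", "B"), ("C", "B")]
ranking, params = rank_from_judgements(judgements)
print(ranking)   # items ordered from strongest to weakest
```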

2. How was ACJ used in this design thinking course?

Students in the course formed groups and wrote POV (Point of View) statements for a design project. They then used ACJ software to evaluate each other's POV statements, choosing the better statement in each pairing presented. This process resulted in a ranked order of all POV statements, with each statement receiving a parameter value reflecting its relative quality.

3. What is construct validity, and did peer-reviewed ACJ demonstrate it?

Construct validity refers to whether an assessment accurately measures what it intends to measure. In this study, researchers analyzed the content of high- and low-ranking POV statements to see whether the ACJ parameter values aligned with established criteria for good POV statements. The results showed that higher-ranking statements generally exhibited better quality in terms of structure, user definition, needs identification, and insights, thus demonstrating construct validity.

4. What is criterion validity, and did peer-reviewed ACJ demonstrate it?

Criterion validity examines how well an assessment correlates with other established measures of the same concept. This study explored two aspects of criterion validity:

  • Concurrent Validity: Examines how well the results correlate with another assessment of the same work administered at the same time.
  • Predictive Validity: Examines how well the results correlate with an assessment administered at a later point.

5. Did the study find concurrent validity for the peer-reviewed ACJ?

The study did not find concurrent validity. The parameter values from the peer-reviewed ACJ did not significantly correlate with the instructors' rubric-based grades for the same POV statements. This could be due to inherent differences between norm-referenced (ACJ) and criterion-referenced (rubric) assessments.

6. Did the study find predictive validity for the peer-reviewed ACJ?

Yes, the study found moderate predictive validity. The parameter values from the peer-reviewed ACJ of the POV statements significantly predicted the students' final project grades. This suggests that students who demonstrated a better understanding of quality POV statements, as measured by ACJ, also performed better on their final projects.
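As a concrete illustration of what "predictive validity" means here, the short sketch below computes a correlation between ACJ parameter values and later final-project grades. The numbers are made up for demonstration purposes and are not the study's data; the actual analysis and figures are reported in the paper.

```python
# Illustrative only: checking whether ACJ parameter values for POV statements
# correlate with later final-project grades (predictive validity).
# All numbers below are hypothetical, not taken from the study.
import numpy as np
from scipy import stats

acj_parameter_values = np.array([1.8, 0.9, 0.2, -0.4, -1.1, -1.6])  # hypothetical ACJ scores
final_project_grades = np.array([92, 88, 81, 84, 76, 70])            # hypothetical grades

r, p_value = stats.pearsonr(acj_parameter_values, final_project_grades)
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")
# A significant positive correlation would support predictive validity.
```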

7. What are the potential benefits of using ACJ in design thinking education?

ACJ offers several advantages in design thinking:

  • Provides a reliable and efficient way to assess open-ended tasks like POV statements.
  • Allows for early identification of students or groups struggling with design thinking concepts.
  • Offers a formative learning experience for students as they engage in peer evaluation and see examples of varying quality.

8. What are the limitations of this study?

  • The ACJ ranking is relative to the overall quality of submitted work; even a well-written POV statement might receive a low ranking if all submissions are excellent.
  • The study was conducted in a specific design thinking course; further research is needed to generalize findings to other contexts.

Reference

Bartholomew, S. R., Mentzer, N., & Lee, W. (2021). Examining the Validity of Adaptive Comparative Judgment for Peer Evaluation in a Design Thinking Course. Frontiers in Education, 6.