Mobley vs Workday – an update on the case that is challenging AI assessment

In early 2026, Mobley vs Workday has become one of the most closely watched lawsuits about AI and high‑stakes decision‑making. The case is still very much alive: a federal court has authorised notices inviting thousands of people to join a growing collective challenge to Workday’s AI‑driven hiring tools, with an opt‑in deadline of 7 March 2026. There has been no final ruling on liability, but the lawsuit has already passed several important legal tests, and each step has raised the stakes not just for Workday, but for any organisation using AI to make decisions about people.

For assessment professionals, this is more than an interesting employment law story. It is a live example of the kinds of questions courts are starting to ask about AI‑mediated decisions, questions that will quickly reach exams, coursework and professional licensing.

Who is Derek Mobley and what is this case about?

The plaintiff is Derek Mobley, an IT professional who turned to digital hiring platforms after being laid off from his job. Over nine months he submitted well over 100 applications through Workday‑powered portals and, despite being qualified, received no interviews or offers. When he realised that even the rejection emails were being generated by bots, he began to suspect that automated screening, not human judgement, was keeping him out.

In his lawsuit, Derek Mobley argues that Workday’s algorithmic screening, testing and recommendation tools systematically disadvantaged him and other applicants in protected groups, including older candidates and people with disabilities. The core allegation is that the way these tools are designed and used has a discriminatory impact on certain groups, even if nobody at Workday or its customer organisations set out to discriminate intentionally.

What makes Derek Mobley’s case different from many individual hiring disputes is its focus on the underlying platform. He is not just challenging one employer’s decision; he is challenging the role that Workday’s AI‑enabled tools play across many employers’ recruitment processes. The claim is that when dozens or hundreds of organisations rely on the same automated system, any bias in that system can be scaled across an entire segment of the labour market.

At this stage, those allegations have not been proven or rejected at trial. The reason Mobley vs Workday matters is that a federal court has repeatedly decided that Derek Mobley’s claims are serious and plausible enough to move forward, rather than being dismissed early.

The key tests the case has already passed

Although there is no final judgment, the lawsuit has cleared several important hurdles that explain why it is gaining momentum:

  • Surviving early attempts to shut it down: Workday initially asked the court to dismiss the case on various grounds, including arguments about the kinds of entities covered by discrimination law. The court did narrow some of the original theories, but it did not end the case; instead, it allowed Derek Mobley to amend his complaint and continue.
  • Recognising that an AI vendor can be treated as an agent: In a pivotal July 2024 order, the court agreed that Workday is not a traditional “employment agency” under older statutory language. However, it allowed discrimination claims to proceed on a different basis: that Workday can plausibly be treated as an agent of the employers who rely on its tools to screen and rank candidates. That matters because it opens the door to AI service providers sharing legal responsibility when their systems are used to make discriminatory decisions.
  • Granting collective status for age‑discrimination claims: In May 2025, Judge Rita Lin granted preliminary certification of a nationwide collective covering job applicants aged 40 and over who were denied employment recommendations through Workday’s platform from September 2020 onwards. The court accepted that there was enough of a common policy (employers using Workday’s AI‑driven hiring tools) to justify treating many applicants as part of the same case.
  • Authorising opt‑in and public notice in 2026: The court has now authorised a formal notice process telling eligible applicants that they can opt in by filing a consent form, with information made available via a dedicated website and law‑firm announcements. Commentators describe Mobley vs Workday as one of the AI‑related employment cases to watch in 2026, precisely because it could evolve into one of the largest collectives of its kind.

Taken together, these steps show why this is not a minor, technical dispute. It has grown into a live test case about where responsibility lies for AI‑driven decisions and how far anti‑discrimination law reaches into third‑party software platforms.

Why this matters beyond hiring

On the surface, Mobley vs Workday is about employment. But the underlying questions are directly relevant to assessment, credentialing and selection in education:

  • If an AI system plays a major role in deciding outcomes, who is responsible for its impact – the institution, the vendor, or both?
  • How can we demonstrate that AI‑influenced decisions are fair, reliable and non‑discriminatory across different groups?
  • What happens when a model is effectively a black box, and neither students nor staff can clearly explain how a particular mark or decision was reached?

One of the clearest signals from Mobley vs Workday is that courts are willing to treat AI vendors as part of the decision‑making chain, not just as neutral suppliers. That has obvious implications for assessment technology providers who build and sell AI scoring engines, and for the institutions that choose to rely on them for high‑stakes outcomes.

The case also highlights the risk of opacity. The concerns Derek Mobley raises about unseen variables, unknown weightings and historic data encoding old patterns of exclusion are exactly the concerns many educators have about black‑box scoring models. If a grade comes from a model that nobody can meaningfully interrogate, trust will always be fragile.

How RM Compare and ACJ fit into this picture

At RM Compare, we have deliberately taken a different approach to AI in assessment. Our Adaptive Comparative Judgement (ACJ) model keeps humans at the centre: rather than replacing automated marking, it provides a rigorous way to validate it.
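
ACJ tools of this kind are typically built on pairwise‑comparison models in the Bradley–Terry family: judges repeatedly pick the better of two pieces of work, and an estimation step turns those wins and losses into a measurement scale. The sketch below is a minimal, illustrative Python version of that general idea; the function name, the pseudo‑count regularisation and the data shapes are assumptions for the example, not a description of RM Compare’s internals.

```python
from collections import defaultdict
import math

def fit_bradley_terry(judgements, n_iters=200, eps=0.5):
    """Turn pairwise judgements into per-script quality estimates.

    judgements: list of (winner_id, loser_id) tuples, one per decision.
    eps: small pseudo-count of phantom wins/losses against a reference
         opponent, so scripts with no wins (or no losses) stay finite.
    Returns a dict mapping script id -> strength on a log scale.
    """
    wins = defaultdict(int)         # comparisons won by each script
    pair_counts = defaultdict(int)  # times each unordered pair was compared
    scripts = set()
    for winner, loser in judgements:
        wins[winner] += 1
        pair_counts[frozenset((winner, loser))] += 1
        scripts.update((winner, loser))

    strength = {s: 1.0 for s in scripts}  # multiplicative BT parameters
    for _ in range(n_iters):
        updated = {}
        for s in scripts:
            # Standard MM update: wins divided by expected exposure,
            # plus eps phantom games against a reference of strength 1.
            denom = 2 * eps / (strength[s] + 1.0)
            for pair, count in pair_counts.items():
                if s in pair:
                    (other,) = pair - {s}
                    denom += count / (strength[s] + strength[other])
            updated[s] = (wins[s] + eps) / denom
        # Fix the scale so strengths have geometric mean 1.
        log_mean = sum(math.log(v) for v in updated.values()) / len(updated)
        strength = {s: v / math.exp(log_mean) for s, v in updated.items()}

    return {s: math.log(v) for s, v in strength.items()}
```

Even a handful of judgements, such as [("A", "B"), ("A", "C"), ("B", "C")], is enough to produce a rank order; the consensus scale that emerges from a well‑judged sample is what the workflow below treats as the human gold standard.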

In a validation‑layer workflow, AI does the first pass at scale and RM Compare checks its work:

  • Human gold standard: A statistically robust sample of scripts is run through RM Compare, where expert judges compare pairs of work and build a consensus “gold standard” for that sample.
  • Overlay and calibration: AI scores for the same scripts are overlaid on this human benchmark, instantly revealing where the model disagrees with expert consensus, where particular subgroups are mis‑scored, or where the model is drifting over time.
  • Targeted human review: The majority of AI scores that align with the benchmark can flow through; the 10–20% of “misfit” cases are automatically routed for human review and, if needed, used to retrain or adjust the model.

In other words, RM Compare acts as a gold‑standard validation layer that lets organisations enjoy AI’s speed and scale while making sure that high‑stakes decisions remain anchored to defensible human judgement. This is the opposite of the black‑box pattern at issue in Mobley vs Workday: AI is not left to run unsupervised, and vendors and institutions can show regulators exactly how they are monitoring fairness and performance over time.
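
To make the overlay step concrete, here is a minimal sketch of what comparing AI scores against an ACJ benchmark can look like. It is illustrative only: the linear calibration, the two‑standard‑deviation misfit rule and every name in it are assumptions for the example, not RM Compare’s actual method.

```python
import statistics

def flag_misfits(acj_scores, ai_scores, z_threshold=2.0):
    """Overlay AI scores on a human ACJ benchmark and flag disagreements.

    acj_scores, ai_scores: dicts mapping script id -> score (same keys).
    Returns (aligned_ids, misfit_ids): scripts whose AI score sits more
    than z_threshold standard deviations from the human consensus, after
    a simple linear calibration, are routed for human review.
    """
    ids = sorted(acj_scores)
    x = [ai_scores[i] for i in ids]   # AI first-pass scores
    y = [acj_scores[i] for i in ids]  # human gold-standard scale

    # Least-squares line mapping the AI scale onto the ACJ scale
    # (assumes the AI scores are not all identical).
    mean_x, mean_y = statistics.fmean(x), statistics.fmean(y)
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    slope = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / sxx
    intercept = mean_y - slope * mean_x

    # Residual: how far the human judgement sits from the calibrated AI score.
    residuals = {i: acj_scores[i] - (slope * ai_scores[i] + intercept)
                 for i in ids}
    spread = statistics.pstdev(residuals.values())

    misfits = {i for i, r in residuals.items() if abs(r) > z_threshold * spread}
    return [i for i in ids if i not in misfits], sorted(misfits)
```

Run per cohort or per subgroup, the same residual check doubles as fairness evidence: systematic misfit concentrated in one group is exactly the kind of disparate impact at issue in Mobley vs Workday, surfaced before it reaches a live decision.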

What to watch next in Mobley vs Workday

Because the case is ongoing, the most important developments still lie ahead. Over the coming months, it will be worth watching:

  • How many additional applicants choose to opt in, and how large the collective becomes.
  • What evidence the court requires about how Workday’s AI tools actually work and how they were monitored for bias.
  • Whether the court ultimately finds that Workday can be held liable as an agent of its customers, and on what reasoning.

Whatever the outcome, Mobley vs Workday has already changed the conversation. It has shown that AI vendors can be drawn directly into discrimination claims and that black‑box automation is a strategic risk, not just a technical shortcut. For assessment leaders, this is a powerful prompt to ask: in our own systems, is AI an unaccountable decision‑maker, or a transparent partner to human judgement?