Mobley v. Workday: A Wake-Up Call for Collective Action on Responsible AI in HR

The Mobley v. Workday lawsuit has rapidly become one of the most closely followed legal battles in the HR tech world—and rightfully so.

Innovation cycles are moving at breakneck speed, AI is increasingly embedded in recruitment workflows, and job-seekers—frustrated by poorly designed candidate experiences and a lack of transparency in how employers are (and aren’t) using AI—are jumping on the anti-AI bandwagon.

This lawsuit has thrust the industry into a moment of reflection. Vendors, employers, and HR leaders alike are wrestling with core questions about accountability, transparency, and the trustworthiness of our tech tools.

The case hits directly at a foundational question we’ve all been grappling with for years: How accountable should tech vendors be for the outcomes driven by their algorithms?

The recent Wall Street Journal coverage underscores these issues, but I found myself frustrated reading through it. Honestly, the analysis risks oversimplifying AI bias and conflating human-set screening parameters with autonomous algorithmic decisions.

We all know headline hysterics are a real thing in an age of confirmation bias and limited attention spans. My concern is that reactions without real understanding could lead employers to prematurely abandon beneficial AI tools when they should instead be thoughtfully evaluating their processes.

And I’m not alone. Last month, I had the privilege of moderating a panel featuring top HR and technology experts. Our mission was clear: to unpack the core legal and ethical dimensions of the Mobley v. Workday case and provide practical guidance for HR and tech teams navigating these uncertain waters.

Joining me were three panelists you’ll hear from throughout this piece: Jung, Jeff, and Sarah.

Honestly, we nailed it (seriously, you should check out the full panel recording here). We spent an hour breaking down exactly what the case involves, what it doesn’t, and highlighting the key issues leaders need to watch.

I thought that, in light of the Wall Street Journal’s article, the conversation was worth revisiting.

At the highest level, we all agreed on one thing: This isn’t merely a lawsuit—it’s an inflection point for our industry.

Mobley v. Workday signals a shift where “responsible AI” moves beyond marketing buzzwords and becomes a vital competency for any credible vendor in our space. Jung put it plainly: “This case stands out because it challenges the assumption that technology vendors aren’t liable for how their customers use AI. That alone makes it significant.”

To navigate what’s ahead effectively, HR and tech leaders need to deeply understand the broader implications of Mobley v. Workday. Grasping this will be key to safely harnessing AI’s enormous potential.

But to fully appreciate these implications, it’s important to understand exactly why this case is groundbreaking and recognize the significant unanswered questions at this early stage.

Reality Check: We Have More Questions than Answers Right Now

One thing that quickly emerged from our discussion was just how many unanswered questions there are at this point. Despite the extensive media attention and widespread speculation, there’s no publicly available definitive proof of bias yet. In fact, Workday strongly asserts that they conduct rigorous testing and have never seen any indication of bias in their tools.

Jeff clarified this in our conversation. “People assume evidence of bias already exists,” he acknowledged, “but we’re not there yet. Right now, we have far more questions than answers.”

The case is now entering a critical discovery phase, where the data and evidence will finally come into sharper focus.

For now, though, HR and technology leaders should prioritize getting a clear understanding of AI’s nuanced role in hiring decisions and prepare thoughtfully for various potential outcomes.

That said, there are some immediate things we can learn—and even some things we can act on based on what we know so far. I’ve put together my takeaways based on our discussion and my analysis of the write-ups I’ve seen so far. Here’s the TL;DR:

  1. AI isn’t inherently risky—but how we implement and oversee it matters immensely.
  2. AI literacy is now a critical skill for HR leaders. Understanding technology’s role in decision-making processes is essential.
  3. Regular and rigorous testing of AI tools, including independent auditing by both the vendor and the customer, will be crucial for trust and compliance going forward.
  4. Transparency and clear communication from vendors about AI capabilities and limitations will become increasingly important.
  5. HR leaders should proactively audit and reevaluate candidate journeys to spot potential biases and reduce risks. This is a shared responsibility between HR leaders and technology vendors.

AI Isn’t Inherently High- or Low-Risk. Collaborative Governance is the Differentiator.

Given the sensational headlines and the armchair legal analysts coming out of the woodwork, a natural reaction might be to slam the brakes on the use of AI across your HR and talent functions. But our panel firmly agreed that such a reaction would be shortsighted.

“It would be a poor decision to freeze all AI adoption because of this case,” Jung said. “Instead, we should adopt low-risk AI tools thoughtfully and with proper oversight.”

Jeff reinforced this idea, noting that the core issue isn’t the AI itself, but rather how we manage, deploy, and oversee these tools.

“The takeaway isn’t to stop using AI,” said Jeff, “it’s to put better guardrails in place so we can leverage its benefits responsibly.”

AI isn’t going anywhere—quite the opposite, frankly. Navigating this wave of transformation responsibly and confidently requires us to clearly differentiate between low-risk and high-risk applications.

For example: Tools that enhance productivity, like scheduling assistants or interview note summaries, are generally low-risk.

On the other hand, tools that generate insights or orchestrate processes to support more efficient and data-driven decision-making still have error rates—and they need meticulous evaluation and consistent oversight to ensure fairness, effectiveness, and compliance.

It’s important to recognize that many vendors, including Workday, already apply stringent testing and assessments for high-risk AI use cases during product development. HR leaders should take a similar approach, rigorously evaluating and testing their AI tools to ensure they’re deploying these technologies responsibly, effectively, and ethically within their own organizations.

Ultimately, the responsible use of AI hinges on strong governance frameworks and a clear understanding of how AI integrates with human decision-making.

As Jung stated, “Risk management is not a legal decision, it’s really an organizational decision. Collaboration around legal, product, and HR is critical. Product can’t decide by themselves what their tool is going to look like, and legal can’t advise product without really understanding the underlying algorithms.”

Sarah drove this point home further, laying out practical steps for managing potential risks:

“You need to do this collaboratively. You need to work very closely with your legal counsel. Ideally, you’re getting executive sponsorship to investigate potential risk, as opposed to just shutting it down.”

Independent Auditing Will Be Mandatory, Not Optional

One outcome from Mobley v. Workday that I can confidently predict: consistent, standardized, and transparent AI auditing will quickly become an industry norm.

While third-party audits are critical, broader adoption of widely agreed-upon testing standards and greater transparency around testing practices—already embraced by some leading vendors—will also become essential.

During our panel, Jeff shared an important insight on this topic: “Less than 5% of third-party audits cover characteristics beyond race and gender, like age. This case highlights why that needs to change—and fast.”

Given the speed of AI innovation and the stakes involved, rigorous and regular audits won’t just be best practices—they’ll be mandatory. And these assessments need to comprehensively cover a wider set of protected characteristics and be conducted with clear, transparent methodologies.
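To make the auditing idea concrete, here is a minimal sketch of one widely used fairness check: the EEOC’s four-fifths (80%) rule, which compares selection rates across groups. The group labels and counts below are hypothetical, and a real audit would cover many more protected characteristics (including age, as Jeff noted) and apply more rigorous statistical tests on top of this rule of thumb.

```python
def selection_rate(selected, applicants):
    """Fraction of applicants in a group who advanced past a screening stage."""
    return selected / applicants

def adverse_impact_ratios(groups):
    """Compare each group's selection rate to the highest-rate group.

    groups: dict mapping group label -> (selected_count, applicant_count).
    Returns dict of label -> impact ratio. Under the EEOC's four-fifths
    rule of thumb, a ratio below 0.8 flags potential adverse impact.
    """
    rates = {g: selection_rate(s, a) for g, (s, a) in groups.items()}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical screening outcomes for an age-based audit.
outcomes = {
    "under_40": (50, 200),    # 25% advance
    "40_and_over": (30, 200), # 15% advance
}

for group, ratio in adverse_impact_ratios(outcomes).items():
    flag = "FLAG" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

In this hypothetical, the 40-and-over group advances at 60% of the younger group’s rate, well below the 0.8 threshold, which is exactly the kind of signal a transparent, regularly repeated audit is meant to surface before a regulator or plaintiff does.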

To put it bluntly: Vendors should expect more rigorous scrutiny, and HR leaders need to be prepared to ask tougher questions. Jung underscored this clearly:

“You should demand third-party audits from vendors and ensure these results are transparent and easy to understand.”

Transparency in testing methodologies and audit results is more than good governance—it’s a foundational step toward building trust and reassuring stakeholders across the organization.

Vendors Must Show Their Work. Transparency and Clarity Will Define Vendor Relationships.

Building on the topic of audits, transparency itself is poised to become a defining characteristic of successful partnerships between HR and its solution providers. Historically, some vendors have treated transparency more as a marketing message than a fully embedded operational best practice.

However, leading vendors have increasingly shown a genuine commitment to transparency and explainability, sharing methodologies and findings directly with their customers.

My lovely panelists agreed, underscoring the importance of extending this practice throughout the industry.

Jung put it perfectly when she said transparency needs to be more than just a marketing talking point. “Transparency means using plain English, not engineering jargon, to help customers truly understand how AI impacts their processes.”

Sarah reinforced this perspective, highlighting the practical side of transparency. “True transparency from vendors means giving HR teams clear, practical insights—not just technical details—so we can confidently communicate how AI impacts hiring decisions across our organizations.”

This shift toward plain-language communication isn’t just a courtesy—it’s a mission-critical commitment. HR leaders don’t just need technical documentation; they need clarity. They must be able to communicate risks, benefits, and compliance efforts to internal stakeholders, including the legal department, operations partners, and the C-suite.

If vendors truly want to earn trust, both transparency and clarity need to become ingrained in every interaction, every audit, and every conversation about AI tools. Ultimately, straightforward communication isn’t just smart business—it’s going to be table stakes in a world increasingly defined by complexity and accountability.

Hidden Risks Aren’t Risks You Can Afford. Reevaluating Candidate Journeys is a Must-Do Now.

With all this in mind, perhaps the most immediate takeaway for HR leaders is clear: it’s time to proactively reevaluate your candidate journeys.

The Mobley v. Workday case underscores the critical importance of understanding every step of your hiring process—including the role technology plays in filtering or recommending candidates.

At the center of this case is the question of whether Workday’s technology itself decides which candidates advance and which don’t, or whether employers are merely setting their own screening criteria, a widely accepted practice.

Either way, the case is an important reminder that even if final hiring decisions remain human-driven, any reliance on technology at key stages demands careful scrutiny.

Sarah provided practical guidance during our panel, emphasizing proactive action: “A practical first step is to audit your candidate journey to identify precisely where and how technology influences candidate progression.”

You can’t afford to wait until a lawsuit or compliance issue emerges to examine your processes. The time to ensure fairness, transparency, and accountability across your entire hiring journey—regardless of who or what makes the final decision—is now.

There is some confusion—and more than a little angst—among jobseekers today regarding how decisions are made, especially when they are rejected within minutes of applying.

Many application processes include knockout questions that automatically screen out candidates. This isn’t AI at all, but it can (and does, as Mobley’s case demonstrates) undermine candidate trust in the fairness of an employer’s process.
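To illustrate what this kind of non-AI screening looks like in practice, here is a deliberately simplified sketch of rule-based knockout questions. The questions and thresholds are hypothetical, not Workday’s or any vendor’s actual logic. The point is that plain boolean rules, with no machine learning involved, can reject a candidate minutes after they apply, and to that candidate the experience is indistinguishable from an opaque algorithm.

```python
# Hypothetical knockout rules of the kind embedded in many apply flows.
# Each rule is a hard yes/no filter; no AI or ML is involved.
KNOCKOUT_RULES = [
    ("work_authorization", lambda a: a.get("authorized_to_work") is True),
    ("minimum_experience", lambda a: a.get("years_experience", 0) >= 3),
    ("willing_to_relocate", lambda a: a.get("will_relocate") is True),
]

def screen(application):
    """Return (advanced, failed_rules) for one application."""
    failed = [name for name, passes in KNOCKOUT_RULES if not passes(application)]
    return (len(failed) == 0, failed)

# A candidate who misses one hypothetical threshold is rejected instantly.
advanced, failed = screen({
    "authorized_to_work": True,
    "years_experience": 2,
    "will_relocate": True,
})
print(advanced, failed)
```

Auditing the candidate journey means finding every filter like this, documenting why each rule exists, and confirming that none of them produces disparate outcomes for protected groups.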

“Understand exactly where and how automated decisions occur to identify any hidden risk,” recommended Sarah.

In this new era of heightened accountability and transparency, those who act proactively—rather than reactively—will set themselves apart. The time to get ahead of this issue is now.

It’s Not Us Versus Them. HR and Vendors Can Raise the Bar for Ethical AI Use Together.

At the end of the day, the Mobley v. Workday case serves as a loud and necessary wake-up call. It signals that the bar for responsible AI use in HR isn’t just rising—it’s fundamentally shifting.

Our industry needs to move quickly from treating responsible AI as a lofty ideal to embracing it as an absolute requirement.

Jeff captured this shift succinctly, clarifying that, “The implications from this case aren’t about stopping AI; they’re about raising the bar and setting new, higher standards for how we deploy AI responsibly.”

For HR leaders, this means stepping up to actively shape AI adoption strategies. It requires that we move past a passive, vendor-driven approach to a more proactive, informed, and deliberate strategy—one where we’re asking tough questions, demanding clearer answers, and setting higher expectations for transparency and accountability.

For technology vendors, this case is a stark reminder that responsible AI isn’t just a competitive advantage anymore; it’s a baseline requirement.

Those who recognize and rise to this new reality will set themselves apart—and those who ignore it will quickly fall behind.

As I see it, this is the moment for HR leaders and tech providers alike to lean in and lead. The path forward isn’t just about avoiding risk—it’s about embracing a stronger, more transparent, and ultimately more impactful approach to AI in HR.

We’ll be continuing the conversation—on this case, and on other major trends—all summer long. Join us at Kyle & Co for a front row seat: kyleandco.com.
