GenAI in hiring reaches flashpoint as legal precedent looms
It’s beginning.
The backlash against using AI in hiring is building momentum and gaining a higher profile.
Disgruntled job seekers venting their frustration online is one thing – and I’ll get to that shortly – but a legal case in California is what every stakeholder in this field is paying attention to right now.
Last month, a California judge ruled that Workday Inc. – which describes itself as an “AI platform for HR and finance” – must face a nationwide collective action that could implicate tens of millions of job seekers who were rejected for employment through an application process run via Workday. This is the first collective action to challenge the use of AI in recruitment software, and it will be instructive for vendors and for employers who rely on vendors such as Workday, or who use AI internally, to make decisions about job applicants.
The case was brought by Derek Mobley, a Black American over the age of 40. Mobley claims that since 2017 he has applied for more than 100 jobs with companies that use Workday’s screening features for hiring and has been rejected every time.
Mobley initially filed an individual lawsuit in the US District Court for the Northern District of California against Workday, alleging race discrimination under Title VII and Section 1981, age discrimination under the ADEA, and disability discrimination under the ADA. Mobley’s lawyers asserted in the lawsuit that “Because there are no guardrails to regulate Workday’s conduct, the algorithmic decision-making tools it utilises to screen out applicants provide a ready mechanism for discrimination.”
Mobley was subsequently joined by four other plaintiffs who alleged that they, too, had applied for hundreds of jobs via Workday and been rejected almost every time without an interview, purportedly because of their age. In response, Workday filed a motion to dismiss, arguing that it was not the employer and did not make the employment decisions. The judge denied the motion and allowed the case to proceed, holding that Workday could potentially be held liable as an “agent” of the employers who rejected Mobley’s applications.
In her decision, the judge held that the critical issue at the heart of Mobley’s claim is whether Workday’s screening system has a disparate impact on applicants over forty.
After additional discovery, Workday will have the opportunity to present evidence that the members of the collective are not, in fact, similarly situated, and to ask the court to decertify the collective and require each plaintiff to proceed individually.
According to law firm Fisher Phillips, the case is based on “disparate impact theory”, which allows claims to continue without proof of direct “intentional discrimination.”
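For a sense of how disparate impact is measured in practice, US regulators have long used the EEOC’s “four-fifths rule” as a rough yardstick: if one group’s selection rate falls below 80% of the most-selected group’s rate, that is treated as preliminary evidence of adverse impact. Here is a minimal sketch with invented numbers (the applicant counts below are hypothetical, not figures from the Mobley case):

```python
# Illustrative check against the EEOC's four-fifths (80%) rule.
# All applicant counts are invented for this sketch; they are not
# figures from the Mobley case or any Workday client.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who advanced."""
    return selected / applicants

rate_under_40 = selection_rate(80, 1000)  # 8.0% of under-40 applicants advanced
rate_over_40 = selection_rate(30, 1000)   # 3.0% of over-40 applicants advanced

# Impact ratio: the protected group's rate relative to the highest rate.
impact_ratio = rate_over_40 / rate_under_40
print(f"Impact ratio: {impact_ratio:.2f}")  # 0.38

if impact_ratio < 0.8:
    print("Below the four-fifths threshold: preliminary evidence of adverse impact")
else:
    print("Within the guideline")
```

The four-fifths rule is a regulatory rule of thumb rather than a definitive legal test, but it shows why a claim like Mobley’s can rest on outcome statistics alone.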
According to Dr. Janice Gassam Asare, writing in Forbes yesterday, “AI bias is pervasive in recruitment and hiring tools. A 2024 study from the University of Washington revealed racial and gender bias in AI tools used to screen resumes.
There can be data bias, which is when AI systems are trained on biased data that can contain an overrepresentation of some groups (white people, for example) and an underrepresentation of other groups (non-white people, for example). This can manifest in an AI tool that ends up rejecting qualified job candidates because it was trained on biased data.
There is also algorithmic bias, which can include developer coding mistakes, where a developer’s biases become embedded into an algorithm. An example of this is an AI system designed to flag job applicants whose resumes include certain terms meant to signal leadership skills like “debate team,” “captain,” or “president.” These key terms could end up filtering out job candidates from less affluent backgrounds or underrepresented racial groups, whose leadership potential might show up in non-traditional ways.”
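To make that failure mode concrete, here is a minimal sketch of what a naive keyword screener might look like – the term list, matching logic, and pass/fail threshold are invented for illustration and are not Workday’s or any other vendor’s actual code:

```python
# Hypothetical keyword-based resume screener (illustrative only).
# Note that nothing here references a protected trait: the bias is
# structural, because leadership expressed outside the expected
# vocabulary scores zero.

LEADERSHIP_TERMS = {"debate team", "captain", "president"}

def passes_screen(resume_text: str) -> bool:
    """Return True if the resume contains at least one flagged term."""
    text = resume_text.lower()
    return any(term in text for term in LEADERSHIP_TERMS)

# A candidate who founded and ran a community food bank is filtered
# out; a candidate who lists "debate team" sails through.
print(passes_screen("Founded and ran a community food bank for five years"))  # False
print(passes_screen("Member of the university debate team"))                  # True
```

A filter like this never mentions race or background, yet it systematically screens out candidates whose leadership shows up in non-traditional ways – which is precisely the disparate impact pattern described above.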
Research published two months ago by Dr Natalie Sheard, a University of Melbourne law school researcher, supports Dr Asare’s concerns. Dr Sheard warns that the use of AI hiring systems to screen and shortlist candidates risks discriminating against applicants due to biases introduced by the limited datasets the AI models were trained on.
Datasets built on limited information, often favouring American data over international data, present a risk of bias in those AI systems, Sheard said. One vendor claims, according to the paper, that its average word error rate for transcribing English-language speakers in the US is less than 10%. When tested with non-native English speakers with accents from other countries, however, that error rate increases to between 12% and 22%.
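Those percentages refer to word error rate (WER), the standard transcription-accuracy metric: substitutions, deletions and insertions in the transcript, divided by the number of words actually spoken. A minimal sketch of the standard calculation (the example sentences are invented):

```python
# Word error rate: (substitutions + deletions + insertions) / reference words,
# computed here via word-level edit (Levenshtein) distance.

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

# Invented example: one misheard word in a ten-word answer -> 10% WER.
spoken = "I led a team of five analysts for two years"
heard  = "I led a team of five analysts for too years"
print(f"{wer(spoken, heard):.0%}")  # 10%
```

At a 22% WER, roughly one word in five is wrong before the assessment algorithm even scores the answer – a compounding disadvantage for the accented speakers Sheard describes.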
“The training data will come from the country where they’re built – a lot of them are built in the US, so they don’t reflect the demographic groups we have in Australia,” Dr Sheard said.
Research participants told Sheard that non-native English speakers or those with a disability affecting their speech found their words being transcribed incorrectly, leading to lower scores from the vendor’s assessment algorithm.
Alleged discrimination due to the use of AI in a hiring process is yet to reach the courts in Australia. However, in its 2022 annual report, the federal Merit Protection Commissioner noted that 11 promotion decisions in Services Australia had been overturned after the AI-automated selection techniques used by the external vendor were found to “…not always meet the key objective of selecting the most meritorious candidates”.
Last week yahoo!finance published an article highlighting several job advertisements from major Australian employers, including Bunnings and Woolworths, in which disclosures about how AI is being used in the screening process drew online complaints.
Using chatbots and requiring a video recording of answers to AI-prompted questions is not unusual for low-skilled or semi-skilled roles; however, the article highlighted a $160k role where a human recruiter entered the hiring process only at the fourth stage – if you made it that far.
Some recruiters don’t seem concerned.
As one senior industry veteran commented to me last week, “Candidates have to get over their recent sense of privilege, Ross. They will gaslight you at the drop of a hat. AI takes away that option.”
It’s undoubtedly going to be a fascinating couple of years as employers seek to reduce the cost of their internal recruitment teams through AI, while recruitment agencies weigh the potential AI-generated cost savings against building deeper relationships with a smaller pool of high-quality talent – the type who are much less likely to apply for generic roles with a low-touch recruitment process.
As EVolv co-founder and chief product officer Raghav Singh wrote this week about the potential legal consequences of the Mobley v Workday case, “To mitigate this risk, employers should evaluate the use of hiring tools at all levels. While the reason for using AI is to improve efficiency and have fewer humans involved, the Mobley case suggests a need to have human oversight.
And that’s not just to avoid the risk of litigation. Anthropic CEO Dario Amodei has also said that anyone alarmed by the ignorance that exists about AI tools is “right to be concerned.” If you don’t know what the AI is doing, then you have no idea whether it’s excluding well-qualified candidates.”
Related blogs
Recruiters get ready – the unsettling impact of GenAI on careers is just beginning
The stakes for AI-improved recruitment just got raised (by A LOT)
Would an AI-generated script from your database be an advantage or an embarrassment?
AI clones Australian radio host for 6 months (and nobody notices)
How the heck would Workday know he was over 40?
Surely Americans don’t put their birthday on their resumes!!!
I assume from his total years listed under employment history, but I don’t really know.