Derek Mobley is a man over 40 living in the United States who experiences anxiety and depression and holds a finance degree from Morehouse College. Between 2017 and 2024, he applied for over 100 jobs at companies using Workday’s AI-driven hiring tools and was rejected each time. He alleges that these AI systems rely on unlawful biases, resulting in discriminatory practices.
Consequently, he filed a lawsuit against Workday. The case concerns Workday’s AI-powered applicant screening tools and their alleged discrimination against multiple social identities, including disability.
What did the court say?
The court agreed that an AI vendor could be held directly responsible for employment discrimination under Title VII, the ADAAA, and the ADEA. The Equal Employment Opportunity Commission (EEOC) also took the position that AI vendors can bear liability in cases of hiring discrimination. This landmark judgement sets a precedent for the use of AI in recruitment, making it essential to introduce checks that preserve neutrality and non-discrimination.
The onus is on employers to prioritise a thorough review of their AI-powered hiring tools and to clearly define the role these tools play in their hiring decisions. To avoid future liability, employers must be ready to demonstrate that their use of AI hiring tools does not have a discriminatory effect on marginalised social identities, such as persons with disabilities.
Is AI leading to exclusion?
An estimated 70% of companies and 99% of Fortune 500 companies use AI in their hiring processes. Employers may find the use of AI in recruitment and hiring decisions highly appealing, as it can save costs and reduce the numerous hours that human resources teams previously spent on manual processes.
But AI has not yet evolved beyond the biases embedded in its training data. AI algorithms may flag applicants who don’t match an “ideal” profile. As a result, candidates with employment gaps, atypical career paths, or perceived physical or cognitive disabilities may be filtered out. These tools can be especially problematic for applicants with physical disabilities or speech impairments, as they may misinterpret their communication styles, leading to further discrimination and rejection.
What can your organisation do to avoid such discriminatory practices?
As AI technology becomes an integral part of recruitment processes, stakeholders must commit to building fairer, more transparent, and inclusive systems. Here are some steps your organisation can take to prevent such practices.
- Bias audits: AI systems should undergo regular audits to identify and mitigate discriminatory patterns and to ensure they perform equally across all demographic groups. This includes adjusting systems to recognise all kinds of visible and invisible disabilities.
- Transparent processes: Employers and AI vendors must provide greater transparency on how decisions are made. Candidates should be informed of the criteria used and reasons for rejection.
- Human oversight: AI should augment, not replace, human decision-making. Combining AI’s efficiency with human judgement can help add context and reduce the likelihood of exclusion. This is especially vital for providing necessary accommodations to individuals with disabilities during the hiring process.
- Training: Employers should train recruiters, hiring managers, and other professionals on unconscious bias and fair hiring practices focusing on inclusion of persons with disabilities.
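To make the bias-audit step above concrete, one common starting point in adverse-impact analysis is the “four-fifths rule”: if any group’s selection rate falls below 80% of the highest group’s rate, the process warrants closer review. The sketch below is illustrative only; the group names and data are hypothetical, and a real audit would involve far more rigorous statistical and legal analysis.

```python
from collections import Counter

def selection_rates(outcomes):
    """outcomes: list of (group, was_selected) pairs -> selection rate per group."""
    applied = Counter()
    selected = Counter()
    for group, was_selected in outcomes:
        applied[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / applied[g] for g in applied}

def four_fifths_check(outcomes):
    """Flag groups whose selection rate is below 80% of the highest group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: (rate / best >= 0.8) for g, rate in rates.items()}

# Hypothetical screening outcomes: group_a selected at 40%, group_b at 25%.
outcomes = (
    [("group_a", True)] * 40 + [("group_a", False)] * 60
    + [("group_b", True)] * 25 + [("group_b", False)] * 75
)
print(four_fifths_check(outcomes))
# group_b's ratio is 0.25 / 0.40 = 0.625, below the 0.8 threshold, so it is flagged.
```

A check like this is only a screening heuristic, not a legal safe harbour; it is best paired with the transparency, human oversight, and training measures listed above.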
As we navigate this evolving landscape, it is crucial to remember that technology should serve as a tool to uplift, not exclude, fostering workplaces that truly reflect the values of diversity and inclusion. Implementing an Equal Employment Opportunity (EEO) Policy can be a crucial first step in promoting fairness and preventing discrimination in hiring.
To know more about establishing bias-free hiring practices, write to us at hello@serein.in.