AI has become a trusted partner in recruitment. It scans thousands of resumes, identifies promising candidates, and helps companies make quicker decisions. In a competitive market where firms like Google and McKinsey receive millions of applications each year, AI seems like the only scalable option.
But what happens when that trust breaks down?
A recent lawsuit against an HR software firm has reignited debate over the role of AI in hiring. The plaintiff, a qualified Black man over 40, claims he was repeatedly screened out by algorithmic systems, not because of a lack of skills, but because the software allegedly inferred his age, race, and mental health status from his resume.
It is one case. But it points to a broader concern: can AI be trusted to make fair hiring decisions?
On paper, AI looks impartial. It processes every application without fatigue, distraction, or mood. That’s part of why nearly 88% of companies now use AI at some stage of recruitment. But efficiency does not equal fairness.
AI learns from data, and data is shaped by human decisions. If those decisions carry bias, algorithms may replicate (and even amplify) it. For example, if a company’s past hires skewed toward one demographic, an AI trained on that data may favor similar profiles. Bias, in other words, doesn’t just persist. It scales. A biased recruiter might overlook dozens of qualified candidates. A biased algorithm can filter out thousands before a human ever sees them.
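To make that mechanism concrete, here is a minimal, hypothetical sketch (in Python, using scikit-learn; all data, numbers, and feature names are invented for illustration) of how a screening model trained only on seemingly neutral resume features can still reproduce a historical skew, because one of those features acts as a proxy for a protected group:

```python
# Hypothetical sketch: a screening model trained only on "neutral" features
# can still reproduce a historical skew when one feature acts as a proxy
# for a protected group. All data here is synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (never shown to the model): 0 = group A, 1 = group B.
group = rng.integers(0, 2, size=n)

# A "neutral" feature that happens to correlate with group membership,
# e.g. a hobby keyword or a particular alma mater on the resume.
proxy = rng.binomial(1, np.where(group == 0, 0.8, 0.2))

# Genuine qualification signal, identically distributed in both groups.
skill = rng.normal(0, 1, size=n)

# Historical hiring labels: past recruiters weighted skill, but also
# (consciously or not) favored group A. This is the bias in the data.
past_hired = (skill + 1.0 * (group == 0) + rng.normal(0, 1, size=n)) > 1.0

# Train on skill + proxy only; the protected attribute is excluded.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, past_hired)

# Score the same pool, in which both groups are equally skilled.
preds = model.predict(X)
for g, name in [(0, "group A"), (1, "group B")]:
    print(f"selection rate for {name}: {preds[group == g].mean():.2%}")
# Despite never seeing the group label, the model typically selects
# group A at a noticeably higher rate: the proxy carries the bias forward.
```

The point of the sketch is simply that removing protected attributes from the input is not enough; correlated features can smuggle the same pattern back in.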
These risks aren’t hypothetical. In one instance, altering a birthdate on a resume changed the hiring outcome. In another, AI favored hobbies like “baseball” over “softball,” mirroring past gendered hiring trends. These examples show how seemingly neutral data points can reflect and reinforce bias.
Bias in hiring already costs U.S. companies an estimated $64 billion annually. When AI adds complexity and opacity to the process, it introduces new legal and reputational risks. Regulators and courts are increasingly scrutinizing whether AI systems are discriminatory, by design or through negligence.
So who is accountable when AI gets it wrong?
Not the technology alone. Responsibility lies with the people who design, deploy, and oversee these systems: developers, vendors, HR leaders, and executives. Bias can enter through flawed training data, careless model design, or a rush to implement under-tested tools. Without active scrutiny, even well-intentioned systems can cause harm.
Yet this doesn’t mean AI should be abandoned. When built and deployed thoughtfully, AI can improve hiring outcomes. Research shows that candidates who perform well in AI-led interviews often go on to succeed in human interviews, too. AI can broaden the reach of recruitment, surfacing strong candidates who might otherwise be overlooked.
The key is to treat AI as an aid to human judgment, not a replacement for it. That means embedding ethical standards into every stage of development and use: regularly auditing AI systems, being transparent with applicants about how they are evaluated, and keeping a human accountable for final decisions.
Leadership plays a vital role here. Asking the right questions, such as "Who trained the model?", "What data was used?", and "How is fairness measured?", can make the difference between a system that amplifies bias and one that supports equity.
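One simple way a team might begin answering the last question is the "four-fifths rule" used in U.S. employment guidance as a rough screening threshold: compare each group's selection rate with the best-off group's and flag ratios below 0.8. Here is a hypothetical Python sketch (function names and sample numbers are invented; a flagged ratio is a prompt to investigate, not proof of discrimination):

```python
# Hypothetical audit helper: compare selection rates across groups and flag
# any group whose rate falls below 80% of the best-off group's rate
# (the "four-fifths rule", a common rough screening threshold).
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group_label, was_selected) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in records:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {g: sel / tot for g, (sel, tot) in counts.items()}

def disparate_impact_report(records, threshold=0.8):
    rates = selection_rates(records)
    best = max(rates.values())
    report = {}
    for group, rate in rates.items():
        ratio = rate / best if best else 0.0
        report[group] = {"rate": rate, "ratio": ratio, "flag": ratio < threshold}
    return report

# Example with made-up screening outcomes:
sample = (
    [("group_a", True)] * 60 + [("group_a", False)] * 40
    + [("group_b", True)] * 35 + [("group_b", False)] * 65
)
for group, stats in disparate_impact_report(sample).items():
    print(group, stats)
# group_b's ratio is 0.35 / 0.60 ≈ 0.58, well under 0.8 -- a signal to
# investigate further, not a verdict on its own.
```

A check like this is only a starting point; a real audit would also look at error rates, the features driving decisions, and outcomes over time.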
At its best, AI can help organizations build more diverse, dynamic teams. But that potential is only realized when the technology is applied with care, clarity, and accountability.
The next time your company uses AI to hire, don’t just ask if it was fast. Ask if it was fair.
Serein helps teams navigate the risks and possibilities of using AI at work. Reach out to explore how we can support yours.