eCommerceNews Australia - Technology news for digital commerce decision-makers

AI hiring tools judged fairer than humans, boosting diversity

Wed, 2nd Jul 2025

Recent research from Warden AI indicates that artificial intelligence systems used in hiring processes outperform human recruiters when assessed for fair decision making, with 85% of audited AI models meeting industry fairness thresholds and notable gains for female and minority candidates.

As businesses continue to increase their adoption of AI and automation in human resources, the report entitled 'State of AI Bias in Talent Acquisition 2025' suggests that AI technology is providing fairer outcomes in the talent acquisition process, challenging ongoing concerns around bias and discrimination in algorithm-driven hiring.

Fairness evaluations

The Warden AI study examined over one million test samples and more than 150 formal AI audits spanning over 100 HR technology vendors. The report specifically analysed so-called "high-risk" AI systems in talent acquisition and talent intelligence. Findings revealed that AI models were not only fairer but also more consistent across the demographic groups assessed, compared with human-led processes. The technology also generally displayed high levels of consistency when demographic-linked attributes such as names were changed, with 95% of audited models meeting the highest standards.
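The name-swap consistency check described above can be sketched in code: score the same CV under different demographically linked names and verify the output barely moves. The following is an illustrative sketch only, with an assumed stand-in scoring function; it is not Warden AI's actual audit harness.

```python
# Illustrative counterfactual name-swap check (assumed scoring function;
# not Warden AI's actual audit methodology).

def score_cv(cv_text: str) -> float:
    # Stand-in for a vendor's CV-screening model: a trivial keyword
    # scorer that, by construction, ignores the candidate's name.
    keywords = ("python", "sql", "leadership")
    return sum(kw in cv_text.lower() for kw in keywords) / len(keywords)

def name_swap_consistent(template: str, names: list[str],
                         tolerance: float = 0.02) -> bool:
    """Score one CV under each name; pass if scores stay within tolerance."""
    scores = [score_cv(template.format(name=n)) for n in names]
    return max(scores) - min(scores) <= tolerance

cv = "{name} - 5 years of Python and SQL experience, team leadership."
consistent = name_swap_consistent(cv, ["Emily Walsh", "Lakisha Washington"])
# A model whose score shifts with the name alone would fail this check.
```

In a real audit the stand-in scorer would be replaced by calls to the vendor's model, and the name list would be drawn from names statistically associated with different demographic groups.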

Although high-profile cases such as Mobley v Workday have brought legal scrutiny and public concern to bias and discrimination in AI-driven hiring, the study provides data that supports a more nuanced perspective. According to Warden AI's research, three-quarters (75%) of HR leaders still rate bias as a leading concern when evaluating AI tools, ranking just behind data privacy.

Comparative outcomes for candidates

The audit process measured a "fairness score", analogous to an impact ratio, which examines how different demographic groups fare during hiring stages such as CV screening. While not all AI models passed, the majority significantly outperformed human-led processes on fairness metrics. On average, AI models achieved a fairness score of 0.94, compared with 0.67 for human recruiters.
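The impact ratio mentioned above is a standard adverse-impact metric: each group's selection rate at a given stage is divided by the highest group's rate, so 1.0 means parity. The sketch below uses hypothetical screening counts, not figures from the report, and is an illustration of the general metric rather than Warden AI's exact calculation.

```python
# Illustrative impact-ratio calculation (hypothetical data; the general
# adverse-impact metric, not Warden AI's specific methodology).

def impact_ratio(selected: dict[str, int],
                 total: dict[str, int]) -> dict[str, float]:
    """Each group's selection rate divided by the best-performing group's rate."""
    rates = {g: selected[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical CV-screening outcomes for two groups of 100 applicants:
ratios = impact_ratio({"group_a": 47, "group_b": 30},
                      {"group_a": 100, "group_b": 100})
# group_a passes at 47%, group_b at 30%, so group_b's ratio is
# 30/47 ≈ 0.64 - below the common "four-fifths" (0.8) threshold,
# which would flag potential adverse impact at this stage.
```

Under this reading, the reported averages of 0.94 for AI models and 0.67 for human recruiters mean the typical audited AI system cleared the four-fifths threshold while the typical human-led process did not.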

In terms of demographic comparison, female candidates reportedly experienced up to 39% fairer treatment under AI-based hiring systems, while racial minority candidates saw up to 45% greater fairness. Despite these positive markers, 15% of AI tools reviewed failed to meet fairness thresholds for all groups, with performance inconsistencies of up to 40% observed between different vendors. This variance highlights the need for careful partner selection and ongoing monitoring when deploying AI solutions in hiring.

HR buyer perspectives

While the research shows positive trends, caution remains among HR leaders making procurement decisions. Only 11% of surveyed HR buyers report disregarding AI-related risks when selecting vendors, and 46% view a clear commitment by vendors to responsible AI as a key factor in the success of implementation. The heightened public attention on legal and reputational risks, driven by cases such as Mobley v Workday, contributes to this vigilance.

Business leaders and the public rightfully are concerned about AI bias and its impacts. But this fear is causing us to lose sight of how flawed human decision-making can be and its potential ramifications for equity and equality in the workplace. As our research shows, AI isn't automatically a better or worse solution for talent acquisition. This is a wake-up call to HR and business leaders: when used responsibly, AI doesn't just avoid introducing bias, it can actually help counter inequalities that have long existed in the workplace.

The statement from Jeffrey Pole, CEO and co-founder of Warden AI and author of the report, reflects a call for balanced evaluation of both technology and human practices, particularly with regards to ongoing efforts to address workplace inequality.

Kyle Lagunas, Founder and Principal at Kyle & Co., reflected on the broader HR sector's evolution:

After a decade advising HR and Talent leaders on how to adopt technology responsibly, I've seen excitement around AI quickly give way to concern, especially around bias and fairness. But now is the time to lean in—and find real answers to the real risks we face. This report brings a number of interesting points together to crystallize this critical conversation. As the findings highlight, while AI bias is real, it is also measurable, manageable, and, thankfully, mitigatable.

The results from Warden AI's study feed into ongoing debates over the application of AI in recruitment and talent management. The capacity of AI models to meet or exceed accepted fairness criteria in most cases does not eliminate the need for vigilance among HR leaders, who remain responsible for minimising risk and ensuring equitable outcomes for all candidates.