A report by Bloomberg this month is casting fresh doubt on generative artificial intelligence’s ability to improve recruitment outcomes for human resources departments.
In addition to generating job postings and scanning resumés, the most popular AI technologies used in HR are systematically putting racial minorities at a disadvantage in the job application process, the report found.
In an experiment, Bloomberg assigned fictitious but “demographically-distinct” names to equally qualified resumés and asked OpenAI’s GPT-3.5, the model behind ChatGPT, to rank them against a job opening for a financial analyst at a real Fortune 500 company. Names distinctive to Black Americans were the least likely to be ranked as the top candidate, while names associated with Asian women and white men typically fared better.
This is the sort of bias that human recruiters have long struggled with. Now, companies that have adopted the technology to streamline recruitment are grappling with how to avoid making the same mistakes, only faster.
With tight HR budgets, a persistent labour shortage and a broader talent pool to choose from (thanks to remote work), fashion companies are increasingly turning to ChatGPT-like tools to scan thousands of resumés in seconds and perform other tasks. A January study by the Society for Human Resource Management found that nearly one in four organisations already use AI to support their HR activities, and nearly half of HR professionals have made AI implementation a bigger priority in the past year alone.
As more evidence emerges of the extent to which these technologies amplify the very biases they’re meant to overcome, companies must be prepared to answer serious questions about how they will address those concerns, said Aniela Unguresan, an AI expert and founder of the Edge Certified Foundation, a Switzerland-based organisation that offers diversity, equity and inclusion certifications.
“AI is biased because our minds are biased,” she said.
Overcoming AI Bias
Many companies are incorporating human oversight as a safeguard against biased outcomes from AI. They’re also screening the inputs given to AI to try to stop the problem before it starts. But that erases some of the advantage the technology offers in the first place: if the goal is to streamline tasks, having human minders examine every outcome at least partly defeats the purpose.
How AI is used in an organisation is almost always an extension of the company’s broader philosophy, Unguresan said.
In other words, if a company is deeply invested in diversity, equity and inclusion, sustainability and labour rights, it is more likely to take steps to de-bias its AI tools. That can include feeding the machines broad sets of data and inputting examples of unconventional candidates in certain roles (for example, a Black woman as a chief executive or a white man as a retail associate). If fashion firms can train their AI in this way, it could help the industry move past decades-old inequities in its hierarchy, Unguresan said.
But it’s not foolproof. Google’s Gemini stands as a recent cautionary tale of AI’s potential to over-correct for bias, or to misinterpret prompts aimed at reducing it. Google suspended the AI image generator in February after it produced unexpected results, including Black Vikings and Asian Nazis, despite requests for historically accurate images.
Unguresan is among the AI experts who advise companies to adopt a more modern “skills-based recruitment” approach, in which tools scan resumés for a wide range of attributes and place less emphasis on where or how skills were acquired. Traditional methods have often shut out candidates who lack specific credentials (such as a college education or past positions at a certain type of retailer), perpetuating cycles of exclusion.
Other options include removing names and addresses from resumés to ward off the preconceived notions that humans, and the machines they employ, bring to the process, noted Damian Chiam, partner at the fashion-focused talent agency Burō Talent.
Most experts in HR and AI seem to agree that AI is rarely a suitable one-to-one replacement for human talent – but knowing where and how to employ human intervention can be challenging.
Dweet, a London-based fashion jobs marketplace, employs artificial intelligence to craft postings for clients like Skims, Puig and Valentino, and to generate applicant shortlists from its pool of over 55,000 candidate profiles. However, the platform also maintains a team of human “talent managers” who oversee the process and weigh feedback from both the AI and Dweet’s human clients (brands and candidates) to address any limitations of the technology, said Eli Duane, Dweet’s co-founder. Although Dweet’s AI doesn’t omit candidates’ names or education levels, its algorithms are trained to match talent with jobs based solely on work experience, availability, location and interests, he said.
Missing the Human Touch – or Not
Biases aside, Burō’s clients, including several European luxury brands, haven’t expressed much interest in using AI to automate recruitment, said Janou Pakter, partner at Burō Talent.
“The issue is this is a creative thing,” Pakter said. “AI cannot capture, understand or document anything that’s special or magical – like the brilliance, intelligence and curiosity in a candidate’s portfolio or resumé.”
AI also can’t address the biases that can emerge long after it’s filtered down the resumé stack. The final decision ultimately rests with a human hiring manager – who may or may not share AI’s enthusiasm for equity.
“It reminds me of the times a client would ask us for a diverse slate of candidates and we would go through the process of curating that, only to have the person in the decision-making role not be willing to embrace that diversity,” Chiam said. “Human managers and the AI need to be aligned for the technology to yield the best results.”