
Organizations using AI in their hiring processes may be inadvertently recruiting more analytical employees — and fewer empathetic and intuitive people — because prospects may be changing their behaviours when they know AI is assessing them, a new study has found.

Specifically, candidates may be presenting their more analytical selves, thinking that’s what will get them past the AI screening process.

The researchers cite data from the World Economic Forum stating more than 90% of employers use automated systems to filter or rank job applications, and 88% already employ some form of AI for initial candidate screening.

Efficiency gains, saving both time and money, are frequently cited as the reasons for using AI in the recruitment process. The researchers say consumer goods giant Unilever uses AI-driven tools from HireVue to assess early career applicants, saving 50,000 hours and more than $1 million.

But, as the old adage about the observer effect goes, the act of observing something changes it.

“Our research suggests that knowing that one is assessed by AI…biases candidates, making them believe that they should prioritize their analytical capabilities,” the researchers say in an online blog published by Harvard Business Review.

“As a result, companies may be screening out exactly the candidates they need simply by using AI: that innovative thinker or emotionally intelligent leader you’re looking for might present themselves as a rule-following analyst because they believe that is what the AI wants to see.”

The research was conducted by Jonas Goergen, a PhD candidate at the Institute of Behavioral Science and Technology at the University of St. Gallen, Switzerland, Emanuel de Bellis, an assistant professor of marketing at the University of Lausanne, Switzerland, and Anne-Kathrin Klesse, an associate professor of marketing at the Rotterdam School of Management at Erasmus University in the Netherlands.

The researchers conducted 12 studies of more than 13,000 participants, including simulations of a variety of assessment situations in both the laboratory and the field. They also collaborated with a startup platform offering game-based hiring solutions called Equalture.

“We believe this research has implications for the validity and reliability of AI assessment,” the researchers say in their paper, AI assessment changes human behavior. “If people strategically adjust their behavior in line with their lay beliefs about what AI prioritizes, their true capabilities and/or personalities may not be revealed, which may result in distorted decision-making in consequential situations such as college admissions, hiring, and public service provision….

“Therefore, we recommend that organizations identify and address the AI assessment effect in their own assessment practices to ensure that they receive authentic candidate responses.”


The research paper proposes three ways to offset the AI ‘observer effect.’

First, the researchers note regulators have called for more transparency around using AI in the recruitment process. They encourage companies to go further and demonstrate ‘radical transparency,’ which means providing additional guidance on what AI systems can and cannot do.

“Do not just disclose AI assessment — be explicit about what it actually evaluates,” the researchers say in the HBR blog. “Clearly communicate that your AI can and does value diverse traits, including creativity, emotional intelligence, and intuitive problem-solving.

“This might include providing examples of successful candidates who demonstrated strong intuitive or creative capabilities.”

Second, conduct regular audits of candidates’ responses to your AI recruitment assessments, looking for signs of bias in the types of people applying for positions. Are applicants showing homogeneous traits — for example, skewing heavily toward the analytical side?

And finally, hybrid processes that include a human can help compensate for the AI bias, the authors say. Some companies tell prospective employees a human looks at all applications, for example. Others confirm assessments and decisions are made by humans.

The hybrid solution isn’t a cure-all, though, the authors warn.

“One of our studies shows that while this hybrid human assessment does reduce candidates’ tendency to highlight analytical capabilities, it does not eliminate it. To close the gap, you need to train your human hirers to compensate for the AI effect.”


David Gambrill

David has twice served as Canadian Underwriter’s senior editor, both from 2005 to 2012, and again from 2017 to the present.