AI + Human Instinct: The Perfect Blend

Contributors:


Josh Bersin

Founder, The Josh Bersin Company


Matt Poole

Head of Service Development, AMS


Krishna Sai Charan

Vice President, Everest Group


Rebecca Wetteman

CEO and Principal Analyst, Valoir

Artificial Intelligence has grabbed headlines since it exploded into the public consciousness with the launch of ChatGPT. The reality, of course, is that many organizations have been testing AI as a concept for far longer, but recently the conversation has shifted from “AI will do that” to “AI is doing that.”

This shift from test to use has heightened the need for further discussion on the ethical use of AI. Here’s why talent acquisition leaders of all stripes must not forget the essential part humans play in overseeing the role AI takes in the hiring process.

Talent acquisition leaders are facing a critical decision when it comes to adopting AI.

As ChatGPT approaches the second anniversary of its debut, HR and TA leaders are, eagerly if cautiously, exploring new ways in which their organizations can deploy these groundbreaking machine-learning tools to hire new employees. While business leaders marvel at the power and potential of generative AI, and financial officers foresee the bottom-line benefit of leaner staffs, savvy recruiters can already see the hazards of turning their day-to-day hiring procedures over to technology whose capabilities are rooted in generating responses from thin air.

There’s too much at stake when using AI without realizing the legal and ethical consequences of employing these tools indiscriminately and with little or no human supervision. Despite the promises of greater automation, recruiters must remember that AI and its breathtaking capabilities cannot be a substitute for the emotion, compassion and ethical judgment that are essential qualities for attracting and retaining the workforce of today, let alone tomorrow. After all, being hired or rejected for a new role has a major impact on a candidate’s life. The role of the TA leader has never been more critical, and as AI tools improve and are more widely adopted, the TA leader will bear the ethical and practical consequences of how these tools perform.

Talent is an arena where personal connection and empathy are non-negotiable cornerstones, and AI, properly and responsibly deployed, can enhance our lives, says Matt Poole, Head of Service Development for AMS.

AI Has Entered the Recruitment Space

Whilst there are fears that AI will eliminate human interaction from the process, AI can actually enhance the candidate experience. For starters, AI gives recruiters a richer array of options when presenting job opportunities to candidates. Currently, the recruitment process is still rooted in what Poole calls the “trifecta of annoying processes”: the job ad, CV, and cover letter.

Forward-thinking TA teams have already turned to AI to scour these source materials for candidates with the desired job titles, in-demand skills, experience and other factors that make a candidate stand out. Recruiters doing this by hand are typically inundated with applications, leading to the dreaded “CV black hole” of applications that never get a response. AI tools that are tested and proven to be bias-free can cope with a much larger workload, ensuring candidates aren’t missed. Recruiters are also using AI to craft interview questions, write higher-quality job descriptions and adverts, and create multimedia-rich outreach collateral for candidates, all of which are typically time-consuming tasks for TA teams.

In response to this time burden, AMS’ sourcing managers are combining Large Language Models (LLMs) with talking avatars to bring hyper-personalized content to life. By simply inputting a job title and location, the tool generates data-driven personas and custom messaging tailored to each audience. Talking avatars narrate key traits, enhancing engagement through real-time, relatable storytelling. “This AI-driven approach empowers our Talent Acquisition teams to deliver precise, authentic outreach that resonates deeply,” says Poole.

AI tools can also help recruiters engage with candidates during what is typically a lengthy and emotionally taxing recruitment journey, one that requires multiple interviews and often offers few updates along the way. Recruiters may be managing dozens of candidates through a process that can take weeks and sometimes months, during which a candidate may drop out to accept a better job offer. In fact, some recruiters don’t even reach out to rejected candidates, but AI can help bring those candidates closure.

“Once recruiters have decided to put a candidate forward for interviews and an offer, AMS wants to streamline many of the repetitive, manual administrative tasks that sit on the recruiter's desk and frankly bog down their workload,” says Poole.

“The tasks themselves do not have to be complex for AI to have value, but the broad personalization it can bring to even those simple tasks can uplift the whole experience for everybody,” says Poole. “It’s kind of a floor raiser, really.”

Poole and his team are adamant that AI is not a decision-making solution but a tool to aid decision making. “At AMS, our approach is to keep the human in control, in the role of the decision maker,” he says. “Actually, what we want to do is speed up the point at which we can get the decision-maker to the decision. That’s where AI will be most effective.”

As companies have started to explore the power of AI in small-scale trials and pilot programs, some recruiters have realized that AI tools have their share of limitations and challenges, and that these tools require human oversight. Many of these drawbacks are not so much design flaws as fundamental building blocks of the approach that has unlocked generative AI: a feature, not a bug, according to Poole. After all, he says, gen AI tools are designed to be creative, and it is intrinsic to their design that they cannot be 100 percent accurate and reliable. “It makes picking the right tool for the use case incredibly important,” he remarks. Not every problem requires an AI-shaped solution.

As businesses automate more of the process steps in recruiting, two “human centered” parts remain, according to Josh Bersin, founder of the HR technology consultancy The Josh Bersin Company. The first is the scoping and description of the job.

“This is far more than AI-generated job descriptions. A recruiter needs to work with the hiring manager, hiring team, and business leaders to scope the job, understand the role, understand the culture, and force the hiring manager to consider internal candidates, org design, or other issues before simply posting a requisition,” he says.

“The second is interviewing and getting to know the candidate, which includes understanding their cultural fit, skills, interests, and aspirations,” adds Bersin.


TA leaders also need to understand that they are not the only ones using AI — candidates are using machine-learning tools to write cover letters, sharpen resumes and even take job-skills tests. “I attended a client roundtable recently with an oil and gas company and the most important thing to them was not how can we use AI in all of these cool ways. It was how do we know if candidates are using AI to game our interviews or any of our assessment processes?” recalls Poole.

Poole believes that AI adoption will be “meaningful but slowish,” as organizations attempt to get AI right and use these tools for good. “Ultimately in recruitment, we deal with people's livelihood. This is not like counting apples into a warehouse, so we have to treat AI in the right way,” he says.

The Ethical Use of AI

At the forefront of a TA leader’s mind is the need to ensure that the AI models used by vendors in their supply chain are properly scrutinized for bias and fairness and deployed in an ethical manner. And this is a job for humans, says Poole.

“Most of our TA clients at this point are very aware of the EU AI Act and the different US state and local laws, like New York City’s Local Law 144, among others,” says Poole.

Given this patchwork of emerging rules, AMS unveiled the ‘AMS AI Ethics Advisory Board’ in 2024. This new body will help guide AMS and its clients on what they need to consider when deploying AI-powered recruitment tools, especially those that need further testing.

Poole says he has encountered clients who are aware of the ethical and legal concerns of using AI but are confused by the current spate of laws being considered at the local and national levels. Clients have told Poole that this confusion can be a barrier to AI adoption.

“Typically, there’s someone on their team who has an understanding of what the legislation is actually saying and what ‘ethical AI’ actually means in practice, but beyond that many executives and stakeholders need real educating. A lot of clients fail to understand, or just haven’t quite got to grips with, how complex this can be, particularly in global or multi-regional organizations,” says Poole. For example, the EU has passed a thorough piece of AI regulation, although it has yet to fully take effect. Meanwhile, the UK government has decided to take a “wait and see” approach to see how the market self-regulates, whilst in the United States different state legislation sets the bar at different levels.

“If you are a global enterprise and you're trying to apply a use case to your business, there is significant complexity. We’re already talking to clients regularly about this and have had time to consider the implications; we come with a prebuilt perspective that they haven't had the chance to get to yet,” says Poole, who adds that AMS has partnered with TA solution provider Holistic AI for webinars and knowledge sharing on this topic.

Krishna Sai Charan, a vice president at HR consultancy Everest Group, agrees that companies must oversee their use of AI with a human supervisor.

“We are seeing a myriad of use cases where recruitment professionals are leveraging AI alongside intelligent automation for boosted productivity and a superlative candidate experience,” he says. “However, while it is imperative for organizations to leverage AI as an enabler within the recruitment ecosystem, the human touch will continue to be important for creating meaningful connections that technology alone cannot achieve.”

AMS has been running an AI Accelerator in 2024: an internal, cross-functional team charged with bringing AI capabilities into AMS’ tools and service proposition in a responsible, ethical way.

“Our Ethics Board sets overall parameters and governs where our work takes us, and our steering committee and product development teams work with us to bring new capability to life. Separately, we have an AI risk committee which, quite apart from the utility of the use cases, is simply there to make sure that the way we bring the tool to life is compliant, fair and so on,” says Poole. “So, our development of AI tools is very much rooted in robust product development processes that our clients are able to benefit from. They can take advantage of our expertise and work in this area without needing to fund their own investments.”

Talent acquisition teams can take advantage of the power of AI without compromising control, quality, and ethics, according to Rebecca Wetteman, CEO and principal analyst of HR consultancy Valoir. She says that while most organizations today are keeping a human in the loop as they learn more about the effective use of AI, in the future talent teams will be able to manage AI agents to automate and scale many parts of the talent acquisition process. For talent acquisition to leverage AI effectively and safely, she says, teams need to understand the potential bias in models and training data so they can correct for it and take advantage of data-masking capabilities that reduce the possibility of bias.

“It's important to note that AI can be an accelerant to TA practices — but that can mean accelerating good or bad practices. Organizations that have fair, transparent, and ethical TA practices to start with and choose technology that can reinforce those practices can leverage the benefits of AI while managing the risks,” she says.

Ultimately, humans and technology need to co-exist in much the same way they always have, which is with a degree of open-mindedness and what Poole calls the “push-pull” that occurs whenever innovation reshapes something in the workplace. In the same way that the Internet transformed how organizations do business, AI will change the “tasks to be done” and therefore reshape the roles we all play in that, he says.

This will require access to expertise to help ambitious organizations reach their goals. “As we move forward,” Poole says, “let’s ensure that our human values guide the future of AI and not the other way around.”

written by Phil Albinus in partnership with the Catalyst Editorial Board
