The Human Touch in AI-Driven HR: Finding the Right Balance

StoryTiling Co-Founder and Chief Scientist Aliaa Remtilla explores the benefits and potential pitfalls of using AI, offering a roadmap for HR professionals to use AI as a complement to, rather than a substitute for, human engagement and decision-making.


I love AI – it's become an integral part of my daily routine.

From ChatGPT for brainstorming sessions to specialist tools like Humantic.ai that personalize my approach to networking, AI has saved me time and improved the quality of my work. I'm such an 'AI advocate' that I've even been featured in an article about founders who use AI in innovative ways.

My love of AI has recently gotten me thinking about its use in the world of HR. Can AI supercharge our work? Are there risks to be mindful of? Yes. And yes.

Here’s the summary of my personal opinion on this:

  • AI is only ever going to be as good as the inputs that go into it. And if we rely too heavily on its outputs (without recognizing its limitations), its value diminishes.

  • AI needs humans (at least for now). If our work in ‘Human Resources’ is to still feel human-centred and authentic, AI simply can’t replace us completely.

  • But AI can sure help save us time - and improve the quality of our work - if we use it strategically.

Let me explain more about why I feel this way - with a deep dive into AI applications (and watch points) in the field of recruiting.

Revolutionizing Recruitment with AI

Let’s start with ‘the dream’. Imagine a world where hiring is not just efficient but also profoundly fair and personalized. A world where AI tools seamlessly integrate into our HR processes, transforming the way we attract, engage, and retain talent.

Here are some of the ways this could work (there are genuine tools that already do this stuff):

  1. AI in Resume Screening: Imagine a system that goes through resumes in seconds, finding the perfect match for the job. That's what AI-powered Applicant Tracking Systems do, reducing the grunt work and bringing the best candidates forward.

  2. Finding the Best Talent: AI doesn't just wait for candidates to come knocking. It actively searches across platforms, identifying those who fit the bill perfectly, even if they haven't applied.

  3. Always Available Chatbots: Candidates have questions, and AI chatbots are always there to answer them. This 24/7 assistance improves engagement and keeps potential hires informed and interested.

  4. Insightful Video Interviews: Beyond words on a page, AI analyzes video interviews to get a sense of a candidate's personality and suitability for the role, offering insights that resumes can’t.

  5. Predicting Success: Using data, AI predicts who’s likely to excel in a role, ensuring companies make informed hiring decisions.

  6. Promoting Diversity: AI helps remove unconscious bias by focusing on skills and qualifications, making the hiring process fairer for everyone.

  7. Enhanced Onboarding: AI isn’t just for hiring. Platforms like StoryTiling personalize onboarding for new employees, ensuring they feel welcome and become productive members of the team quickly.

This sounds amazing, eh? In this utopia, the recruitment process is not a tedious task mired in paperwork and processes but a dynamic, engaging journey.

The hairy underbelly

Unfortunately, it’s just not so simple. The dream is compelling - but AI has its challenges too. We can’t simply integrate AI and hope that all our current gaps will resolve themselves. The AI challenges that are raised most frequently are:

  1. The risk that it replaces human jobs; and

  2. Questions around data privacy.

These concerns are valid and practical, and they can be mitigated with careful planning. For instance, AI might replace some types of work - like scheduling calls in a calendar - but create others - like managing the data sets that inform AI’s activity.

The more challenging concerns (in my mind) are the ones that are less frequently discussed. One is the risk of losing authentic human connection if too much of the candidate engagement process gets taken over by AI. Chatbots can provide information, but they can't replicate the warmth of human interaction or the nuanced understanding of a skilled HR professional. And many candidates can tell it’s an AI they’re talking to…and they don’t like that! Using AI for candidate engagement and onboarding can help by saving us time and increasing the number of candidates we can engage with - but it risks stripping the hiring process of the personal touches that make a candidate feel genuinely valued.

The other worry I have about AI is that we set it up as ‘objective’ and ‘unbiased’ and have expectations that using it will magically rectify the subjectivity of human opinions. The problem is that AI systems are only as unbiased as the data they’re trained on! If the data used to train the AI is, itself, imbued with human prejudices, this can lead AI to perpetuate the very biases it's meant to eliminate.

The predictive power of AI is a double-edged sword.

Unpredictable Outcomes - the Anthropology bit!

And still, I think it’s worth giving AI a shot. With our eyes open. And with careful reflection throughout the process. I recently read a fascinating ethnography that tells the story of how a company decided to integrate AI. What it revealed is that the very process of setting up the AI tool opened up really basic ethical questions around what people at the company believed were ‘fair’ hiring practices. These were conversations that had never happened before! I think this is a fascinating example of the new types of conversations that AI has the potential to open up…so let’s dig into it a bit. Here’s what happened:

Elmira van den Broek is an Assistant Professor at the University of Amsterdam who spent 7 months in 2018/2019 with a large multinational company in Europe that was implementing AI for the recruitment process of all its European graduate trainee programs. With 100 positions across 4 different programs and 10 different locations, these spots were highly coveted - more than 10,000 candidates applied each year!

With so many different applications to sift through, an AI application was implemented that used data science, neuroscience and machine learning to predict who should be hired - in a way that removed subjectivity and bias from the process. Why try to make the recruitment process more fair? As an HR manager at the company said,

“It is part of a wider strategy to make sure that we have diversity within our company. And not just diversity like gender and nationality, but actually diversity of thought” (p. 3).

Now, before implementing the AI, it was pretty widely accepted throughout the company that humans had biases that needed to be mitigated. That task fell to the HR team, who facilitated ‘blind’ resume assessments, ran ‘unconscious bias’ training for hiring managers, and coached them on how to assess candidates fairly. The need to be ‘fair’ was universally acknowledged, and it was understood that human bias undermined true ‘fairness’ - but the notion of ‘fairness’ itself wasn’t under debate.

This changed as the tool was implemented. The AI team built a hiring algorithm that matched candidates’ traits against those of top-performing employees to identify who would be most likely to succeed if hired. As van den Broek observed the implementation of the AI tool, she witnessed the very notion of ‘what is fair’ come under debate. The HR team encountered:

  • The ongoing need for human oversight to make the final decision (with the algorithmic recommendations in mind)

  • The challenges with fixed thresholds - should candidates who scored on the borderline still be given a chance?

  • Candidates feeling like they didn’t get a fair chance to prove themselves with the neuroscience ‘games’

  • Even a candidate creating a second account with a new email address in an attempt to “game the system”!

More interesting were the debates that ensued between managers and the HR team. A sales manager was frustrated that he couldn’t hire his current intern (who had proved very successful during the internship) because the intern scored only 30% on the AI assessment! The sales manager argued:

“Our human assessment should be leading. If the person is super great and nails that but doesn’t pass the AI assessment, how do you explain that?”

Another manager expressed concern that using the algorithm amounted to “cloning people”, since everyone hired would share similar traits. There was a risk that diversity would actually be reduced - that, for instance, the company would only hire ‘leaders’.

And toward the end of the study, a presentation of analytic results demonstrated that the top-scoring candidates (those scoring over 92%) were all ultimately rejected by human assessors and did not receive any hiring offers!

A debate emerged that had never been up for question before: what IS fair? For the HR manager, unfairness stemmed from the AI’s invalid assessment of the candidates. But for the data analytics manager, the human assessors were the problem!

Before implementing AI, everyone agreed that fairness was important - and there was consensus that algorithms would be able to remove bias. But the implementation of the tool itself made it clear that there was much more to discuss and negotiate in terms of what real ‘fairness’ might look like.

Why does this all matter? And where to from here?

I don’t think that this European multinational company expected that their adoption of AI would prompt such a deep debate around the notion of fairness - the expectation was that AI would solve this problem! For me, this highlights just how ‘new’ AI remains. We still don’t fully know what will unfold when we choose to integrate AI in a significant way. It’s exciting! We’re at a new frontier and get to be part of shaping our own future.

After doing all this reading and research into AI in the field of recruiting, here’s my advice for recruiters considering the adoption of AI tools:

  1. Define Your Objectives: Before integrating AI into your recruitment strategy, pinpoint what you're aiming to achieve. Whether it's streamlining resume screening or enhancing candidate engagement, clarity about your goals will guide you in leveraging AI effectively.

  2. Complement, Don't Replace: View AI as an augmentation of your recruitment team, not a replacement. Let AI handle the initial heavy lifting, but ensure that human judgment plays a central role in final decisions. This approach maintains the personal touch essential for successful recruitment.

  3. Monitor Data Quality and Bias: Be proactive in managing the data your AI systems use. Ensure the data is clean and regularly audited for bias, as the outputs of AI are only as good as the inputs. This diligence helps in achieving fair and objective hiring practices.

  4. Personalize the Candidate Experience: Use AI to enhance efficiency but strive to keep the recruitment process personal and engaging. Automated processes should not detract from the candidate's experience of your company's culture and values.

  5. Adapt and Learn: The AI landscape is rapidly evolving. Keep abreast of new developments and be ready to adjust your strategies. Feedback from candidates and hiring managers can offer invaluable insights for refining your AI approach.

In sum, AI presents an opportunity to reimagine recruitment, making it more efficient, fair, and engaging. However, its true value is realized when we use it as a tool to enhance, rather than replace, the human elements of our work. By approaching AI with a thoughtful and balanced perspective, we can harness its power to enrich the recruitment process while preserving the human connections that are the heart of HR.
