From Bias to Better Insights: Rethinking Assessments with AI

Assessments have always played a crucial role in how we identify potential, develop talent, and shape future leaders. But with the rapid rise of AI and digital tools, we’re facing both incredible opportunities and real risks. In this episode we’re diving into a subject that sits right at the intersection of innovation and humanity, and asking: how do we harness the power of technology – particularly AI – to build fairer, more human-centered approaches to leadership assessment and development?

Joining Lucy to discuss this is a real expert in the field of leadership assessment – Tom Verboven from The Talent Enterprise, which is part of the Mercer group. Lucy and Tom discuss some of the big questions around assessment and technology, including: 

  • Where do we currently stand in the adoption and effectiveness of AI and technology in assessments? 
  • How can we effectively eliminate bias and noise in assessments, whether human or AI-driven?
  • What is the candidate or participant experience like when interacting with AI and technological assessments?
  • What are the implications of passively collecting data to generate insights on strengths and areas for development?

Chapters

(00:03) Harnessing AI for Leadership Assessment

(05:52) Navigating AI in Leadership Development

(15:45) Ethical Considerations in AI Assessments

(28:21) The Future of AI in Leadership

Contact Tom: https://www.linkedin.com/in/tom-verboven-b85a691/

00:03 – Lucy Adams (Host)
Welcome to HR Disrupted with me, Lucy Adams. Each episode will explore innovative approaches for leaders and HR professionals and challenge the status quo with inspiring but practical people strategies. So if you’re looking for fresh ideas, tips and our take on the latest HR trends, subscribe wherever you get your podcasts. Assessments have always played a crucial role in how we identify potential, how we develop talent, how we shape future leaders. But with the rapid rise of AI and digital tools, we’re facing both some real opportunities and some real risks. So today we’re diving into a subject that sits right at the intersection of innovation and humanity, and we’re going to be answering the question: how do we harness the power of technology, particularly AI, to build fairer, more human-centered approaches to leadership assessment and development? With me to discuss this today is a real expert in the field of leadership assessment, Tom Verboven from the Talent Enterprise, which is part of the Mercer Group. A big welcome to you, Tom.

01:31 – Tom Verboven (Guest)
Thank you so much for having me, Lucy.

01:33 – Lucy Adams (Host)
Oh, it’s a real pleasure. Now tell us where you’re dialing in from today.

01:38 – Tom Verboven (Guest)
From Dubai.

01:39 – Lucy Adams (Host)
Oh, so what’s the temperature out there today?

01:42 – Tom Verboven (Guest)
I guess it’s like 42 or something.

01:45 – Lucy Adams (Host)
Yeah, it’s August. I remember being in Dubai in August, just walking from the car park into the air-conditioned office, and in those few seconds I was just a hot mess. It was just awful. It can be quite painful, can’t it? Anyway, just tell us a little bit about you. I mentioned you work for the Talent Enterprise, which I think is a consultancy firm, isn’t it, that specializes in leadership?

02:13 – Tom Verboven (Guest)
HR consultancy, yeah. HR consultancy: I mean, there’s Mercer, and then the Talent Enterprise is very specific. The core business is assessments.

02:24 – Lucy Adams (Host)
So how did you get into that?

02:28 – Tom Verboven (Guest)
Into the assessment business in general? Because I’ve been in there for quite a while, yeah.

02:34
I think I started just when they stopped doing pen-and-paper personality questionnaires and reasoning tests, where you had to score everything by hand. When I started, the whole cave in our offices was still full of personality questionnaire papers, yeah. So I started at the moment they stopped doing that and began automating the scoring and the reports. That’s how long I’ve been in the business. I ended up in it by accident. I always wanted to become a professional basketball player; that clearly didn’t work. Then I wanted to become a diplomat; that also clearly didn’t work. So, by accident, I ended up in assessments. A diplomat, that’s quite a specific ambition, isn’t it?

03:19
It is. I always wanted to travel the world, yeah, so going into consultancy, that’s also a good opportunity. You can travel. You definitely travel.

03:27 – Lucy Adams (Host)
Yeah, you definitely travel. Um, I wanted to be a ballerina. Two factors: one, I got to nearly six foot by the time I was about eleven, and you don’t see many six-foot ballerinas. And secondly, I wasn’t good enough. So you should have played basketball, at six foot! Exactly.

03:50
OK, so let’s dive into this issue around assessments. Give us your overview of where we currently stand in the adoption of AI and technology in assessments. Where are we? And then maybe we could talk about some of the benefits of using AI, but also some of the risks.

04:13 – Tom Verboven (Guest)
Yeah, so where do we stand now? As I mentioned, I just missed the pen-and-paper era, so I’m not quite that old, but I’ve been in this for over 20 years, and for a long time there was very little innovation. You went from the automation of the scoring and the reporting, and then you had years where it didn’t change a lot. The focus was more on validity, reliability, the robustness of methodologies. Then you saw these assessment platforms, where you build everything into one platform.

04:54
That was not that long ago, and not all companies or consultancies have an assessment platform, but typically it’s one central database where you send out the different exercises, where assessors come in and do the scoring and the storage, where you can do dashboarding, et cetera. That’s actually pretty recent. So the innovation went really slowly, and now it’s like a quantum leap, a tipping point almost, where you see that the focus is on AI and technology and less on validity, reliability and robustness. And you’ve seen some companies, I don’t know if you know the story of HireVue, who started this interviewing with facial-expression analysis. That got a backlash, reputational issues, et cetera, and rightfully so, because they went too early. And you see it from clients too: we get a lot of push. They want AI regardless of whether it’s valid or reliable. They want to show off to their management that they use AI.

06:02
So we really feel that clients are pushing for AI, which is good because it forces us to innovate, but we need to be mindful, of course, that we keep it reliable and, as you said in your introduction, unbiased, et cetera.

06:19 – Lucy Adams (Host)
Yeah, and we’re going to dive into the bias piece. I take your point that it’s not about adopting AI for the sake of adopting AI. It’s got to be about improving the experience, whether that’s how accessible it is, how cost-effective it is, how reliable it is, and what the experience is like for me as a person going through it. You want it to be better, not just more AI. So what are the kinds of assessments we’re seeing now? How is the experience different from the one I went through, that battery of leadership assessments that would take two days? How is it different these days?

07:12 – Tom Verboven (Guest)
So it’s very different. Maybe I should break it down, because I also want to talk about AI in a very pragmatic way. If we break down assessment, you typically have different steps, right? The first one is: you define what you are looking for.

07:32 – Lucy Adams (Host)
What’s the?

07:32 – Tom Verboven (Guest)
ideal leadership profile in a company. How is it aligned with the strategy, the values, the culture, et cetera? So that’s the first step, where it’s still, you know, about defining good leadership, and that’s a debate in itself. If you ask people what good leadership is, you get different answers. It goes with your worldview, et cetera. That’s still classic, you know: competency frameworks, focus groups. There is not a lot of AI there.

08:06
If you ask AI what good leadership looks like, you do get kind of a nuanced picture. You can even challenge it on whether middle-aged men like myself are better at leadership, and you get kind of a nuanced answer. So that’s the first step: knowing what you’re going to assess. Then, based on that, we typically design an assessment matrix: what are the exercises we’re going to do? If we look at strategy, then maybe some reasoning tests, psychometrics, a business case or some role plays, whatever. So that’s the second step. And then there’s the actual assessment. In theory, we can now assess leaders without any human assessor interaction, so basically my job became obsolete. I don’t see it in practice yet, but that’s what we could do. If you look at interviews, they can be totally AI.

09:10
You remember those interviews where you have, like, three questions and you have to talk to a video while it records, and there’s still an assessor in the back? Yeah, but you didn’t have the opportunity to probe, which I think, from a candidate-experience point of view, was super annoying. I mean, it depends who you talk to, probably, as well. But now you see conversational AI, where the avatar adapts the questions based on what you were saying. So that can be automated: AI, an algorithm, whatever. And all the psychometrics and reasoning tests?

09:47
There was already no intervention of an assessor there. And then there’s everything to do with simulation: role plays, business cases. We do see pretty impressive AI tools there, from companies that are more startups, like Fizzy Lemon in the Netherlands, et cetera. They do real role-playing with an avatar that reacts depending on what you’re saying. So it’s ideal for assessing people-management skills, for example, influencing skills or negotiation skills, and you get direct feedback from the AI tool, and it can also rate you. So the whole process can be AI’d, if you want.

10:38 – Lucy Adams (Host)
So, I mean, obviously where I first came into contact with AI from an assessment point of view, particularly with interviews, was that it was quite useful for creating your shortlist, or at least your shorter longlist. It would do the weeding out of the completely unsuitable people. And I think what I’m hearing you say now is that it’s not only taking your longlist down to a shortlist; it’s also providing you with in-depth ratings, assessments and rankings of people, and presenting you with the best candidate you could get from the people who’ve applied.

11:17 – Tom Verboven (Guest)
Yeah, that’s of course a bit the theory, you know.

11:20
And then, because an assessment always has a developmental dimension too, or should have, the next step after an assessment should be coaching, development, whatever. And there you see strong things as well, I think especially on the L&D side, because reliability and validity are a bit less important there. You see really great initiatives if you look at AI coaching. For example, our company is using conversational AI to develop your IDP based on an assessment, which is very strong. So I see my gap is strategic thinking, and I have a conversation with the AI avatar: OK, what can you do? And it asks, what’s your learning style, what exactly is missing? And it probes a bit further to then advise on development tips. That’s very strong. Also, if you look at coaching, I think AI can scale the whole coaching offer, so more people can benefit from coaching. But of course, the whole emotional part, the human nuance, the intercultural sensitivity, that’s something you still need coaches for.

12:38
So it should always be both, and the same goes for assessment. From a candidate experience, I mean, if you apply for a new role, it’s a two-way dialogue. And an assessment is still art and science, so there’s still an art to running assessments.

12:58 – Lucy Adams (Host)
So I want to talk about bias and the risks around that, but let’s carry on, because we’ve started talking about the candidate, or the leader who’s being assessed, and their experience. You mentioned earlier that being asked the same three questions by a so-called robot didn’t feel great. What’s the experience now? Does it feel like we’re talking to a machine, or does it feel like we’re having a conversation with a human being?

13:42 – Tom Verboven (Guest)
It definitely evolves towards you having the feeling that you’re talking to a human being. Of course we’re not there yet, but it feels pretty natural.

13:58 – Lucy Adams (Host)
And are you seeing HR teams and L&D teams replacing people within their organizations with these AI tools yet, or are you seeing it being used to supplement and enhance their capabilities?

14:15 – Tom Verboven (Guest)
Supplement and enhance. I mean, maybe if we have this conversation in a year or two, I’ll have a different answer, so it’s hard to predict as well. I do think we’re going to use fewer assessors and fewer coaches, but the professionals will still exist.

14:40 – Lucy Adams (Host)
Yeah. Because I think that, as HR professionals, we could get quite anxious about our roles disappearing. But I don’t see that with the clients we work with. It does mean that HR, L&D and leadership development professionals are having to change what they offer and move up that food chain, so they’re less administrative, thinking less about logistics; all of that kind of stuff is just being done for them. But it does mean, I think quite excitingly really, that we become the human experts again. We can let go of the things that have perhaps got in the way of that, and we can enjoy giving our expertise as the true people experts, with, as you say, those true nuances that perhaps aren’t there yet with AI.

15:41 – Tom Verboven (Guest)
Yeah, fully agree, we can reinforce.

15:45 – Lucy Adams (Host)
So, thinking about this issue of bias and I’d like to kind of broaden this out, I think a little bit, because there’ll be some people listening to this who are not using AI yet but you know, when we think about assessment, whether that be at points of promotion or hiring, or whether it be for developmental purposes, there is always a risk, isn’t there, that those unconscious biases are present, and of course, there’s even then a risk that we import them into the AI tools that we deploy. How do you feel we can kind of effectively eliminate that bias in assessments, whether it’s human or AI driven?

16:35 – Tom Verboven (Guest)
Yeah. So I think there are two things. There is bias, but there is also noise. You’re probably familiar with Kahneman’s book Noise. You have assessors, and, to give a Dubai example, it’s too hot, there’s no air conditioning, I’m becoming a bit grumpy, and I need to do an assessment. That has a big impact, and AI totally eliminates it. So if we look at noise, it’s clear that AI is fairer, is better. Bias is a bit of a different story, which is less clear-cut. And it’s not about who’s winning, who’s less biased; I don’t think that’s the point. It’s always an ‘and’, and we can reinforce each other, as we discussed. The risk with AI is that the bias is reinforced. As assessors, we’re also biased. I mean, I’ve been in this business over 20 years.

17:41
I cannot say I’m not biased; that wouldn’t be true. But at least you and I are biased differently, no? So that kind of cancels each other out, while an AI system will reinforce its bias, and that’s of course the danger. So to tackle that, you have to train the AI system in the right way. You have to make sure it’s grounded in a framework that is reliable and valid, and test, keep on testing, because it can become a black box where you don’t even know how to explain certain decisions. So there you have to be very careful and mindful. You need that testing of the system, you make sure, if you launch, that you’ve trained it in the right way, et cetera, and you have an ethical framework around how you’re going to use AI. I think that’s important.

18:39 – Lucy Adams (Host)
I remember, years ago, before AI was even a thing, when we still talked about robots, going to a lecture by the head of computer science at a major university here in the UK. She was talking about how there would come a point when robots, which we now call AI, would replace legal assessments, medical diagnosis, accountancy judgments and so on. And someone asked the question: yeah, but what about therapy? Nobody’s going to want to go and sit in front of a robot to get their therapy. And she said something that has stuck with me all these years. She said: it affronts us to think of it, but actually, if the experience we’re getting is cheaper, available 24/7, and the quality of that therapy is consistently better, because it’s not just one therapist who might be having a bad day (your noise point) but the best of therapy drawn from hundreds of thousands of therapists across the world, provided consistently at the same level of quality, then eventually people will get over the fact that it’s a robot, because it is cheaper, more available and better quality. And I think that’s what we’re talking about here, isn’t it?

20:23
From the noise perspective and the bias perspective, I think it’s a really interesting one. As HR professionals, we often see the HR team saying: oh, you know, this is beyond my level of understanding, I don’t get the technology, I’m nervous about getting involved with it. But actually, the earlier we get involved in the design and development of the products we want to use, the better. Because there is a risk, isn’t there? If we’re just dealing with the end product, we can’t be confident about how bias has been dealt with. Whereas if we’re involved in the early stages of creating our own ChatGPT in this space, or whatever, then at least we can have greater reassurance that there are multiple biases going in there, rather than just maybe a bunch of tech bros.

21:21 – Tom Verboven (Guest)
Yeah, I think that’s a very good point, just to make people comfortable. People laugh at me because I’m not tech-savvy at all, so I struggle. But, that said, I’ll give you an example. We’re developing AI-automated reporting for a client. It takes information from the different psychometrics, reasoning tests and other exercises, but we need to provide the input for what the AI is going to give as an output to the end candidate and the client.

22:03
And so it remains very much expertise-driven. You need the expertise to give that input, and you don’t need to be a tech person to do that. Of course, I get help from my software developers, and they make the magic happen, but they need our input.

22:24 – Lucy Adams (Host)
So we’ve got to get involved at the very early stages, from the ground up.

22:29 – Tom Verboven (Guest)
So you cannot leave it to someone who’s not an expert, and not just to the software people. I know what you’re saying, Tom, I know what you’re saying. I don’t know if any software engineers are listening to this podcast, so I’m not worried. Probably not. No, I don’t think so. I think we’re okay.

22:50 – Lucy Adams (Host)
You mentioned ethics a little while back, and that’s the final area I wouldn’t mind exploring, because this has always been an issue around data protection. We are asking very personal questions; people are giving, exposing and making themselves vulnerable with their data and with the insights into their personality traits. Some of the in-depth assessments I’ve done can be really quite probing, and you feel quite exposed when you’re going through them. But you see the person who’s running it, you understand the organization, you understand how your data is going to be used. So I suppose my question would be: how are invisible psychometrics

23:40
integrated into our existing systems? Invisible assessments, you know, built into the systems we already use. And what are the implications of passively collecting data to generate insights on strengths or areas for development? Are there any ethical concerns there?

24:04 – Tom Verboven (Guest)
Definitely some ethical concerns, yeah. So what you see now is that you can do digital behavior analytics. Until now, with a succession-planning or high-potential program, it’s announced, it’s communicated, there’s a town hall to prep people, et cetera, so you know you’re being assessed. These invisible psychometrics are based on emails, digital chats, whatever you can get your hands on, and they already exist. You have companies, I think Humanyze, and there’s also, again in the Netherlands, KeenCorp, who are doing this and using it for cultural analysis.

24:47 – Lucy Adams (Host)
Yeah, so that kind of sentiment analysis: what’s the vibe, how are people feeling? And you can see why that would be attractive. You’re not necessarily having to do the annual engagement survey; you’ve just got a pulse survey without people even knowing it’s happening.

25:03 – Tom Verboven (Guest)
Yeah, exactly. So if you look at those companies, they advocate for privacy, GDPR, et cetera, and it’s still at an organizational level. And it’s not only that.

25:15
You can also think about wearables, these Oura rings or Whoops or whatever, where companies are going to start measuring how well you slept, how healthy you are, et cetera. As long as it’s used at an organizational level, with people knowing that their data is being used, I think that’s fine. But there’s a big risk. Even if they advocate for confidentiality, privacy, et cetera, I’m not sure, hand on heart, that you can trust how it’s going to be used. And I’ve seen things over 20 years in the business.

26:00
They’re small examples, but take the use of 360s. Everyone in the business knows they’re better used for development, not for performance. But I’ve seen it in practice: you do the communication, you say this is for development, your report will not be shared, it’s only for you, you can do whatever you want with it, throw it in the garbage or share it with the public. It’s your report; we’re not going to see it. And then you get pressure from the CEO: let me see Lucy’s report.

26:36
I just don’t trust it. I just don’t trust that the information is used in an ethical way.

26:45 – Lucy Adams (Host)
And I think, you know, there is that. I remember reading some research recently about employees’ views on AI and the ethics around it, and whether they trust their leaders to embrace AI in an ethically responsible way. And actually, the trust is not there. We see trust in corporate leadership dissipating every year, getting worse. So at the point where trust in corporate and public leadership has never been so low, we’re saying: give us your data. Or not even give us your data; we’re just going to find out about you, whether you like it or not. That’s really quite problematic. And my sense is that we haven’t seen the big debate yet. We’ve had a few small scandals, a few small issues, but it’s a debate that hasn’t been had yet.

27:55 – Tom Verboven (Guest)
No. And then you see how some CEOs are bragging about how many layoffs they’ve made thanks to AI. That doesn’t give me any trust.

28:04 – Lucy Adams (Host)
No, exactly, exactly. And so we come full circle, don’t we? On the one hand, we’ve got really fantastic opportunities with AI, and on the other, huge risks. What’s your sense? I hesitate to ask you for predictions for the next few years, because things are moving so quickly. But what is the way forward? What do you see happening in this space in the coming months or years?

28:37 – Tom Verboven (Guest)
So, for the coming months or years, I hope that it’s an ‘and’ story, that the human touch is still there; that it’s not a dehumanizing way of assessing people, and that we work hand in hand with systems that can help us, enhance us, make us stronger, and reduce bias and noise, et cetera. That’s my hope. My hope is also that there will be ethical frameworks, that it’s all grounded in reliability, and that it’s not misused.

29:14
That it’s not like Big Brother getting bigger and bigger, which is scary, or some groups being left out because they don’t fit the AI algorithm, so they don’t get recruited or hired. That’s the scary part. So we have to be vigilant, vigilant, vigilant, and mindful about doing the right thing. But at the same time, it’s an exciting time to be in this field, and we’re here to shape it as well. I know that at my company we have the right people in place to shape it in the right direction.

29:57 – Lucy Adams (Host)
Let’s end on that more optimistic note. Rather than Big Brother, let’s think about the opportunities. Tom, thank you for joining me today on HR Disrupted. It’s been really, really interesting and great food for thought. Thank you.

30:11 – Tom Verboven (Guest)
Thank you so much.
