The use of artificial intelligence in HR processes is a new, and likely unstoppable, trend. In recruitment, up to 86% of employers use job interviews mediated by technology, a growing portion of which are automated video interviews (AVIs).
In an AVI, job candidates are interviewed by an artificial intelligence: they record themselves on an interview platform, answering questions under time pressure. The video is then submitted through the AI developer’s platform, which processes the candidate’s data. This can be visual (e.g., smiles), verbal (e.g., keywords used), and/or vocal (e.g., tone of voice). In some cases, the platform then passes a report interpreting the job candidate’s performance to the employer.
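To make this description concrete, the following is a minimal, hypothetical sketch of how such a platform might turn the three data channels into a report. The feature names, weights, and formula are our own illustrative inventions, not the method of any real AVI vendor:

```python
from dataclasses import dataclass

@dataclass
class CandidateFeatures:
    """Illustrative feature channels an AVI platform might extract."""
    smile_ratio: float     # visual: fraction of frames where a smile is detected
    keyword_hits: int      # verbal: count of employer-chosen keywords in the transcript
    pitch_variance: float  # vocal: variability in tone of voice, normalized to [0, 1]

def score_candidate(features: CandidateFeatures, keywords_expected: int) -> dict:
    """Combine the three channels into the kind of opaque summary report
    an employer might receive. Weights and formula are invented."""
    verbal = min(features.keyword_hits / max(keywords_expected, 1), 1.0)
    report = {
        "visual": round(features.smile_ratio, 2),
        "verbal": round(verbal, 2),
        "vocal": round(min(features.pitch_variance, 1.0), 2),
    }
    report["overall"] = round(sum(report.values()) / 3, 2)
    return report

# Example: a candidate who smiled often but used few of the expected keywords.
print(score_candidate(CandidateFeatures(0.8, 2, 0.4), keywords_expected=10))
```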
The technologies used for these videos struggle to reliably capture a candidate’s characteristics. There is also strong evidence that these technologies can contain bias that excludes some categories of job seekers. The Berkeley Haas Center for Equity, Gender, and Leadership reports that 44% of AI systems are embedded with gender bias, with about 26% displaying both gender and race bias. For example, facial recognition algorithms have a 35% higher error rate when identifying the gender of women of color than for men with lighter skin.
But as developers work to remove biases and increase reliability, we still know very little about how AVIs (or other types of interviews involving artificial intelligence) are experienced by different categories of job candidates themselves, and how these experiences affect them. This is where our research focused. Without this knowledge, employers and managers can’t fully understand the impact these technologies are having on their talent pool or on different groups of workers (e.g., by age, ethnicity, and social background). As a result, organizations are ill-equipped to discern whether the platforms they turn to are truly helping them hire candidates who align with their goals. We seek to explore whether employers are alienating promising candidates, and potentially entire categories of job seekers by default, because of varying experiences of the technology.
Our ongoing research starts with a particular demographic: young job seekers from different backgrounds. We chose this demographic because they are likely to encounter AVIs as one of the first steps in their initial assessments as they search for their first job. Our initial interviews were conducted with 20 job candidates of different races and ethnicities, largely from Great Britain, including a mix of first-generation university graduates. We asked them to reflect on their behavior, thoughts, and feelings at the time they were shortlisted for an interview, during the interview, and post-interview. We analyzed their understanding of the hiring technologies and of the interview process as explained by the hiring platforms they were instructed to use. We also studied hundreds of pages of documents, websites, and reports used by platforms to share information about the interview process, the data extracted from candidates, and how it was used.
This research, albeit in its initial stage, reveals some interesting findings and already offers a plethora of case studies, some illustrated below, that we have used to inform recommendations to employers and hiring platforms. In particular, our research uncovered four key ways young job candidates experience AVIs.
AVIs Are Hard to Understand
First and foremost, we found that job candidates were confused about the type of interview they were asked to undertake, and more specifically, the type of AVI involved. Further, they often did not know how they were going to be assessed by the AVI; some thought, for example, that facial recognition was involved when in that particular case there was none. This lack of understanding reflects widespread concern that job seekers are misinformed, so much so that some governments are starting to issue legal advice to employers.
A good example of this confusion came from Alex (we use pseudonyms throughout), a white British student from a middle-class background who had convinced himself that the algorithm played a far more limited role than it did in reality:
“I think that [Company Name] does have algorithms. So, you will be ranked on some things like tone of voice and things like that. But I feel like because that’s got quite a lot of backlash, a lot of companies now are saying that they’re all going to be viewed by a person and the algorithms won’t kind of make a difference to your outcomes. So, yeah, I imagine most of the time someone will watch your interview.”
To help address this, we built a spectrum that illustrates the different types of interviews candidates might undertake. We hope that this categorization might help young job seekers, employers, and platforms develop a common language when speaking about the type of job interview that will take place, so candidates understand what the role of the AI in the interview process is. We call it a “depersonalization spectrum” because as the role of AI becomes more prominent, personal connection fades away.
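As a rough illustration of how such a spectrum could be made explicit, the sketch below encodes one as a simple enumeration. The category labels borrow the AI-passive, AI-assisted, and AI-led distinction discussed later in this piece; the wording of each description is invented for illustration:

```python
from enum import Enum

class InterviewType(Enum):
    """One way to encode a depersonalization spectrum: personal
    connection fades as the role of AI grows."""
    HUMAN_LIVE = "Live interview with a human; no AI involved"
    AI_PASSIVE = "Recorded interview; AI only stores and transmits, humans review"
    AI_ASSISTED = "AI pre-screens or ranks recordings; humans make the decision"
    AI_LED = "AI scores the candidate and drives the outcome"

def disclosure_line(kind: InterviewType) -> str:
    """A one-line, up-front disclosure a platform could show candidates."""
    return f"Interview type: {kind.name.replace('_', '-')} ({kind.value})"

print(disclosure_line(InterviewType.AI_ASSISTED))
```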
Feelings of Humanity are Diminished
Because many job-seekers did not understand the technology that was being used, their default was to perform in a rigid way — holding a fixed gaze, a fake smile, or unnatural posture; speaking with a monotonous voice; and holding their hands still. They felt they had to behave like robots.
For example, Mo, a 21-year-old first-generation university graduate from an ethnic minority group, was confused about how to look professional and started behaving rigidly:
“In order to adapt to the AI, I made sure that my hands were as still as possible. … I maintained deliberate eye contact with the camera of my laptop and I spoke with a rather monotone tone so that the AI can pick up on what I’m saying, because the AI searches for keywords that the company is looking for interpreting to the algorithm.”
We also noticed that the stronger the role AI played in the interview, the more candidates felt depersonalized and talked about their own fixed, rigid, or robotic behaviors.
As a result, and often because of a lack of understanding of what was required of them to pass the interview, candidates felt like their humanity was diminished.
AI Technology Is Glorified
Interestingly, this diminished humanity wasn’t always perceived as a negative thing by individuals in our study; some people noted that the technology was a more effective and efficient way of screening than a human interaction. Many also expressed the belief that the objectivity embedded in AI technology is superior to subjective human decision-making. As a result, job-seekers saw their changed behavior as an inevitable part of the recruitment experience.
We refer to this as a glorification of AI; candidates were OK with behaving rigidly and wanted to “please” the technology.
One of the key factors behind this glorification was the messaging they received from the hiring platforms that mediated the relationship between the employer and the candidates. Consistently, the three hiring platforms we analyzed elevated the selective, unbiased power of their technology and overplayed the validity and reliability of the results, with descriptions like:
“Game-changing technology designed to combat interview bias and ensure fairer, more effective and efficient hiring decisions.”
“You can remain confident that every new hire represents the best possible decision, regardless of age, ethnicity, gender, or sexual orientation. In turn, this leads to a more diverse and inclusive workplace, and a more engaged, innovative and successful business.”
As a result, candidates believed that if they were not selected, it really was because of their own personal characteristics — and not because the technology might be underdeveloped, biased, or capturing data inaccurately. In fact, in most of the interviews, the candidate did not know what kind of data were captured at all.
Further, we observed that, while hiring platforms spent a lot of energy persuading employers with (we believe) unrealistic descriptions of the reliability and validity of the interview technology, they did little to give candidates an accurate expectation of what data the technology was going to use and the limits of the data analysis. As a result, candidates found themselves in front of a “black box” when it came to understanding how they were assessed.
AVIs Are Emotionally and Cognitively Exhausting
In the end, the emotions that resulted from candidates behaving in the way they believed the AI required, despite it being highly unnatural, were energy-depleting both emotionally and cognitively. This was a widespread experience in our sample, and it is best illustrated by Eryn, a young woman of color from an underprivileged family, who told us:
“Oh my god, the amount of hours I’ve spent on interviews at this stage, to still not have a graduate role is so frustrating. And also when you know that they haven’t even seen it … All you want to do is get a job, and you’re putting in all this effort and so much work, to then just be rejected before a person’s even spoken to you. It’s just really demoralizing.”
This resulted in some candidates giving up on applying to jobs that required them to pass an automated job interview. For example, Elliott, a 21-year-old first-generation graduate from a lower-income background, said:
“Since I don’t perform well in the video interviews … I don’t really care that much. I do not want to invest loads in them ‘cos I know I’m probably gonna cock it up anyway. … But it’s making me think I want to work for a smaller company, just because these big companies obviously have video interviews because they have so many applicants, and they’re not, probably not, that personal anyway.”
How HR Managers and Platforms Can Improve the AVI Experience
To improve the interviewee experience, we endorse the use of a “glass box” approach (as opposed to the current “black box”) for HR interview platforms and managers, as we know that how technology is explained has an impact on how it is experienced.
Both groups need to ensure that candidates using the technology understand how AVIs function from the outset. In addition to using our depersonalization spectrum to get on the same page, we suggest that they adopt rigorous standards in explaining the technology. The explanations to candidates about what happens before, during and after the interviews should be:
Understandable.
Candidates and employers need to understand the AVI’s function in the interview: how it works, what data are collected, how the data are used, and by whom.
Interpretable.
Explanations need to be provided in clear, unambiguous terms, and need to be fully representative of the process that underpins the AVI.
Transparent.
The role of AI within the AVI needs to be specified: whether it is AI-passive, AI-assisted, or AI-led. A candidate should also know whether the AI is making the final decision or whether a human is going to work with the data. (A minimal sketch of such a disclosure follows below.)
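To show how these three standards could translate into practice, here is a minimal sketch of a machine-readable disclosure a platform might publish to candidates before the interview. The record structure and field names are hypothetical, not an existing standard:

```python
from dataclasses import dataclass

@dataclass
class AVIDisclosure:
    """Hypothetical pre-interview disclosure covering the three standards:
    understandable (what is collected and who uses it), interpretable
    (a plain-language summary), and transparent (the AI's role and who decides)."""
    data_collected: list[str]      # e.g. ["video recording", "audio transcript"]
    data_users: list[str]          # who sees the data and the resulting report
    plain_language_summary: str    # clear, unambiguous description of the process
    ai_role: str                   # "AI-passive", "AI-assisted", or "AI-led"
    final_decision_by_human: bool  # does a person make the final call?

    def to_candidate_notice(self) -> str:
        """Render the disclosure as the text shown to the candidate."""
        decider = "a human reviewer" if self.final_decision_by_human else "the AI system"
        return (
            f"{self.plain_language_summary}\n"
            f"We collect: {', '.join(self.data_collected)}.\n"
            f"Seen by: {', '.join(self.data_users)}.\n"
            f"AI role: {self.ai_role}. Final decision is made by {decider}."
        )

print(AVIDisclosure(
    data_collected=["video recording", "audio transcript"],
    data_users=["hiring platform", "employer's recruitment team"],
    plain_language_summary="You will record timed answers; software ranks transcripts by keywords.",
    ai_role="AI-assisted",
    final_decision_by_human=True,
).to_candidate_notice())
```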
HR managers and hiring platforms should also work together to develop clearer guidelines for candidates, including the following:
Appropriate, prompt feedback.
Employers (and hiring platforms) should offer structured and constructive feedback to job candidates, giving them a better understanding of the strengths they can hone going forward. Research participants saw personal development as a key aspect of the interview process, but speaking to a camera may not give them the same developmental cues and feedback. With feedback, many job candidates could see the time spent on video interviews as an investment, since the process would offer them a way to enhance their personal development.
Privacy and informed data consent.
Hiring platforms inevitably deal with personal data. Employers and platforms should request consent from users to collect and keep their data, and should inform candidates about the ways in which their data will be used. They should also review and clarify the legal basis for recording candidates during job interviews and ensure their practices keep pace with public expectations and evolving legal frameworks.
Support systems for candidates.
Employers could help candidates better understand their hiring platforms and use them effectively by creating a series of support tools and other information resources for candidates. They can, for example, show videos of successful candidates (not just actors), or offer facilitated workshops on how to look professional on camera, where candidates can ask questions. In other words, show care for the talent pool and sensitivity toward the needs of candidates from different backgrounds and diverse groups.
. . .
Broadly, our early research suggests that young job seekers’ experience of AI-based job interviews is poor. All candidates experienced some level of confusion. The perception that they are “processed” through technology at a crucial stage of their working lives particularly affects young job seekers from less privileged backgrounds, who might have an accent, use less formal expressions or tone of voice, or even be less confident about how to look professional in front of a camera.
We also see a lack of attention by the platforms to their ultimate users (the young job candidates themselves), as the platforms seem to pay greater attention to their relationship with the employers, who are their source of revenue. We urge developers and others to be clearer with young job seekers about how their products work, including a more honest description of the technology as not yet perfect and still in development. This can’t just be done in the small print.
Ultimately, we advocate for employers and platforms to put in place support systems for young job candidates by offering constructive feedback and opportunities for self-development.