In this episode of The Proskauer Brief we are joined by partner Guy Brenner, who leads Proskauer’s D.C. Labor & Employment practice and is co-head of the Counseling, Training & Pay Equity Group, and Jonathan Slowik, senior counsel, Labor & Employment, in the firm’s Los Angeles office. In part one of our insightful artificial intelligence series, we explore what employers need to know about using AI when it comes to employment decisions, such as hiring and promotions. Tune in as we break down key considerations and best practices for navigating the evolving landscape of AI in the workplace and provide essential tips that can enhance your approach to talent management.
Transcript
Guy Brenner: Welcome to The Proskauer Brief: Hot Topics in Labor and Employment Law. I’m Guy Brenner, a partner in Proskauer’s Employment Litigation & Counseling group, based in Washington, D.C., and I’m joined by my colleague, Jonathan Slowik, senior counsel, Labor & Employment, in the firm’s Los Angeles office. This is part one of a multi-part series detailing what employers need to know about the use of artificial intelligence (or AI) when it comes to employment decisions, such as hiring and promotions. Jonathan, thank you for joining me today.
Jonathan Slowik: It’s great to be here, Guy.
Guy Brenner: Over the course of the next few episodes, we’re going to be taking a deep dive into three potential pitfalls of using AI when it comes to hiring and promotions, which we’ll get into in a little bit. But before we go any further, I think it’d be helpful for us to set a baseline by first defining what we mean by AI and then looking at a few of the many practical use cases AI provides for employers and HR professionals. So, Jonathan, why don’t you start by just giving us a definition of AI?
Jonathan Slowik: Certainly, Guy. AI refers to the simulation of human intelligence in machines that are programmed to think and learn like humans. AI encompasses a wide range of technologies and applications that enable machines to perform tasks that typically require human intelligence. Exhibit A—do you know where I got that definition of AI?
Guy Brenner: I’m going to take a wild guess, and I’m going to say it was ChatGPT.
Jonathan Slowik: Your wild guess is spot on.
Guy Brenner: But it wouldn’t be an AI podcast without some dramatic reveal that the thing we just told you was actually written by a robot.
Jonathan Slowik: It certainly wouldn’t. If you prefer a definition that was written by a human, there’s a pretty good one in the National Artificial Intelligence Initiative Act of 2020, which defines AI as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.”
Guy Brenner: All right. So in my mind, AI is some kind of computer program or tool that can do amazing things that seem close to, or actually are, thinking, and that gives us the ability to do things I previously didn’t think were possible, or that were the stuff of science fiction. And I think what we’ve discovered is that, at least with ChatGPT for example, each successive version is becoming more and more powerful.
Jonathan Slowik: Sure.
Guy Brenner: All right. So now that we’ve defined AI, let’s explore some of the use cases for the technology in the hiring and HR context. Jonathan, why don’t you provide some examples?
Jonathan Slowik: Yeah. So one problem we hear a lot is that hiring managers can be overwhelmed with the volume of applications they receive. Technology has reduced much of the friction of applying for a job. It’s common now for applicants to be able to apply for jobs online, and some platforms like LinkedIn and Indeed make it possible for applicants to submit résumés at scale. This Reddit comment sums up what a lot of hiring managers are probably feeling these days: “As a hiring manager I cannot agree more about the insane volume of unqualified and or ineligible applicants that apply for jobs. It has made the hiring process so much more difficult as some people just seem to hit the LinkedIn apply button for anything that sounds interesting.” Guy, what’s the use case for AI in this circumstance?
Guy Brenner: And I think a lot of our listeners can relate to that challenge. So there are a number of companies that offer AI résumé scanners that can prioritize applications for the employer, or even screen some applications out entirely. Some of these products even claim to remove bias from the process by having a dispassionate AI-bot doing the screening instead of a human being with all of their inherent biases. Here’s a claim from the website of one such developer: “AI screening goes beyond keywords and identifies applicants based on aptitude. This feature effectively removes conscious and subconscious human bias from the evaluation process. Human judgment can be influenced by identifying information like gender, location, age, marital or parental status, education, career status, disability, and even resume photos. By sticking to a fixed set of criteria, AI increases the accuracy of applicant selection during the screening process.”
Another way hiring entities are using AI is having chatbots engage with potential candidates before they apply. The theory behind this use case is that these chatbots can curate a more informed applicant pool, and can even nudge candidates without the proper qualifications away from applying, or toward more suitable open positions. Of course, that’s just the first step in hiring. Jonathan, what are other ways AI is being used by employers in hiring?
Jonathan Slowik: Well, even if an employer can cull down the applicant pool, interviewing candidates is extremely time-consuming and expensive. As a result, an employer may be able to interview only a relatively small pool of candidates. So, some employers are turning to AI to conduct the initial interview. People laugh when I say that, but this technology already exists. The currently available versions of this are still a bit primitive and tend to resemble chatbots. For example, the interview may take place in a chat window and consist of a hybrid of text and video, where the AI interviewer asks a question in text and then asks the interviewee to record a video responding to the question. The AI interviewer can then analyze that response, not only for its substance, but for how the candidate presents him or herself by evaluating things like facial expressions and speech patterns.
This is just a personal prediction, but I think this technology is going to get much better in the short term. We’re on the verge of having chatbots that can converse in audio while simultaneously taking in visual information, more or less in real time. OpenAI put out an interesting demo of this on YouTube, showing an AI doing a quick mock interview with a candidate. At the end, the AI gives feedback, which includes among other things, “Try to avoid touching your face so much. It can be distracting and might give off signs of nervousness.”
OK, how about this problem, Guy: applicants lie about their qualifications. How can AI help?
Guy Brenner: Well, sure. There are AI solutions that can attempt to verify résumés by cross-referencing social media. Do applicants hold themselves out to the general public the same way they described themselves to you? Or have they embellished or overfit their résumés to fit your job opening? The video interviewing software you just discussed can perform a similar function. The virtual interviewer can ask targeted questions about the candidate’s experience and analyze the candidate’s voice and facial expressions to evaluate the veracity of their response.
How about this one? Interviews and résumés may provide only limited insight into how a candidate will actually perform in the job. Can AI help employers better predict a candidate’s ability to perform the job?
Jonathan Slowik: Some developers certainly think so. There are plenty of AI-evaluated tests on the market that can provide job fit scores on things like cognitive skills, aptitudes, and even personality or “culture fit.” These tests really run the gamut, and include things like games, traditional cognitive tests, and job skills simulations. For example, the website of one provider features a prompt meant to test the empathy skills of an applicant for a customer support specialist position. Here’s the prompt: “You are on a call when the customer becomes distressed or upset. Record an audio of what you would say to the customer to manage the situation and calm them down.” The AI presumably scores the candidate’s response based not only on the words the candidate used, but on whether they were delivered empathetically and in a way likely to defuse the situation. Guy, let’s do one more use case. How can AI address the problem that employee evaluations are time-consuming, and may suffer from recency bias, favoritism, and other subjective biases?
Guy Brenner: This one is pretty straightforward. There are a number of products that offer the option to have AI prepare the first draft of performance reviews by drawing from objective and subjective sources. This clearly may save time, but some developers also claim it can be more accurate. One developer’s website says their tool “provides more objective and data-driven insights” which “remove biases and enhance the accuracy and fairness of performance evaluations.” So this is obviously an area where there is a lot of innovation, and we’ve attempted to walk through these examples in a way that’s cleareyed about the genuine business case for this technology. But as one might imagine, these tools need to be implemented with care.
Jonathan Slowik: That’s right, Guy, not least because we have existing antidiscrimination laws on the books, and it’s no defense to say “the AI did it.” A number of federal agencies (including the EEOC and Department of Labor) issued a joint statement in April of this year affirming their commitment to enforcing these laws “regardless of whether legal violations occur through traditional means or advanced technologies.” And they identified three types of problems that can occur with the use of AI in particular: first, issues with training data; second, issues relating to model opacity (also called “black box” issues); and third, mismatches between platform design and use. Each of these categories is worth a deeper dive, because without a clear understanding of the features and limitations of these AI systems, it’s possible for well-intentioned employers to inadvertently run afoul of these laws. So over the next three episodes in this series, we’re going to be taking a closer look at each of these categories.
Guy Brenner: Well, thanks, Jonathan. I’m looking forward to talking to you about each of those categories and learning more about what employers need to know about AI. And to those listening, thank you for joining us on The Proskauer Brief today. As developments warrant, we will be recording new podcasts to help you stay on top of this fascinating and ever-changing area of the law and technology.