Jackson Lewis LLP

We get AI for work: An Exclusive Interview with Keith Sonderling, Former EEOC Commissioner

November 14, 2024

Transcript

INTRO

AI is not only the future of technology but also of business: very few organizations are not actively discussing whether and how to strategically implement AI strategies and tools. As AI technology advances at an extraordinary pace, staying ahead of the curve is essential for maintaining competitiveness and innovation.

On this episode of We get AI for work, we talk to our special guest, former EEOC Commissioner Keith Sonderling. Keith shares his perspectives on the benefits and perils of using AI in the workplace, state and federal AI regulations, the importance of testing for bias, and why employers should conduct a validation study before using an AI tool.

Today's hosts are Eric Felsberg, principal in Jackson Lewis's Long Island office, and Joe Lazzarotti, principal in the firm's Tampa office, who together co-lead the firm's AI Group.

Eric, Joe, and Keith, as organizations make decisions on integrating AI technology into their business, the question on everyone's mind today is: What are some essential best practices employers need to know before embarking on an AI initiative as part of their employment process, and how does that impact my organization?

CONTENT

Joseph J. Lazzarotti
Principal and Privacy, Data and Cybersecurity Co-Leader

Welcome everyone to another episode of We Get AI, where we try to bring our listeners insightful commentary on all things AI, with a focus on the workplace. I'm proud to be here with my partner Eric Felsberg.

We're honored this morning to have with us Keith Sonderling, who, as you probably know, served as an EEOC Commissioner from 2020 to 2024. He also served as Acting Deputy Administrator of the Wage and Hour Division at the DOL. He is a very popular speaker and commentator on AI, and we are really looking forward to his thoughts this morning.

Good morning, Keith. How are you?

Keith Sonderling
Former EEOC Commissioner (2020 - 2024); DOL Wage and Hour Division (2017 - 2019)

Good, thank you for having me.

Lazzarotti

You bet, you bet. We wanted to pick your brain on some issues and are looking forward to hearing your thoughts. Maybe a good place to start is this: You've been looking at this from a lot of different perspectives - the EEOC and the Wage and Hour Division. Can you give us a sense of what you are seeing as the benefits and perils of AI in the HR environment for organizations?

Sonderling

Yes. The reason I spent my time on this is that it's the most important issue facing human resources departments moving forward: How are you going to integrate AI technology into your HR functions? And we're past the point now, in almost 2025, where it's a question of whether you're going to use AI in HR. It's how are you going to use it? What purpose are you going to use it for? And how are you going to get those benefits?

To start out on a positive note, the benefits of using AI in HR are really unlimited: it can make employment decisions not only more efficient and more economical, but it can also remove bias, which has plagued employment decision-making - which is the reason the EEOC was created and the reason the EEOC exists.

I've said constantly when looking at this: if the AI is properly designed and carefully used by organizations, it can absolutely remove human bias from employment decision-making by removing all the factors employers are not allowed to base a decision on, such as race, sex and national origin - all the things humans have unlawfully been taking into account - and by designing the algorithms to look only at the skills of the candidate or employee, so those decisions are made on lawful grounds alone. So, there are tremendous benefits in all the different uses: potentially allowing the algorithms to look at the candidate neutrally, and not at their name, their religion, all the things a human can see, whether in an interview or on a resume.

At the same time, just flip what I said: if the algorithms aren't properly designed and can't discount protected characteristics - so those characteristics play a factor - or if they're not properly used by the company that has purchased these programs, the risk is that you could scale discrimination far greater than any individual human could.

Those are the challenges alongside the promises. The promise is really the chance to remove longstanding biases. The peril is that one mistake in the algorithm, or by the user, can scale decisions that used to take humans a long time to make.

Eric J. Felsberg
Principal and Artificial Intelligence Co-Leader

Yes, Keith, I think that's right. On a related note, a lot of employers rely on third-party vendors to provide the AI platform they're using, and we've seen a lot of discussion around shifting liability to vendors when they deploy AI. From an HR perspective, who's responsible if the AI makes a discriminatory employment decision? In your view, how does liability work when the tool is the vendor's?

Sonderling

Here's the very tough answer to that question, which is different than liability in other aspects of a company's business. When you're using AI in HR, whether you're using a vendor or designing it yourself, the company is 100 percent liable for that decision. That's not because the government or class action lawyers necessarily want to pick on companies and give the vendors a free pass. That's not how it works at all. It's because of the statutory limitations.

If you take a step back, what are you asking these tools to do? To help you make, or assist you in making, an employment decision. There's only a finite number of employment decisions: hiring, firing, wages, training, benefits, promotions. At the end of the day, there's going to be an employment decision. And as you know, under Title VII of the Civil Rights Act, enacted in the 1960s, an employment decision in the United States can be made by only three parties: a company, a staffing agency or a union. And that's it. Those are the limits on who can make an employment decision, and those are the jurisdictional limits on who can be liable for these tools.

So, from a government enforcement agency's perspective, you're looking at the decision that was made, and it doesn't matter whether a human or an AI made it: the company, staffing agency or union made that decision. That's where jurisdiction and venue will lie for who's responsible for using these tools.

Lazzarotti

Keith, you live in D.C. As we tape this, it's the morning following the election [11.06.24], interestingly enough. I'm curious, and I know our listeners may be curious as well: What do you think - and this is certainly something that is evolving and will continue to evolve - is the pulse on the Hill around AI? What are you seeing about the direction of regulation, and what might companies expect, and when?

Sonderling

There generally wasn't a lot of interest in AI until ChatGPT and generative AI came out and everyone could use it. The EEOC was one of the first agencies to address this, with the Algorithmic Fairness Initiative back in October of 2021. So, we were really looking at these issues well in advance of other agencies and, of course, Congress.

Since ChatGPT came out worldwide, everyone wants to talk about it. It's the hottest topic: How do we regulate this? There's a lot of discussion about that. When you look at it, I do think it's a bipartisan issue - look at the executive orders President Trump issued while in office about using and developing AI in line with American core values, and then President Biden's executive order, which really focused on some of the civil rights implications. Even in the Senate committees, there's bipartisan interest in getting this right, because I think a lot of people understand that this technology is the future, and the question is how we ensure all these protections still exist as the technology becomes more complicated.

From the executive branch, if you look at the EEOC, the FTC and the other agencies charged with enforcing these old laws - the EEOC's from the 1960s - they have been very strong on ensuring that all the civil rights protections and all the existing laws still apply equally, and on getting past the distraction of "Do we need new laws? Are there even laws?" It goes back to the basics of what I just said: employers are making employment decisions, and that is what these agencies are going to regulate when you're using AI.

I think there's a lot of distraction out there about whether you should use these programs because there are no laws, or because there are going to be laws. You just have to go back to basics: all these agencies have a mandate, and AI is certainly going to impact that significantly. But that doesn't change the fact that a lot of people want to regulate in this space, especially in the HR space.

In the absence of new federal regulation, you're seeing states and foreign governments really get involved in this space. It's interesting if you look at what they're doing: they're saying that using AI in HR - especially in Europe, in the EU - is one of the higher-risk categories of AI use. New York City, with Local Law 144, came out saying, if you're going to use AI in HR in New York, here are additional requirements you're going to need to meet. The same goes for proposals in California and in Colorado. So, you're starting to see common themes come out of these HR-specific AI laws: they're going to require transparency - telling applicants or employees what vendor is being used, how these AI assessments are going to look at their application, how they're going to look at their resume, how they're going to work in their interview. But I think one of the most important themes coming out of these proposals is mandatory bias audit testing, both in advance and as you use those programs.

Felsberg

Just to follow up on that, because I agree, a big part of at least several of these laws is the question of bias. As an employer, how is a company supposed to test for bias? Should it be doing that in advance of deployment? If so, how does it go about doing that?

Sonderling

This is sort of where a lot of these new proposals - whether out of Brussels with the EU AI Act or New York City requiring you to do a bias audit in HR - come in: "Well, how do you do that?" And then it goes back to the principles I've been talking about, which have long existed. The most well-known way to conduct a bias audit in employment is the four-fifths rule from the EEOC's 1978 Uniform Guidelines on Employee Selection Procedures: it gives, and has been giving, employers a clear way to ensure that any kind of employment assessment - because a lot of these AI tools are just that - is done in accordance with these longstanding principles, to see whether it is doing what it's supposed to be doing or is having a disparate impact on certain groups.
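
To make the arithmetic concrete, here is a minimal sketch of how the four-fifths rule is commonly applied: compute each group's selection rate (the number selected divided by the number of applicants), compare it to the highest group's rate, and treat an impact ratio below 4/5 (0.8) as evidence of possible adverse impact. The group names and counts below are hypothetical, and a real bias audit would involve much more than this one calculation.

```python
def four_fifths_check(applicants, selected, threshold=0.8):
    """Apply the four-fifths rule of thumb from the EEOC's 1978
    Uniform Guidelines: flag any group whose selection rate falls
    below 80% of the highest group's selection rate."""
    # Selection rate per group: number selected / number of applicants.
    rates = {g: selected[g] / applicants[g] for g in applicants}
    highest = max(rates.values())
    results = {}
    for group, rate in rates.items():
        impact_ratio = rate / highest
        results[group] = (impact_ratio, impact_ratio < threshold)
    return results

# Hypothetical counts for one step of one selection procedure.
applicants = {"group_a": 200, "group_b": 150}
selected = {"group_a": 60, "group_b": 27}  # rates: 0.30 vs. 0.18

for group, (ratio, flagged) in four_fifths_check(applicants, selected).items():
    note = "  <- below the 4/5 threshold" if flagged else ""
    print(f"{group}: impact ratio {ratio:.2f}{note}")
```

On these illustrative numbers, group_b's impact ratio is 0.18 / 0.30 = 0.60, well below the 0.8 rule of thumb, which under the Guidelines would generally prompt further review or validation of the selection procedure.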

You see places like New York, and California and Colorado in their proposals, start to mandate that. I've argued those are things employers can be doing now, and that a lot of employers have already been doing with assessments. You don't necessarily need to fear some of the new regulations: where they're asking you to do things, there's already a lot of existing guidance. The EEOC put out guidance in May of 2023 on this exact issue, saying, "Here's how you conduct a bias audit when using artificial intelligence; here are the longstanding principles you should follow." And if you voluntarily do that before you're required to by states or foreign governments, you're just going to be in a better position using AI within your organization.

That's something I really advocated for: that sort of self-governance, that self-auditing using these principles - ensuring that before you ever let a tool make a decision about someone's livelihood, you run these bias audits to see whether it's discriminating against certain groups. Technology allows you to do that much faster and much cheaper than before, when somebody would put out a job advertisement and the discrimination would surface only after it had already happened, versus testing in advance.

Felsberg

Yes. Just going back to the Uniform Guidelines for a moment: If there is bias - if we do have evidence of bias under the Uniform Guidelines - there is an expectation, at least arguably, that the tool now be validated. The idea would be - and I always discuss this with employers when we're providing counseling - what is the timing of that validation study? Should you have the tool validated upfront, before you even identify bias, or can you wait until the bias is out there and then decide to have the tool validated? We would love your thoughts on the obligation to validate and also on the timing of that validation.

Sonderling

Obviously, before. Doing it before is certainly the best practice, to ensure before the tool goes live that it's doing what it's supposed to be doing and not discriminating against certain groups. But what's important in the HR space is that each job description, each job advertisement, is going to be different and have different skills requirements. And as those change - as your organization says, "Okay, we actually need four years of that skill instead of three, or three years instead of two, or we're going to add this additional job requirement" - that's when the tool needs to be revalidated. Because that's when you're changing the metrics used to see whether candidates are qualified, and that new metric may be causing disparate-impact discrimination.

So really, as those job descriptions change, as the skills requirements change, as performance reviews change - which happens sometimes yearly in an organization, sometimes when there's a reorganization or a merger or an acquisition - those are really important points at which to test it. But what you're seeing, going back to the common themes in the new regulations, is a requirement for yearly bias audit testing. And I think that's a good thing, because at least it puts a benchmark in: "We validated at this point. It gave us a chance to go back and look at what the job description and skills are. We felt confident that's what it is for this year." At least you have that certainty moving forward and an endpoint for when you'll revalidate - which most employers are not doing now with their current job descriptions.

That's the thing I've been advocating for: Let's not put more burdens on regulating artificial intelligence now and make it so difficult that employers won't want to use it. This technology actually allows us to clean up some of these issues and do bias audits potentially more cheaply and more effectively than the practices we have now.

Lazzarotti

It's interesting to think about the different use cases of AI that are related here - that is, employers using various technologies, such as AI-powered monitoring, to measure performance, and a whole range of other applications that could involve the collection of confidential medical information or other data about employees.

What are your thoughts on those types of systems - maybe from an ADA perspective, confidential medical information, just general privacy - and employers balancing those issues with AI?

Sonderling

Generally, employees don't have a lot of privacy at work, as you know. Now that these software tools, especially for applicants and current employees, are doing employment assessments and asking questions, they need to be designed to ensure they're not soliciting any unlawful information not relevant to the job. We see that in some of the tools that do job interviews: they may ask a question that leads an applicant to disclose confidential medical information that not only needs to be protected by the employer but also has no relevance to the job. And that's an ADA violation in itself. Our first guidance, which we put out in May of 2022, was related to disability discrimination and the ADA - how these tools can't solicit unlawful information and need to be designed so that people with disabilities can use them as well.

Keeping with the privacy theme, you're right: it's much more than just the hiring side. There are programs out there that will do performance reviews, and that will review and monitor all of employees' emails and Slack and Teams messages, to see not only whether they're doing their job but whether they're meeting employer expectations. Although employers are certainly allowed to do that, they can run into issues with the National Labor Relations Board under the National Labor Relations Act if the monitoring prohibits employees from talking about their workplace conditions - not for the monitoring itself. So, the privacy aspect is very important even outside the EEO world, because the same tools you're worried about on the bias side may implicate not only ADA violations but also issues relating to employees' ability to talk about their workplace conditions.

Felsberg

Keith, that's great. We've covered a lot over the last few minutes, and I know we're coming close to our time together today. One of the things we were hoping you could do is leave our listeners with the top three takeaways from your perspective. What would be your top three things employers need to know before embarking on an AI initiative as part of their employment process?

Sonderling

Number one is ensuring policies and procedures are in place relating to the use of AI in HR. And this is not too difficult, because you already have policies and procedures for all your employment practices. An employer needs to go back, take a fresh look and say, "Here's where we're integrating AI, and here's how AI is going to impact those existing policies and procedures," in addition to taking a much more macro approach and having AI governance principles for the whole organization, even outside of employment. You're seeing a lot of the "big tech" companies and the White House coming out with a "bill of rights," in a sense, for employees and consumers using these programs as well.

Number two is the vendor who's ultimately going to design the technology you're going to be using. How are you going to ensure that they're providing not only an algorithm that works and doesn't take protected characteristics into account, but also the training within your organization to ensure that those who are using it are using it properly - and not improperly, in a way that discriminates at scale? You may have an algorithm that works, but employees who are not authorized or not trained may still use it improperly.

Finally, it's keeping up with the changing regulatory environment. There are so many governments across the world that are interested not only in AI but specifically in AI and HR, because you're dealing with fundamental civil rights - and it applies to everybody; it's industry agnostic. But, like I said before, you can get ahead of those trends now by performing those bias audits and making determinations: if disclosure, opt-in and opt-out rights, and bias audits are where all these states and foreign governments are going, you can start picking in advance the ones you think are going to be most relevant for you and get ahead of those potential regulatory changes.

Felsberg

You've given us a lot to think about, and we appreciate that. Thank you very much for joining us; we're honored to have you with us. As this area continues to unfold, there are going to be even more issues than the ones we touched on today. Hopefully, we can do this again, and we look forward to getting together with you in the future. Thanks again. We really appreciate it.

Sonderling

Thanks for having me.

Lazzarotti

Yes, thank you so much.

OUTRO

Thank you for joining us on We get work™. Please tune in to our next program, where we will continue to tell you not only what's legal, but what is effective. We get work™ is available to stream and subscribe to on Apple Podcasts, Libsyn, SoundCloud, Spotify and YouTube. For more information on today's topic, our presenters and other Jackson Lewis resources, visit jacksonlewis.com.

As a reminder, this material is provided for informational purposes only. It is not intended to constitute legal advice, nor does it create a client-lawyer relationship between Jackson Lewis and any recipient.