Jackson Lewis LLP


We Get AI for Work: Tackling the Challenges of Diverse AI Systems


November 8, 2024

AI is a frequently used term that is only sometimes fully understood in the workplace or legal context. Since no single type of AI performs all functions, employers must identify which type or subset of AI their organization is using or considering, such as generative AI, machine learning, predictive analytics, or natural language processing. Additionally, understanding how various laws and regulations may impact AI's use is crucial.

Transcript

INTRO

While it has become commonplace to see the term AI used in an infinite variety of business, technology and cultural contexts, its precise definition and implications are not always fully understood in the workplace or in the law. And when it comes to adoption, since no single type of AI performs all functions, employers must identify which type or subset of AI their organization is considering, or already using: generative AI, machine learning, predictive analytics, or natural language processing. So understanding how various laws and regulations may impact AI's use is crucial.

On this episode of our AI podcast series, We get AI for work, we delve into a deeper understanding of the different types of AI and the evolving legal landscape that is shaping and being shaped by AI.

Today's hosts are Eric Felsberg, principal in Jackson Lewis's Long Island office, and Joe Lazzarotti, principal in the firm's Tampa office, who together co-lead the firm's AI Service Group.

Eric and Joe, given the expanding use and adoption of diverse types of AI and similar technologies, and the inconsistent approaches state regulators are taking by focusing on different aspects of AI use, the question on everyone's mind today is: how can organizations strike a balance between AI reliance and local law compliance, and how does that impact my organization?

CONTENT

Joseph J. Lazzarotti
Principal and Privacy, Data and Cybersecurity Co-Leader

We're here on our next installment of our We Get AI podcast. I'm Joe Lazzarotti. I'm in our Tampa office and I have the pleasure of being here with my partner Eric Felsberg. Today, we wanted to talk a little bit about what exactly is AI.

We're getting a lot of questions from clients who want to have a policy to provide some guidelines around the use of AI. And one of the questions that comes up is, "What kind of AI are you using?" Are you using more AI machine-learning types of technology? Are you using generative AI like ChatGPT?

Maybe one place to start with that, Eric, is talking about the definition of how we think about those two categories.

Eric J. Felsberg
Principal and Artificial Intelligence Co-Leader

Thanks, Joe. Yes, I think that's right. What's been on the scene most lately is generative AI, but AI has been around for a while. The traditional AI that a lot of us are used to typically focuses on a specific task: it's designed to respond to a particular set of inputs, it has the capacity to learn from data, and it's used to make predictions. A lot of data analysis and predictive analytics - that's really the territory of traditional AI.

To think of an example of how traditional AI would be used in the workplace - and a lot of employers may be familiar with this - think about a job vacancy. There are a thousand applications you've just received in response to that job vacancy announcement, and you use traditional AI to help you vet those thousand applications. Essentially, what it may be doing is searching each application and looking for content matches - those that most closely match the job posting or the job description. So, it's really analyzing data and making predictions. That's traditional AI.
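To make the "content matching" idea concrete, here is a minimal sketch in Python of scoring applications by keyword overlap with a job posting. It is purely illustrative - an assumption about how such a tool might work at its simplest, not any vendor's actual algorithm - and the candidates and posting are hypothetical.

```python
# Toy "content match" scorer of the kind traditional AI screening
# tools conceptually perform. Real products are far more sophisticated.
import re

def tokenize(text: str) -> set[str]:
    """Lowercase a document and split it into a set of word tokens."""
    return set(re.findall(r"[a-z']+", text.lower()))

def match_score(application: str, job_description: str) -> float:
    """Fraction of job-description terms that also appear in the application."""
    app_terms = tokenize(application)
    job_terms = tokenize(job_description)
    if not job_terms:
        return 0.0
    return len(app_terms & job_terms) / len(job_terms)

job_posting = "Seeking a payroll analyst with Excel, reporting and auditing experience"
applications = {
    "candidate_a": "Payroll analyst for five years; built Excel reporting dashboards",
    "candidate_b": "Retail manager with scheduling and customer service experience",
}

# Rank candidates by overlap with the posting, highest match first.
for name, text in sorted(applications.items(),
                         key=lambda kv: match_score(kv[1], job_posting),
                         reverse=True):
    print(f"{name}: {match_score(text, job_posting):.2f}")
```

The core idea is the same as what Eric describes: the tool analyzes the data it is given and predicts which applications best fit the posting.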

Generative AI - this is the one that lately has been receiving all the buzz. Generative AI is an AI platform that we use to create something new. It's also trained on data and learns from underlying patterns. As Joe mentioned a moment ago, ChatGPT from OpenAI, their language prediction model, is an example of generative AI.

If you think about it in contrast to traditional AI, what would be a use case for generative AI? Sticking with the same very simple example I gave: you have a job vacancy announcement, you have a thousand applications, and you're using traditional AI to help you vet those. But if you were trying to create an application form or create a job posting, that's where generative AI can come in. If it learns from a certain set of data we have about the qualifications and the like that we look for in a successful job applicant, how do we create a job posting? Can we create an application form? That's where generative AI would come in. It would help you create new content.
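As a minimal sketch of that generative use case, the snippet below drafts a job posting from a list of qualifications. It assumes the OpenAI Python client (pip install openai) and an OPENAI_API_KEY in the environment; the model name and prompt are hypothetical choices, and client details change over time, so treat this as a sketch rather than a recommendation.

```python
# Illustrative sketch: asking a generative AI model to draft a job
# posting from qualification data. All specifics here are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

qualifications = [
    "5+ years of payroll experience",
    "Advanced Excel and reporting skills",
    "Experience supporting internal audits",
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical choice; any chat model would do
    messages=[
        {"role": "system", "content": "You draft clear, neutral job postings."},
        {"role": "user", "content": "Draft a short job posting for a payroll "
            "analyst requiring: " + "; ".join(qualifications)},
    ],
)

print(response.choices[0].message.content)  # the generated posting
```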

Lazzarotti

That's a great example. The idea is thinking about how we take a bunch of data and obtain some predictive value from it - have the AI learn from that data, maybe as you introduce new data, and get some helpful information, insights or predictive value about the applicants, as an example - versus creating or coming up with new content based on content that you feed into the algorithm, the large language model or whatever model you're using. That [the latter] would be more generative AI. Other examples might be deepfake technology - generating someone's voice or video with generative AI.

In terms of how that [generative] technology is being used, I want to get your view on this too, Eric. How are we seeing the laws develop? Because it seems to me, when I look at some of the statutes and regulations that are being drafted and some that have passed, that in some cases the law is really focused on the more traditional AI models you talked about, while others are trying to address some of the risks that come with generative AI.

For example, on the generative AI side, there's a statute that was passed in Tennessee trying to help protect the image of celebrities. There's a somewhat broader statute, similar in vein in terms of the use of one's likeness, in Illinois, where the concern is that we want to be careful about creating someone's image and using it in a way that can hurt them or for financial gain. But what are you seeing in terms of the traditional AI approach, just at a high level at this point?

Felsberg

It's actually pretty interesting to look at these different jurisdictions and try to see what the legislature in each particular jurisdiction is concerned about. With traditional AI, one of the concerns we always have whenever you're doing any sort of data analysis - in the employment context, making employment decisions - is the issue of disparate impact and bias.

Most notable in recent memory is what they call the AEDT law in New York City. That's the automated employment decision tool law, and it really focuses on the question of disparate impact. Without going through all the requirements of that particular New York City law, one of its primary requirements is that any employer who is - and I'm paraphrasing - using an AI platform as an automated employment decision tool - think about the example I gave earlier, where you're using it to make hiring decisions based off of an applicant pool - has to not only perform a bias audit, which must be conducted by an independent party, but also publish it.

So, think about this for a minute. New York City is requiring you to conduct a disparate impact analysis by race, sex and ethnicity, and then take that analysis and publish it on your website, in the career section, for all the world to see. The theory is that job seekers would have that information and [it would] inform their application to a particular job. That's just one example, but it is interesting to see how different jurisdictions are focused on particular aspects of AI usage.
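For readers who want to see the arithmetic behind such a disparate impact analysis, here is a minimal sketch of the kind of impact-ratio calculation a bias audit contemplates: the selection rate for each demographic category, divided by the rate of the most-selected category. The groups and numbers are hypothetical, and the actual AEDT audit rules are more detailed than this.

```python
# Illustrative sketch of an impact-ratio calculation. All groups and
# numbers are hypothetical; real bias audits follow detailed rules.

selected = {"group_a": 50, "group_b": 30, "group_c": 10}      # candidates advanced
assessed = {"group_a": 200, "group_b": 150, "group_c": 100}   # candidates scored

# Selection rate per category, then each rate divided by the highest rate.
rates = {g: selected[g] / assessed[g] for g in assessed}
top_rate = max(rates.values())

for group, rate in sorted(rates.items()):
    impact_ratio = rate / top_rate
    # Ratios well below 1.0 (for instance, under the traditional
    # four-fifths benchmark of 0.8) are a signal to look closer.
    print(f"{group}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f}")
```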

Lazzarotti

Yes. For this episode, we're getting these questions from clients about policies and how to create governance around this technology. I know we are planning upcoming installments of this podcast on policy development, but I do think it is important, before you put pen to paper, to ask: What is this AI? How are we planning to use it in our organization? What are our use cases? What are we trying to regulate, exactly? Because, on the one hand, a policy aimed at the traditional AI you're talking about may be focused on a smaller group within the organization that is vetting or developing more traditional AI technologies, whereas generative AI may involve a broader group of employees, across different departments, utilizing that type of model or application. So, it really is a threshold question that companies need to consider when they're starting out and thinking about how to introduce this technology into their organization. You agree with that?

Felsberg

Yes. It brings up this issue of overreliance. AI has really come onto the scene in a strong way, and I think some folks think of it as kind of a magic box: "It's going to help me make all my decisions and I'll kind of sit back and put my feet up." Certainly, when you're over-relying on algorithms to drive decisions - again, going back to the very simple example I used, relying entirely on the output of whatever matching algorithm sits between the job qualifications and the content of somebody's employment application - that's exactly the situation New York City is looking at.

Separate and apart from New York City, if you focus just on the output, with nothing more, and rely on that, that's a pretty dangerous game to play. Number one, you have to think about disparate impact and bias. But you also have to think about unlawful recommendations. The output of the tool may be perfectly sound from an analytical perspective, but what it's recommending may not be in compliance with local law - in terms of the type of person, for example, that you should be selecting for employment. So that's one issue we think about with traditional AI.

Certainly, with generative AI, we also have to worry about getting accurate results. There have been some stories in the media about folks - and some of them were lawyers; Joe, you and I are lawyers, so it's top of mind for us - blindly relying on the output of a generative AI platform. Generative AI has been known to hallucinate, meaning it produces content that looks very compelling but is inaccurate or may not even exist, leading to inaccurate results and flawed outcomes.

It really is a question of balance. There still needs to be human oversight over some of these tools. I've always said that one of the major shortcomings of AI is its inability to reason, and I always thought that's where a human needs to insert themselves, to do some of the reasoning. But Joe, you and I were talking about a new development we were just reading about - some AI technologies out there that perhaps will do some of that reasoning for us. I don't know if you want to comment on that for a moment.

Lazzarotti

I'm glad you raised that, because it's that article from Wired that we were going back and forth on, about OpenAI's [effort] code-named Strawberry. It's the next evolution that they're rolling out, promising more complex, higher-level problem-solving that the AI model can deliver to users. And you're right - coming back to this idea of how we roll it out in the organization and what the considerations are. Maybe there's traditional AI, maybe there's generative AI, and then maybe there's "Strawberry" - I don't know what they're going to call it. You really have to think about what data goes into the tool and what data comes out. Is it accurate, to the point you were raising? And what are the risks we're trying to avoid and the rewards we're trying to derive from the application? All of those things need to be factored into how the organization uses it and what steps and considerations it ought to take into account.

Felsberg

Yes. One thing I didn't mention earlier - I was spending time on the New York City law - is that you also have to keep in mind there's been a lot of federal enforcement activity around the use of AI. We've seen the EEOC get involved. And for those federal contractors out there, the U.S. Department of Labor's Office of Federal Contract Compliance Programs is interested in your use of AI. I say that to underscore that there really still needs to be human oversight. We need to understand what these platforms are producing, and we need to be very deliberate about the information we're submitting and the information we're acting upon. You really need to be careful.

Lazzarotti

That's probably a good place to end this one. As we've been doing so far in these episodes, we definitely want to get ideas from our listeners. If you want to be a guest and talk with us about what you're doing with AI, we'd love to talk to you. Definitely hit us up - email us at [email protected].

Eric, always a pleasure.

Felsberg

Likewise, Joe. I appreciate it.

OUTRO

Thank you for joining us on We get work™. Please tune in to our next program, where we will continue to tell you not only what's legal, but what is effective. We get work™ is available to stream and subscribe to on Apple Podcasts, Libsyn, SoundCloud, Spotify and YouTube. For more information on today's topic, our presenters and other Jackson Lewis resources, visit jacksonlewis.com.

As a reminder, this material is provided for informational purposes only. It is not intended to constitute legal advice, nor does it create a client-lawyer relationship between Jackson Lewis and any recipient.