Generative AI Regulations – What Your Business Needs To Know for 2025

As generative AI regulations change and adapt, it's important to stay aware of the latest developments. [Image: Vector Juice / Adobe Stock]

Get the latest news around generative artificial intelligence regulations and insights from our director of global public policy on how they can affect your business.

Generative artificial intelligence (AI) has grown rapidly in recent years, but direct regulations for building, deploying, and using AI are just starting to ramp up. How can you make sure your generative AI efforts are compliant? Several U.S. states have recently drafted or passed generative AI regulations, offering some guidance on how your business should prepare to use this technology in 2025.

Lawmakers have been treading carefully when it comes to establishing AI regulations, and for good reason. The technology is being created (and adopted) at light speed, and leaders don't want to get in the way of innovation. But at the same time, there's a need for guardrails that allow for responsible AI use while protecting users and their privacy.

Independent agencies and U.S. state legislatures have made progress on AI frameworks in recent years. But with newly elected officials heading to Washington in January, it's unclear how these regulations could change. In this article, we'll get you up to speed on emerging laws that regulate AI directly, and what you need to know to stay safe and compliant.

What you need to know about generative AI regulations

Generative AI can help employees get more done (81% of desk workers polled said AI improves efficiency), but it needs to be deployed responsibly.

These are still fairly early days for generative AI regulations, but here's a quick look at some recent developments to help you understand the progress that has been made.

  • The U.S. AI Safety Institute convened the first meeting of the International Network of AI Safety Institutes, "a new global effort to advance the science of AI safety and enable cooperation on research, best practices, and evaluation." And in February, France will host the AI Action Summit.
  • OpenAI recently presented a Washington, D.C., think tank with a proposal for how the U.S. government can support the AI industry. According to the Washington Post, the proposal "calls for special economic zones with fewer regulations to incentivize new AI projects."
  • President-elect Donald Trump has said he will repeal President Joe Biden's 2023 executive order on AI development and support "AI Development rooted in Free Speech and Human Flourishing."
  • Earlier this year, the European Union passed the EU AI Act, which went into effect on Aug. 1. The EU AI Act establishes a common framework for AI regulations in the region, classifying types of AI by their risk of causing harm.
  • In September, the United Nations published its Governing AI for Humanity report, providing frameworks for both public and private sector institutions to consider.

It's recommended that you work closely with your legal department (or a trusted AI legal partner) to make sure you're in compliance.

"I would encourage companies to work with their legal organization to figure out what these laws actually mean," said Danielle Gilliam-Moore, director of global public policy at Salesforce. "At a policy level, we have heard that the lawmakers are looking to create bills that don't crush innovation. There are some that are looking for a harmonized approach here."

How U.S. states are regulating AI

Many U.S. states have started to enact AI regulations, with Colorado having some of the most comprehensive legislation so far. While it's too early to tell how the U.S. federal government will handle this constantly evolving topic, you can learn from what states have passed recently.

California

In September, California passed two key AI bills: SB-942, the California AI Transparency Act, and AB 2013. Both go into effect on Jan. 1, 2026.

SB-942 requires companies that develop publicly available, widely adopted generative AI programs not only to provide free AI detection tools, but also to mark when content has been generated by AI. AB 2013 requires large AI developers to publicly disclose a summary of the data they've used to train generative AI systems.
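How that marking happens is left to providers, but at its simplest it's a metadata problem. Here's a minimal sketch of the idea - the AIDisclosure schema and field names below are hypothetical illustrations, not anything the statute prescribes:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIDisclosure:
    """Hypothetical machine-readable disclosure for AI-generated content.

    Field names are illustrative; SB-942 does not prescribe this schema.
    """
    provider: str      # company whose generative AI system produced the content
    system_name: str   # name/version of the generative AI system
    generated: bool    # whether the content is AI-generated
    created_at: str    # ISO 8601 timestamp of generation

def mark_ai_generated(content: bytes, provider: str, system_name: str) -> dict:
    """Bundle raw content with a disclosure record (a sketch, not legal guidance)."""
    disclosure = AIDisclosure(
        provider=provider,
        system_name=system_name,
        generated=True,
        created_at=datetime.now(timezone.utc).isoformat(),
    )
    return {"content": content, "ai_disclosure": asdict(disclosure)}

asset = mark_ai_generated(b"<generated image bytes>", "ExampleCo", "imagegen-v2")
print(json.dumps(asset["ai_disclosure"], indent=2))
```

In practice, a disclosure like this would more likely travel in standardized content-credential metadata than in an ad hoc dictionary, but the record-keeping idea is the same.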

In September, California also enacted AB 1008, an amendment to the state's consumer privacy act, clarifying that "personal information" can exist in formats that include AI systems capable of outputting such data.

Colorado

In May, Colorado passed SB24-205, a comprehensive AI framework. The bill requires companies working on "high-risk" AI systems to establish a risk management policy and program and to conduct impact assessments. It goes into effect on Feb. 1, 2026.

Illinois

With HB 3773, Illinois amended its Human Rights Act to prevent employers from using AI that could result in illegal discrimination in employment decisions. This bill passed in August and will go into effect on Jan. 1, 2026.

Illinois passed three more AI-related laws in 2024:

  • Deepfake Child Sexual Abuse Material Law (IL HB 4623), which clarifies that state laws prohibiting child pornography include AI-generated images of children engaged in or simulating sexual acts.
  • Digital Voice and Likeness Protection Act (IL HB 4762), which prohibits contracts for services using digital replicas of a person's image or voice without specific descriptions of intended uses.
  • Amendment to Illinois Right of Publicity Act (IL HB 4875), which prohibits distributing or making available sound recordings or audiovisual works containing digital replicas of a person's image or voice without their knowledge.

New Hampshire

Earlier this year, New Hampshire passed HB 1688, which prohibits state agencies from using AI to surveil, manipulate, or discriminate against members of the public. This bill was enacted on July 1.

Tennessee

To prevent AI-generated deepfakes, Tennessee amended its Personal Rights Protection Act of 1984 to include protections for someone's voice, likeness, and image. This bill (HB 2091) went into effect on July 1.

Utah

Utah passed SB 149, the Artificial Intelligence Policy Act, which establishes a framework for responsible AI use. The bill creates liability for using AI in ways that violate consumer protection laws if it isn't properly disclosed that a person is interacting with a bot. It also created the Office of Artificial Intelligence Policy, as well as a regulatory AI analysis program. The bill became law on May 1.
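As a rough illustration of the disclosure idea (not a statement of what SB 149 requires for any particular product), a customer-facing chatbot might prepend a plain-language notice to the first response in every session. The function below is a hypothetical sketch:

```python
BOT_DISCLOSURE = "You are chatting with an automated AI assistant, not a human."

def first_reply(user_message: str, generate_reply) -> str:
    """Prepend a clear bot disclosure to the first response in a session.

    `generate_reply` stands in for whatever model call the product actually uses.
    """
    reply = generate_reply(user_message)
    return f"{BOT_DISCLOSURE}\n\n{reply}"

# Example with a dummy reply generator:
print(first_reply("When does my order arrive?", lambda msg: "Your order ships Friday."))
```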

Utah is also updating existing laws to account for the challenges posed by AI tools, protecting vulnerable populations and ensuring transparency.

The backstory on generative AI regulations

Concerns around AI are not new - discussions have covered possible job loss, inequality, bias, security issues, and more. But with the rapid growth of generative AI since the public launch of ChatGPT in November 2022, new concerns include:

  • Privacy issues and data mining: Companies need to provide transparency around how they're using data.
  • Copyright concerns: Because generative AI tools pull from vast data sources, the question of how existing copyright laws apply to generative model training is a key issue.
  • Misinformation: False information could spread more quickly with AI chatbots, which can also produce entirely fabricated content known as hallucinations.
  • Attribution: Was what you're reading created by a human or a chatbot? What sources does it cite? Articles, social media posts, art, and more need to be verifiable.
  • Child protection: There's been a call to ensure children and teenagers are protected against alarming, AI-generated content on social media.

These types of challenges have prompted regulators worldwide to investigate how generative AI tools collect data and produce outputs and how companies train the AI they're developing. In Europe, countries have been swift to apply the General Data Protection Regulation (GDPR), which impacts any company working within the EU. It's one of the world's strongest legal privacy frameworks; the U.S. does not have a similar overarching privacy law - though some individual states do.

The EU AI Act features the most thorough generative AI regulations by a government to date, classifying AI by risk factor. As standards and regulations vary from state to state and country to country, one way to ensure that you're using AI responsibly is to be mindful of guidelines in the EU AI Act.

"The thing that's remarkable, and that I've seen duplicated in other markets, is the risk categorization that the EU AI Act has set out," Gilliam-Moore said. "So I think that's one thing that companies should be looking at right now. How do their products and services line up against what the EU AI Act is saying is a high-risk application? That would give them a moderate understanding of what might be some obligations that they would have to fulfill."

Create AI responsibly

Learn how to use the technology ethically while also recognizing and removing bias when you create AI. Discover how on Trailhead, the free online learning platform from Salesforce.

How your business can stay compliant with AI regulations

Companies continue to wonder how these tools will impact their business. It's not just what the technology is capable of, but also how regulation will play a role in how businesses use it. Where does the data come from? How is it being used? Are customers protected? Is there transparency?

No matter where your company does business or who you interact with - whether you're developing the technology for other companies or interacting directly with consumers - make sure you speak with lawyers who follow generative AI regulations and can guide you through the process.

Work with your legal team (or a trusted AI partner)

If you're in a state or country without defined generative AI regulations, consult your legal team before implementing AI into your products or services.

You can also work with a trusted AI legal partner who can inform your product teams on the legalities of AI usage. This will help ensure compliance from the start.

Decide if you're going to build internally or with a third party

If you want to integrate AI into your products or services, decide whether you want to build it yourself or go with a company (like Salesforce) that has a history of developing trustworthy AI products.

Going with a third-party vendor can give you confidence that your AI integration will be in compliance, and the vendor can alert you to any risks.

Draft an acceptable use policy

Regulators have been concerned about how companies collect data and how that information gets delivered to users. Having an acceptable use policy - an agreement between two or more parties (like a business and its employees, or a university and its students) that outlines proper use when accessing a corporate network or the internet - can help safeguard compliance.

In addition, it is important to show data provenance, a documented trail that can prove data's origins and where it currently sits.
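There's no single required format for such a trail, but as a minimal sketch - the schema below is an assumption for illustration - each training dataset could carry a record like this:

```python
from dataclasses import dataclass, field

@dataclass
class ProvenanceRecord:
    """Minimal provenance trail for one training dataset (illustrative schema)."""
    dataset_name: str
    source: str                  # where the data originally came from
    license: str                 # terms under which it may be used
    collected_on: str            # when it was acquired (ISO 8601 date)
    current_location: str        # where the data sits today
    transformations: list[str] = field(default_factory=list)  # processing steps applied

record = ProvenanceRecord(
    dataset_name="support-tickets-2024",
    source="internal CRM export (customer consent on file)",
    license="internal use only",
    collected_on="2024-06-30",
    current_location="s3://example-bucket/training/support-tickets-2024",
    transformations=["PII redaction", "deduplication"],
)
print(record)
```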

Stay updated on changing regulations

As generative AI regulations change and adapt, it's important to stay aware of the latest developments. You can contact your local representatives to advocate for proper AI regulations, helping them craft regulations that build guardrails and boost innovation.

You can also use regulatory trackers like the ones offered by the International Association of Privacy Professionals to make sure your AI efforts are in compliance.

What does this mean for agentic AI?

Agentic AI - such as Agentforce - is still in its early days, but it's growing as businesses realize the benefits of this technology.

It's important to keep an eye on generative AI regulations, and how they'll affect AI agents.

"I think lawmakers are very aware of the labor aspects of how AI might affect the workforce, but once agentic AI becomes more commonplace, that might uplevel that conversation," Gilliam-Moore. "I see that debate is a bit more in the private sector as of right now. But I think the regulators are aware that AI is going to continue to evolve."

The need for trusted AI regulation

AI development can come with risks. That's why Salesforce supports tailored, risk-based AI regulation that differentiates between contexts and uses of the technology, ensures the protection of individuals, builds trust, and encourages innovation.

Ari Bendersky contributed to this blog post.

Justin Lafferty, Senior Editor, SEO

I'm a writer and editor based in Las Vegas, currently coaching writers and strengthening SEO as a Senior Editor at Salesforce. I've contributed to Adweek, the Las Vegas Review-Journal, the San Diego Union-Tribune, the East Bay Times, and many more outlets.
