12/12/2024 | Press release | Distributed by Public on 12/12/2024 10:54
Generative artificial intelligence (AI) has grown rapidly in recent years, but direct regulations for building, deploying, and using AI are just starting to ramp up. How can you make sure your generative AI efforts are compliant? Several U.S. states have recently drafted or passed generative AI regulations, offering some guidance on how your business should prepare to use this technology in 2025.
Lawmakers have been treading carefully about establishing AI regulations, and for good reason. The technology is being created (and adopted) at light speed, and leaders don't want to get in the way of innovation. But at the same time, there's a need for guardrails that allow for responsible AI use that protects users and their privacy.
Independent agencies and U.S. state legislatures have made progress on AI frameworks in recent years. But with newly elected officials heading to Washington in January, it's unknown how these regulations could change. In this article, we'll get you up to speed on emerging laws that regulate AI directly, and what you need to know to stay safe and compliant.
Generative AI can help employees get more done (81% of desk workers polled said AI improves efficiency), but it needs to be deployed responsibly.
These are still fairly early days for generative AI regulations, but here's a quick look at some recent developments to help you understand the progress that has been made.
It's recommended that you work closely with your legal department (or a trusted AI legal partner) to make sure you're in compliance.
"I would encourage companies to work with their legal organization to figure out what these laws actually mean," said Danielle Gilliam-Moore, director of global public policy at Salesforce. "At a policy level, we have heard that the lawmakers are looking to create bills that don't crush innovation. There are some that are looking for a harmonized approach here."
Many U.S. states have started to enact AI regulations, with Colorado having some of the most comprehensive legislation so far. While it's too early to tell how the U.S. federal government will handle this constantly evolving topic, you can learn from what states have passed recently.
In September, California passed two key AI bills, SB-942 (the California AI Transparency Act) and AB 2013, both of which go into effect on Jan. 1, 2026.
SB-942 requires companies that develop publicly available, widely adopted generative AI systems not only to provide free AI detection tools, but also to mark content that has been generated by AI. AB 2013 requires large AI developers to publicly disclose a summary of the data they've used to train generative AI systems.
In September, California also enacted AB 1008, an amendment to the state's consumer privacy act, clarifying that "personal information" can exist within AI systems that are capable of outputting such data.
In May, Colorado passed SB24-205, a comprehensive AI framework. This bill requires companies working on "high-risk" AI systems to establish a risk management policy and program, and requires them to conduct an impact assessment. This goes into effect on Feb. 1, 2026.
With HB 3773, Illinois amended its Human Rights Act to prevent employers from using AI that could result in illegal discrimination in employment decisions. This bill passed in August and will go into effect on Jan. 1, 2026.
Illinois passed three more AI-related laws in 2024.
Earlier this year, New Hampshire passed HB 1688, which prohibits state agencies from using AI to surveil, manipulate, or discriminate against members of the public. This bill was enacted on July 1.
To prevent AI-generated deepfakes, Tennessee amended its Personal Rights Protection Act of 1984 to include protections for someone's voice, likeness, and image. This bill (HB 2091) went into effect on July 1.
Utah passed SB 149, the Artificial Intelligence Policy Act, which establishes a framework for responsible AI use. The bill creates liability under consumer protection laws for companies that fail to properly disclose when a person is interacting with generative AI. It also created the Office of Artificial Intelligence Policy, as well as a regulatory AI analysis program. The bill became law on May 1.
Utah is also updating existing laws to account for the challenges posed by AI tools, protecting vulnerable populations and ensuring transparency.
Concerns around AI are not new, with discussions covering possible job loss, inequality, bias, security issues, and more. The rapid growth of generative AI after the public launch of ChatGPT in November 2022 has raised new concerns about how these tools are built and used.
These types of challenges have prompted regulators worldwide to investigate how generative AI tools collect data and produce outputs, and how companies train the AI they're developing. In Europe, countries have been swift to apply the General Data Protection Regulation (GDPR), which affects any company operating within the EU. It's one of the world's strongest legal privacy frameworks; the U.S. does not have a similar overarching privacy law, though some individual states do.
The EU AI Act features the most thorough generative AI regulations by a government to date, classifying AI by risk factor. As standards and regulations vary from state to state and country to country, one way to ensure that you're using AI responsibly is to be mindful of guidelines in the EU AI Act.
"The thing that's remarkable, and that I've seen duplicated in other markets, is the risk categorization that the EU AI Act has set out," Gilliam-Moore said. "So I think that's one thing that companies should be looking at right now. How do their products and services line up against what the EU AI Act is saying is a high-risk application? That would give them a moderate understanding of what might be some obligations that they would have to fulfill."
Learn how to use the technology ethically while also recognizing and removing bias when you create AI. Discover how on Trailhead, the free online learning platform from Salesforce.
Companies continue to wonder how these tools will impact their business. It's not just what the technology is capable of, but also how regulation will play a role in how businesses use it. Where does the data come from? How is it being used? Are customers protected? Is there transparency?
No matter where your company does business or who you interact with, whether you're developing the technology for other companies to use or interacting directly with consumers, make sure you speak with lawyers who follow generative AI regulations and can help guide your process.
If you're in a state or country without defined generative AI regulations, consult your legal team before implementing AI into your products or services.
You can also work with a trusted AI legal partner who can inform your product teams on the legalities of AI usage. This will help ensure compliance from the start.
If you want to integrate AI into your products or services, you should decide whether to build it yourself or go with a company (like Salesforce) that has a history of developing trustworthy AI products.
Going with a third-party vendor can give you the confidence that your AI integration will be in compliance, and alert you to any risks.
Regulators have been concerned about how companies collect data and how that information gets delivered to users. Having an acceptable use policy, an agreement between two or more parties (like a business and its employees, or a university and its students) outlining proper use of a corporate network or the internet, can help safeguard compliance.
In addition, it is important to show data provenance, a documented trail that can prove data's origins and where it currently sits.
As generative AI regulations change and adapt, it's important to stay aware of the latest developments. You can contact your local representatives to advocate for sensible AI rules, helping them craft regulations that build guardrails while boosting innovation.
You can also use regulatory trackers like the ones offered by the International Association of Privacy Professionals to make sure your AI efforts are in compliance.
Agentic AI, such as Agentforce, is still in its early days, but it's growing as businesses realize the benefits of the technology.
It's important to keep an eye on generative AI regulations, and how they'll affect AI agents.
"I think lawmakers are very aware of the labor aspects of how AI might affect the workforce, but once agentic AI becomes more commonplace, that might uplevel that conversation," Gilliam-Moore said. "I see that debate is a bit more in the private sector as of right now. But I think the regulators are aware that AI is going to continue to evolve."
AI development can come with risks, which is why Salesforce supports tailored, risk-based AI regulation: an approach that differentiates among the contexts and uses of the technology, protects individuals, builds trust, and encourages innovation.
Ari Bendersky contributed to this blog post.
I'm a writer and editor based in Las Vegas, currently coaching writers and strengthening SEO as a Senior Editor at Salesforce. I've contributed to Adweek, the Las Vegas Review-Journal, the San Diego Union-Tribune, the East Bay Times, and many more outlets.