Texas Association of Broadcasters


TAB Outlines AI Concerns for State Lawmakers

posted on 10.11.2024

- Rejects FCC's Proposed Disclosure for Political Ads

State lawmakers, who are three months from the start of the next legislative session, have been holding committee hearings exploring issues likely to command their attention next year, including matters related to how Artificial Intelligence could impact private enterprise and government functions alike.

TAB has weighed in with the state Senate and House committees studying the technology, urging much-needed improvements to current law regarding deepfakes in political video ads and stronger protections for broadcast journalists and the news content they create.

A state law already on the books prohibits the use of deepfake videos in political ads published or distributed within 30 days of an election, with penalties applied to the person who creates and distributes the material.

TAB is recommending that the law's current timeframe for prohibited material be either eliminated or lengthened and that the focus on political video ads be expanded to include audio and other forms of political discourse where deepfakes could be inserted.

"Doing so would ensure that our journalists could report on potential deepfakes so we can responsibly inform audiences of doubts about the veracity of certain communications while also disclosing that the video or audio reported upon may not be authentic," said Paul Watler, attorney with TAB's general counsel law firm Jackson Walker LLP, in testimony before the House committee.

TAB also is encouraging lawmakers to consider adopting a state version of the federal "NO FAKES Act," which last month won the support of the NAB, the Motion Picture Association, the Human Artistry Campaign, major talent agencies and others.

The legislation would, in part, protect the voice and visual likeness of all individuals, including trusted broadcast news anchors and local on-air personalities, from unauthorized computer-generated recreations made by generative AI.

This protection would create a federal remedy for individuals to fight back against abusive and manipulative deepfakes that threaten to disrupt the trust local broadcasters have earned from their communities.

The bill would also provide important exclusions for use of digital replicas in certain bona fide news reporting and broadcasting, as well as commentary, criticism, scholarship, satire, parody, and other First Amendment speech.

Watler also shared concerns about the unauthorized and uncompensated ingestion of broadcasters' expressive content by generative AI models, noting that broadcasters' content is ideal for AI ingestion because the content is professionally and carefully vetted and, therefore, trusted.

"The Big Tech behemoths - each of whose individual market capitalization exceeds the entire broadcast industry's - recognize this fact and should be prohibited from engaging in this practice," he said.

TAB Pushes Back on FCC's Proposed AI Disclosure Rule

On the federal front, Congress has failed to agree on an approach toward regulating deepfakes in political ads, so the FCC is being pressured to step in with disclosure requirements of some kind while the Federal Election Commission remains paralyzed.

The FCC proposes requiring broadcasters to place an AI disclaimer on any political ad that was created using AI, even if the use of AI was nothing more than a camera lens setting.

TAB weighed in against the proposal because the FCC's action would be limited to broadcasters, and possibly cable, since the agency lacks jurisdiction over advertisers or other media. The rule would therefore place more burdens on broadcasters while redirecting ad buys to unregulated media.

Questions? Contact Oscar Rodriguez or call (512) 322-9944.