ITIF - The Information Technology and Innovation Foundation

Why AI-Generated Content Labeling Mandates Fall Short

December 16, 2024

Mandatory labeling for AI-generated content, particularly through watermarking, is neither a reasonable nor an effective solution to the issues policymakers seek to address. Rather than singling out AI-generated content, policymakers should prioritize building trust within the digital ecosystem as a whole.

Generative AI (GenAI) enables users to produce high-quality digital content such as images, text, music, and video. This technological advancement has enriched many creative possibilities, boosted workers' productivity, and offered new tools for innovation.1 Most output from GenAI systems is beneficial and harmless, but some policymakers are concerned about the technology's potential misuse, including spreading disinformation through fabricated content, violating IP rights through AI-generated imitations of existing works, and creating harmful deepfakes, such as impersonations used to perpetrate fraud or exploitative content such as unauthorized AI-generated nudes of individuals.

Policymakers have called for mandatory labeling of all AI-generated content; however, this approach has serious limitations. While labeling AI-generated content, particularly through watermarking, may help users identify some AI-generated material, requiring it for all AI-generated content would be impractical and ineffective given the diversity of content types, watermarks' limited resilience to manipulation, and inconsistent regulatory requirements across jurisdictions. More importantly, doing so would fall short in addressing policymakers' primary concerns, namely disinformation, IP rights violations, and deepfakes.
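
To make the limited-resilience point concrete, below is a minimal sketch assuming a naive least-significant-bit (LSB) watermark; the scheme, function names, and parameters are illustrative inventions, not any vendor's actual method. It embeds a bit string in an image's pixel LSBs, then shows that a single pass of ordinary JPEG re-encoding, of the kind social platforms routinely apply, reduces recovery to roughly chance.

```python
# Illustrative only: a naive LSB watermark does not survive lossy re-encoding.
import io

import numpy as np
from PIL import Image

def embed_lsb(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Overwrite the least-significant bit of the first pixels with watermark bits."""
    flat = pixels.flatten()  # flatten() returns a copy; the original is untouched
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(pixels.shape)

def extract_lsb(pixels: np.ndarray, n: int) -> np.ndarray:
    """Read the first n least-significant bits back out."""
    return pixels.flatten()[:n] & 1

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in image
mark = rng.integers(0, 2, size=256, dtype=np.uint8)          # 256-bit watermark

watermarked = embed_lsb(image, mark)
assert np.array_equal(extract_lsb(watermarked, mark.size), mark)  # intact so far

# One ordinary JPEG re-encode, as any social platform might apply on upload:
buf = io.BytesIO()
Image.fromarray(watermarked).save(buf, format="JPEG", quality=85)
buf.seek(0)
recovered = extract_lsb(np.asarray(Image.open(buf)), mark.size)
print(f"watermark bits surviving re-encode: {(recovered == mark).mean():.0%}")  # ~50%, i.e., chance
```

Production watermarking schemes spread the signal more robustly, for example across frequency-domain coefficients, but cropping, rescaling, screenshotting, and deliberate adversarial edits remain effective removal attacks, which is the resilience problem described here.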

This report begins by outlining the main approaches to labeling AI-generated content and then highlights key AI labeling regulations and initiatives from around the world. Following this, the report examines why mandatory labeling, despite its appeal to certain policymakers, is not a good policy option. Finally, it emphasizes the importance of enhancing transparency for all content, whether human- or AI-created, and developing targeted strategies to address the malicious use of GenAI. Instead of mandating technically complex and permanent labels on AI-generated content, this report proposes promoting voluntary labels for all online content through established standards such as the Coalition for Content Provenance and Authenticity (C2PA).

The report provides several recommendations for policymakers to strengthen trust in digital content:

  1. Encourage voluntary adoption of labels for all digital content through an established industry standard such as C2PA, which embeds cryptographically secure metadata (see the sketch following this list).
  2. Launch digital, AI, and media literacy campaigns for users to assess digital content's authenticity and trustworthiness and make informed decisions about the content they consume.
  3. Develop targeted responses to problems such as disinformation, IP rights violations, and deepfakes, rather than broadly labeling AI content.
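
As a concrete illustration of recommendation 1, below is a minimal sketch of the core idea behind a C2PA-style provenance label, using Ed25519 signatures from the `cryptography` package: a manifest of provenance claims is bound to a hash of the content and signed, so any later edit to the content or the manifest invalidates the label. This is a conceptual simplification, not the actual C2PA format, which uses X.509 certificate chains, COSE signatures, and JUMBF containers; the manifest fields and tool name here are hypothetical stand-ins.

```python
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()  # in C2PA, tied to an X.509 cert chain
verify_key = signing_key.public_key()

content = b"...image, audio, or video bytes..."

# Manifest: provenance claims plus a hash binding them to this exact content.
manifest = {
    "claim_generator": "example-genai-tool/1.0",  # hypothetical tool name
    "assertions": [{"label": "c2pa.actions", "action": "created"}],
    "content_sha256": hashlib.sha256(content).hexdigest(),
}
manifest_bytes = json.dumps(manifest, sort_keys=True).encode()
signature = signing_key.sign(manifest_bytes)

def verify_label(content: bytes, manifest_bytes: bytes, signature: bytes) -> bool:
    """Recompute the content hash, then check the manifest signature."""
    manifest = json.loads(manifest_bytes)
    if hashlib.sha256(content).hexdigest() != manifest["content_sha256"]:
        return False  # content was altered after labeling
    try:
        verify_key.verify(signature, manifest_bytes)
        return True
    except InvalidSignature:
        return False  # manifest was altered or forged

print(verify_label(content, manifest_bytes, signature))            # True
print(verify_label(b"tampered bytes", manifest_bytes, signature))  # False
```

Because such a label attests to provenance rather than detecting AI output specifically, it applies uniformly to human- and AI-created media, which is why the report favors voluntary adoption of this approach over AI-only watermarking mandates.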

Read the report.