Watermarking AI content is a great idea in theory. But would it work? I imagine it could for video and imagery, maybe even audio. But what about content made with the myriad open-source tools out there!? And for text? For now it’s often easy to tell when people aren’t using very sophisticated prompting for AI writing, but spotting it is getting harder. And I don’t see how passing content through a filter or two couldn’t strip or spoof the watermark… This will be an interesting space to watch!
OpenAI, Adobe, and Microsoft have recently backed a California bill that mandates watermarks on AI-generated content. According to TechCrunch, the bill, known as AB 3211, is set for a final vote in August. This legislation aims to require tech companies to label AI-generated photos, videos, and audio clips with watermarks in their metadata. While many AI companies already do this, the bill also pushes for large online platforms like Instagram and X to make these labels clear and understandable to the average viewer.
The Coalition for Content Provenance and Authenticity, which includes OpenAI, Adobe, and Microsoft, helped develop the C2PA metadata standard used for marking AI-generated content. Interestingly, a trade group representing these companies initially opposed the bill in April, describing it as “unworkable” and “overly burdensome.” However, amendments to the bill seem to have swayed their opinion.
How It Works
The bill requires AI-generated content to include watermarks in its metadata. This means that any photo, video, or audio clip created by AI will have a digital marker embedded in its file information. Additionally, platforms like Instagram and X will need to display these labels in a way that is easily understood by the general public.
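To make “watermarks in metadata” concrete: C2PA provenance data travels inside the file itself, and in JPEGs it is carried as JUMBF boxes inside APP11 segments. Below is a minimal Python sketch that walks a JPEG’s segment structure and flags files that appear to carry a C2PA manifest. It’s a rough heuristic (it just looks for a `c2pa` label inside APP11 payloads), not a validator; real verification would parse and cryptographically check the manifest with a proper C2PA SDK.

```python
import struct
import sys

APP11 = 0xFFEB  # JPEG segment where C2PA stores its JUMBF manifest boxes
SOS = 0xFFDA    # start-of-scan marker; compressed image data follows, so stop here

def has_c2pa_manifest(path: str) -> bool:
    """Heuristic check: scan a JPEG's APP11 segments for a C2PA label.

    This is a sketch, not a full JUMBF parser. A production verifier
    should use an actual C2PA SDK to parse and validate the manifest.
    """
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":  # not a JPEG (missing SOI marker)
        return False
    offset = 2
    while offset + 4 <= len(data):
        if data[offset] != 0xFF:  # lost sync with segment structure
            break
        marker = struct.unpack(">H", data[offset:offset + 2])[0]
        if marker == SOS:
            break
        # segment length is big-endian and includes its own two bytes
        length = struct.unpack(">H", data[offset + 2:offset + 4])[0]
        payload = data[offset + 4:offset + 2 + length]
        if marker == APP11 and b"c2pa" in payload:
            return True
        offset += 2 + length
    return False

if __name__ == "__main__":
    path = sys.argv[1]
    found = has_c2pa_manifest(path)
    print(f"{path}: C2PA manifest {'found' if found else 'not found'}")
```

The key point the sketch illustrates: because the label lives in the file’s metadata rather than in the pixels, anyone (or any platform) can detect it, but re-encoding or stripping metadata removes it just as easily, which is exactly the robustness question raised above.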
Benefits
- Transparency: Users will be able to distinguish between human-created and AI-generated content, fostering trust.
- Accountability: It will be easier to track the origins of digital content, reducing the spread of misinformation.
- Standardization: The use of C2PA metadata helps create a uniform approach to labeling AI content, making it easier for users to understand.
Concerns
- Implementation: Smaller companies might find it challenging to comply with the new regulations due to resource constraints.
- User Awareness: Despite the labels, users may still ignore or misunderstand them, limiting the bill’s effectiveness.
- Privacy: There could be concerns about how metadata is used and whether it could infringe on user privacy.
Possible Business Use Cases
- Verification Services: A startup could offer services to verify the authenticity of digital content for businesses and consumers.
- Compliance Tools: Develop software that helps smaller companies comply with the new watermarking regulations.
- Educational Platforms: Create platforms that educate users on how to identify and understand AI-generated content labels.
As we move towards a future where AI-generated content becomes more prevalent, how can we ensure that the average user remains informed and critical of the digital media they consume?
Image Credit: DALL-E