OpenAI will use tamper-resistant watermarking to help users identify deepfakes and AI-generated content

A warning citing AI-generated content (Image credit: Image Creator from Designer | Windows Central)

What you need to know

  • OpenAI recently announced plans to develop new tools, including tamper-resistant watermarking, to help identify content generated with its AI tools.
  • The ChatGPT maker is teaming up with Microsoft to launch a $2 million societal resilience fund to help drive the adoption and understanding of provenance standards.
  • Applications for early access to OpenAI's image detection classifier are open to a first group of testers through its Researcher Access Program.

With the prevalence of sophisticated generative AI tools like Image Creator by Designer (formerly Bing Image Creator), Midjourney, and ChatGPT, it's increasingly difficult to distinguish between real and AI-generated content. Major tech corporations like OpenAI and Microsoft have made significant strides toward making it easier for users to identify such content.

OpenAI has started watermarking images generated using DALL-E 3 and ChatGPT, but the company admits watermarking is "not a silver bullet to address issues of provenance." As the US presidential election approaches, AI deepfakes and misinformation continue to flood the internet.

Recently, the ChatGPT maker highlighted two ways it's trying to address the emerging challenges as generative AI becomes broadly available. First, the company is developing new tools to help users identify AI-generated content, including tamper-resistant watermarking. The company is also integrating audio watermarking into Voice Engine for easy identification. 

It also plans to adopt and develop an "open standard that can help people verify the tools used for creating or editing many kinds of digital content."

The ChatGPT maker recently joined the Steering Committee of the C2PA, the Coalition for Content Provenance and Authenticity. For context, C2PA is a widely adopted open standard for certifying the source and history of digital content, making it easier to determine whether a given image, video, or audio clip is AI-generated.

As highlighted above, OpenAI adds C2PA metadata to all images generated with DALL-E 3 and ChatGPT. OpenAI plans to apply the same approach to its flagship video generation tool, Sora, when it reaches general availability. The company admits users can still leverage AI tools to create deceptive content without the metadata, but the metadata itself is hard to fake or alter.

"As adoption of the standard increases, this information can accompany content through its lifecycle of sharing, modification, and reuse. Over time, we believe this kind of metadata will be something people come to expect, filling a crucial gap in digital content authenticity practices," OpenAI wrote.
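To give a sense of what that embedded provenance data looks like in practice, here's a minimal Python sketch that checks whether an image file carries a C2PA manifest container: an APP11/JUMBF segment in JPEGs, or a caBX chunk in PNGs, per the C2PA specification. This is a presence check only; actually parsing and cryptographically verifying a manifest requires a full validator such as the Content Authenticity Initiative's open-source c2patool.

```python
# c2pa_probe.py — an illustrative sketch, not a validator: it only checks
# whether a file *contains* an embedded C2PA manifest container. Verifying
# the manifest's signatures requires a real tool such as c2patool.
import struct
import sys

def jpeg_has_manifest(data: bytes) -> bool:
    """Scan JPEG APP11 segments for the JUMBF boxes C2PA embeds in."""
    if data[:2] != b"\xff\xd8":            # no SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(data) and data[i] == 0xFF:
        marker = data[i + 1]
        if marker == 0xFF:                 # fill byte, keep scanning
            i += 1
            continue
        if marker in (0xD9, 0xDA):         # EOI / start-of-scan: headers end
            break
        seg_len = struct.unpack(">H", data[i + 2:i + 4])[0]
        payload = data[i + 4:i + 2 + seg_len]
        if marker == 0xEB and (b"jumb" in payload or b"c2pa" in payload):
            return True                    # APP11 segment carrying JUMBF
        i += 2 + seg_len
    return False

def png_has_manifest(data: bytes) -> bool:
    """Walk PNG chunks looking for the 'caBX' chunk C2PA defines."""
    if data[:8] != b"\x89PNG\r\n\x1a\n":
        return False
    i = 8
    while i + 8 <= len(data):
        length = int.from_bytes(data[i:i + 4], "big")
        if data[i + 4:i + 8] == b"caBX":
            return True
        i += 12 + length                   # 4 length + 4 type + data + 4 CRC
    return False

if __name__ == "__main__":
    blob = open(sys.argv[1], "rb").read()
    print(jpeg_has_manifest(blob) or png_has_manifest(blob))
```

A check like this illustrates both halves of OpenAI's admission: the metadata is easy to detect when present, but a simple re-save or screenshot strips it, which is why detection can't rely on metadata alone.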

OpenAI is joining Microsoft to launch a $2 million societal resilience fund to help drive the adoption and understanding of provenance standards, including C2PA. 

Finally, OpenAI has opened applications for early access to its image detection classifier to a first group of testers through its Researcher Access Program. The tool predicts the likelihood that an image was generated using DALL-E 3.
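OpenAI hasn't published a public API for the classifier; access goes through the Researcher Access Program. Purely as a hypothetical sketch of how such a likelihood-scoring tool might be called, with a made-up endpoint, field names, and response shape (nothing below is a real OpenAI interface):

```python
# Hypothetical sketch only: the endpoint, request fields, and response
# shape below are illustrative assumptions, not real OpenAI interfaces.
import requests

API_KEY = "sk-..."                                        # placeholder credential
ENDPOINT = "https://api.example.com/v1/image-classifier"  # made-up URL

def dalle3_likelihood(image_path: str) -> float:
    """Return a hypothetical 0-1 score that the image came from DALL-E 3."""
    with open(image_path, "rb") as f:
        resp = requests.post(
            ENDPOINT,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": f},
        )
    resp.raise_for_status()
    return resp.json()["dalle3_probability"]              # assumed field name

print(dalle3_likelihood("suspect.png"))
```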

Kevin Okemwa
Contributor

Kevin Okemwa is a seasoned tech journalist based in Nairobi, Kenya, with extensive experience covering the latest trends and developments in the industry at Windows Central. With a passion for innovation and a keen eye for detail, he has written for leading publications such as OnMSFT, MakeUseOf, and Windows Report, providing insightful analysis and breaking news on everything revolving around the Microsoft ecosystem. You'll also catch him occasionally contributing at iMore about Apple and AI. While AFK and not busy following the ever-emerging trends in tech, you can find him exploring the world or listening to music.