OpenAI & C2PA

February 7, 2024

Origin determination

Fake news is all around us nowadays. I'm not the only one who has believed a story just because it was posted on X, or even on a reliable news site, before it was exposed as fake. Some people deliberately spread fake news, and some mistake opinions for facts. With the increasing popularity of AI, there is now a new threat among us: fake images!

A year ago, a photo of the pope dressed in a puffy (Balenciaga-like) bomber jacket went viral on X. It certainly wasn't the dress code we were used to from the Vatican, but people spread the image as if it were real. Of course, it was fake: an image generated with AI.

We can fact-check news, but how do we check the credibility of pictures?

OpenAI & C2PA

On February 6th, OpenAI announced that it will incorporate the C2PA standard into its DALL·E 3-generated images. This is done by 'injecting' metadata into the generated image. Most images already contain some metadata: cameras often record their brand, the photo's orientation, or even the location in the files they create. But until now, this wasn't necessarily the case with DALL·E 3 images generated via OpenAI.
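To get a feel for what embedded metadata looks like at the file level, here is a minimal sketch using Pillow (the filename rabbit.png is hypothetical; any locally saved DALL·E 3 PNG would do). Note that the C2PA manifest itself lives in a dedicated binary chunk rather than in plain text fields, so a dedicated tool (see the Content Credentials section below) is needed to read that part:

```python
from PIL import Image  # pip install Pillow

# Hypothetical filename; any locally saved DALL·E 3 PNG would do.
img = Image.open("rabbit.png")

# PNG text chunks: one common place where image metadata is stored.
for key, value in img.text.items():
    print(f"{key}: {value[:120]}")  # truncate long values for readability
```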

Now that OpenAI inserts this metadata into its images, it will be easier to check their credibility. However, it is not a watertight solution: people (or programs) can easily remove the metadata. Still, it's a step in the right direction; something needs to be done, and it just needs to be improved. Eventually, I believe images will carry an invisible watermark imprinted within the visual image itself, making it very hard to edit the image or remove the watermark. This does mean a large database of watermark patterns would need to be created, and that raises certain privacy-related questions.
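To see how fragile metadata-only provenance is, consider this minimal sketch: copying just the pixel data into a fresh file produces a visually identical image with all metadata, an embedded C2PA manifest included, left behind. The filenames are hypothetical.

```python
from PIL import Image  # pip install Pillow

img = Image.open("rabbit.png")  # hypothetical DALL·E 3 image

# Copy only the pixels into a new image: every metadata chunk,
# including an embedded C2PA manifest, is discarded.
stripped = Image.new(img.mode, img.size)
stripped.putdata(list(img.getdata()))
stripped.save("rabbit_no_metadata.png")
```

This is exactly why a watermark woven into the pixels themselves would be a stronger guarantee than metadata alone.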

What is C2PA?

The Coalition for Content Provenance and Authenticity, or C2PA for short, is a group focused on making it easier to tell whether digital content like photos, videos, and documents is real or has been altered. It works on standards that show where digital content comes from and how it has been changed along the way. This is important today because there's a lot of fake content out there, and it can be hard to tell what's true and what's not.

C2PA is made up of big companies and organizations from different fields, including tech giants, media companies, and groups that care about making sure information is reliable. They’re working together to build technology that can attach information to digital content, like a digital fingerprint, that tells you its history. This can help everyone, from news organizations to everyday people, feel more confident about the content they see online. In my opinion, it’s a promising step towards fighting fake news and digital misinformation.

Content Credentials

To verify the origin of an image, we can use Content Credentials. Uploading an image shows you its metadata, including the metadata inserted by OpenAI. To try this out, I asked ChatGPT to generate an image of a rabbit and then uploaded it to Content Credentials to inspect the metadata. As described in the introduction of this post, it's possible to remove this metadata. But it's something!
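For a local alternative to the web uploader, the C2PA project also ships an open-source command-line tool, c2patool, which prints the embedded manifest. A minimal sketch, assuming c2patool is installed and on your PATH (the filename is again hypothetical):

```python
import subprocess

# Running c2patool on an image prints the embedded C2PA manifest
# as JSON, or reports that no manifest is present.
result = subprocess.run(
    ["c2patool", "rabbit.png"],
    capture_output=True,
    text=True,
)
print(result.stdout or result.stderr)
```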

Conclusion

There's still a lot to improve regarding origin determination. AI laws need to be implemented, and tools need to be created to test for credibility (both visually and non-visually). But I'm glad progress is being made!

Example of Content Credentials' output for a ChatGPT-generated image