Durable Content Credentials

by Andy Parsons, Sr. Director, Content Authenticity Initiative

Faced with a newly chaotic media landscape in which generative AI and other heavily manipulated content circulate alongside authentic photographs, video, and audio, it is becoming increasingly difficult to know what to trust.  

Understanding the origins of digital media and whether or how it was manipulated, and sharing that information with the consumer, is now possible through Content Credentials, the global open technical specification developed by the C2PA (Coalition for Content Provenance and Authenticity), a consortium of over 100 companies working together within the Linux Foundation. 

Implementation of Content Credentials is on the rise, with in-product support released or soon to be released by Adobe, OpenAI, Meta, Google, Sony, Leica, Microsoft, Truepic, and many other companies.  

As this technology becomes increasingly commonplace, we’re seeing criticism circulating that relying solely on Content Credentials’ secure metadata, or solely on invisible watermarking to label generative AI content, may not be sufficient to prevent the spread of misinformation. 

To be clear, we agree. 

That is why, since its founding in 2021, the C2PA has been hard at work creating a robust and secure open standard in Content Credentials. While the standard focuses on a new kind of “signed” metadata, it also specifies measures to make the metadata durable, or able to persist in the face of screenshots and rebroadcast attacks. 

Content Credentials are sometimes confusingly described as a type of watermark, but watermarking has a specific meaning in this context and is only one piece in the three-pronged approach represented by Content Credentials. Let’s clarify all of this. 

The promise of Content Credentials is that they can combine secure metadata, undetectable watermarks, and content fingerprinting to offer the most comprehensive solution available for expressing content provenance for audio, video, and images.

  • Secure metadata: This is verifiable information about how content was made that is baked into the content itself, in a way that cannot be altered without leaving evidence of alteration. A Content Credential can tell us about the provenance of any media or composite. It can tell us whether a video, image, or sound file was created with AI or captured in the real world with a device like a camera or audio recorder. Because Content Credentials are designed to be chained together, they can indicate how content may have been altered, what content was combined to produce the final content, and even what device or software was involved in each stage of production. The various provenance bits can be combined in ways that preserve privacy and enable creators, fact checkers, and information consumers to decide what’s trustworthy, what’s not, and what may be satirical or purely creative.   

  • Watermarking: This term is often used in a generic way to refer to data that is permanently attached to content and hard or impossible to remove. For our purposes here, I specifically refer to watermarking as a kind of hidden information that is not detectable by humans. It embeds a small amount of information in content that can be decoded using a watermark detector. State-of-the-art watermarks can be impervious to alterations such as the cropping or rotating of images or the addition of noise to video and audio. Importantly, the strength of a watermark is that it can survive rebroadcasting efforts like screenshotting, pictures of pictures, or re-recording of media, which effectively remove secure metadata.

  • Fingerprinting: This is a way to create a unique code based on pixels, frames, or audio waveforms that can be computed and matched against other instances of the same content, even if there has been some degree of alteration. Think of the way your favorite music-matching service works, locating a specific song from an audio sample you provide. The fingerprint can be stored separately from the content as part of the Content Credential. When someone encounters the content, the fingerprint can be re-computed on the fly and matched against a database of Content Credentials and their associated stored fingerprints. The advantage of this technique is that it does not require embedding any information in the media itself; it is immune to information removal because there is no information to remove.
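
To make the fingerprinting idea concrete, here is a minimal sketch in Python. It uses the open-source imagehash library's perceptual hash as a stand-in for whatever fingerprinting algorithm a production system would use, and the matching threshold is an arbitrary illustration rather than a value specified by the C2PA.

```python
# pip install pillow imagehash
from PIL import Image
import imagehash

def compute_fingerprint(path: str) -> imagehash.ImageHash:
    """Compute a perceptual fingerprint (64-bit pHash) of an image file."""
    return imagehash.phash(Image.open(path))

def is_perceptual_match(fp_a: imagehash.ImageHash,
                        fp_b: imagehash.ImageHash,
                        max_distance: int = 8) -> bool:
    """Fuzzy match: a small Hamming distance means 'probably the same content'."""
    return (fp_a - fp_b) <= max_distance  # ImageHash subtraction is Hamming distance

# original_fp = compute_fingerprint("original.jpg")
# candidate_fp = compute_fingerprint("screenshot_of_original.png")
# print(is_perceptual_match(original_fp, candidate_fp))
```

Because the comparison is a distance rather than an equality check, matching is inherently probabilistic, which is exactly the fuzziness discussed below.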

So, we have three techniques that can be used to inform consumers about how media came to be. If each of these techniques were robust enough to ensure the availability of rich provenance no matter where the content goes, we would have a versatile set of measures, each of which could be applied where optimal and as appropriate.  

However, none of these techniques is durable enough in isolation to be effective on its own. Consider: 

  • Even if Content Credentials metadata cannot be tampered with without detection, metadata of any kind can be removed deliberately or accidentally. 

  • Watermarking is limited by the amount of data that can be encoded without visibly or audibly degrading the content, and even then, watermarks can be removed or spoofed. 

  • Fingerprint retrieval is fuzzy. Matches cannot be made with perfect certainty, meaning that while useful as a perceptual check, they are not exact enough to ensure that a fingerprint matches stored provenance with full confidence. 

But combined into a single approach, the three form a unified solution that is robust and secure enough to ensure that reliable provenance information is available no matter where a piece of content goes. This single, harmonized approach is the essence of durable Content Credentials.  

Here is a deeper dive into how C2PA metadata, watermarks, and fingerprints are bound to the content to achieve permanent, immutable provenance. The thoughtful combination of these techniques leverages the strengths of each to mitigate the shortcomings of the others.  

A simple comparison of the components of durable Content Credentials, and their strength in combination.

Let’s look at how this works. First, the content is watermarked using a mode-specific technique purpose-built for audio, video, or images. Since a watermark can only contain an extremely limited amount of data, it is important to make the most of the bandwidth it affords. We therefore encode a short identifier and an indicator of where the C2PA manifest, or the signed metadata, can be retrieved. This could be a Content Credentials cloud host or a distributed ledger/blockchain. 
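
To illustrate how little room a watermark payload offers, here is a hedged sketch of packing a 64-bit manifest identifier and an 8-bit resolver hint into nine bytes. The field sizes and resolver codes are hypothetical; real payload layouts are defined by the watermarking vendor and the C2PA soft-binding specification.

```python
import struct

# Hypothetical resolver codes indicating where the signed manifest can be fetched.
RESOLVER_CLOUD = 0x01   # a Content Credentials cloud host
RESOLVER_LEDGER = 0x02  # a distributed ledger / blockchain

def pack_watermark_payload(manifest_id: int, resolver: int) -> bytes:
    """Pack a 64-bit identifier plus an 8-bit resolver hint into 9 bytes."""
    return struct.pack(">QB", manifest_id, resolver)

def unpack_watermark_payload(payload: bytes) -> tuple:
    """Recover (manifest_id, resolver) from the 9-byte payload."""
    return struct.unpack(">QB", payload)

# payload = pack_watermark_payload(0x1234ABCD5678EF90, RESOLVER_CLOUD)
# The payload bytes would then be embedded by an imperceptible, modality-specific
# watermarking algorithm (audio, video, or image), which is out of scope here.
```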

Next, we compute a fingerprint of the media, essentially another short numerical descriptor. The descriptor represents a perceptual key that can be used later to match the content to its Content Credentials, albeit in an inexact way as described earlier. 

Then, the identifier in the watermark and the fingerprint are added to the Content Credential, which already includes data pertaining to the origin of the content and the ingredients and tools that were used to make it. Now we digitally sign the entire package, so that it is uniquely connected to this content and tamper-evident. Finally, the Content Credential is injected into the content and also stored remotely. Just like that, in a few milliseconds, we have created a durable Content Credential. 
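
The sketch below captures the spirit of that assembly and signing step. Real C2PA manifests are serialized as CBOR inside JUMBF boxes and signed with X.509 certificate chains, so the JSON structure, field names, and bare Ed25519 key here are simplifications for illustration only, not the actual C2PA format.

```python
# pip install cryptography
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def build_and_sign_manifest(provenance: dict,
                            watermark_id: int,
                            fingerprint: str,
                            signing_key: Ed25519PrivateKey) -> dict:
    """Assemble a simplified manifest and sign it (illustrative, not real C2PA)."""
    manifest = {
        "claim": provenance,               # origin, ingredients, tools used
        "soft_bindings": {
            "watermark_id": watermark_id,  # matches the identifier embedded above
            "fingerprint": fingerprint,    # perceptual hash, hex-encoded
        },
    }
    payload = json.dumps(manifest, sort_keys=True).encode("utf-8")
    signature = signing_key.sign(payload)  # makes the package tamper-evident
    return {"manifest": manifest, "signature": signature.hex()}

# signing_key = Ed25519PrivateKey.generate()
# signed = build_and_sign_manifest(
#     {"generator": "ExampleCam 1.0", "captured_with_ai": False},
#     watermark_id=0x1234ABCD5678EF90,
#     fingerprint="a1b2c3d4e5f60718",
#     signing_key=signing_key,
# )
# The signed record is then injected into the file and mirrored to remote storage.
```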

When a consumer of this media wishes to check the provenance, the process is reversed. If the provenance and content are intact, we need only verify the signed manifest and display the data. However, if the metadata has been removed, we make use of durability as follows: 

  1. Decode the watermark, retrieving the identifier it stores. 

  2. Use the identifier to look up the stored Content Credential on the appropriate Content Credentials cloud or distributed ledger. 

  3. Check that the manifest and the content match by using the fingerprint to verify that there is a perceptual match, and the watermark has not been spoofed or incorrectly decoded. 

  4. Verify the cryptographic integrity of the manifest and its provenance data. 

Again, within a few milliseconds we can fetch and verify information about how this content was made, even if the metadata was removed maliciously or accidentally. 
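
The four recovery steps above can be sketched as follows. The decode_watermark, fetch_manifest, and compute_fingerprint callables are hypothetical stand-ins for a watermark detector, a Content Credentials cloud or ledger client, and a fingerprinting routine; the distance threshold is illustrative.

```python
import json
from typing import Callable, Optional
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def hamming_distance_hex(a: str, b: str) -> int:
    """Bit-level Hamming distance between two equal-length hex fingerprints."""
    return bin(int(a, 16) ^ int(b, 16)).count("1")

def recover_provenance(media_path: str,
                       decode_watermark: Callable[[str], int],
                       fetch_manifest: Callable[[int], dict],
                       compute_fingerprint: Callable[[str], str],
                       issuer_key: Ed25519PublicKey,
                       max_distance: int = 8) -> Optional[dict]:
    """Recover and verify provenance after the embedded metadata has been stripped."""
    # 1. Decode the watermark, retrieving the identifier it stores.
    watermark_id = decode_watermark(media_path)

    # 2. Use the identifier to look up the stored Content Credential.
    signed = fetch_manifest(watermark_id)
    manifest, signature = signed["manifest"], bytes.fromhex(signed["signature"])

    # 3. Confirm a perceptual match to rule out a spoofed or misread watermark.
    stored_fp = manifest["soft_bindings"]["fingerprint"]
    if hamming_distance_hex(stored_fp, compute_fingerprint(media_path)) > max_distance:
        return None  # the content does not match the stored credential

    # 4. Verify the cryptographic integrity of the manifest.
    payload = json.dumps(manifest, sort_keys=True).encode("utf-8")
    try:
        issuer_key.verify(signature, payload)
    except InvalidSignature:
        return None
    return manifest
```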

This approach to durability is not appropriate for every use case. For example, if a photojournalist wishes to focus primarily on privacy, they may not wish to store anything related to their photos and videos on any server or blockchain. Instead, they would ensure that the chain of custody between the camera and the publisher is carefully maintained so that provenance is kept connected and intact, but not stored remotely. 

However, in many cases, durable Content Credentials provide an essential balance between performance and permanence. And although technology providers are just beginning to implement the durability approach now, the idea is nothing new: the C2PA specification has always made room for it through its affordances for “soft bindings.”  

We recognize that although Content Credentials are an important part of the ultimate solution to help address the problem of deepfakes, they are not a silver bullet. For the Content Credentials solution to work, we need it everywhere — across devices and platforms — and we need to invest in education so people can be on the lookout for Content Credentials, feeling empowered to interpret the trust signals of provenance while maintaining a healthy skepticism toward what they see and hear online.  

Malicious parties will always find novel ways to exploit technology like generative AI for deceptive purposes. Content Credentials can be a crucial tool for good actors to prove the authenticity of their content, providing consumers with a verifiable means to differentiate fact from fiction.  

As the adoption of Content Credentials increases and availability grows quickly across news, social media, and creative outlets, durable Content Credentials will become as expected as secure connections in web browsers. Content without provenance will become the exception, provenance with privacy preservation will be a norm, and durability will ensure that everyone has the fundamental right to understand what content is and how it was made. 
