Content Authenticity Initiative Presents: Video Provenance and the Ethics of Deepfakes

Malicious deepfakes are just one kind of synthetic media; valid, benign, and, most notably, creative uses now abound as well. The mainstream narrative that deepfakes are dangerous is only part of the story. Fully aware of what bad actors can do, we set out to move past a one-sided perspective on the synthetic future at our event “Video Provenance and the Ethics of Deepfakes” at the end of June, focusing instead on the ethics, technology, and social shifts involved.

In his opening welcome, CAI Director Andy Parsons recognized new members including AFP, Camera Bits, Ernst Leitz Labs (a division of Leica), France Télévisions, Reface, Reynolds Journalism Institute, and The Washington Post. Membership and interest in our work continue to grow; see the full CAI member list on our newly relaunched website: contentauthenticity.org. Andy also announced that the C2PA standards organization will release a public draft of its technical specifications later this year. (Read more about the group’s founding here and news here.) These open standards will power the layer of objective facts on the internet that our technology and provenance work centers on, and they will cover both still images and video.
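
To make the provenance idea concrete ahead of the draft specification, here is a minimal sketch of the general approach: bind assertions about a piece of media (when and how it was captured or edited) to a cryptographic hash of its content, then sign the bundle so any tampering is detectable. This is illustrative only; the field names, the create_provenance_record and verify_provenance_record helpers, and the HMAC “signature” are hypothetical stand-ins, not the actual C2PA manifest format or its certificate-based signing.

```python
# Minimal sketch of the content-provenance idea: bind provenance assertions
# to a media file via a content hash, then sign the bundle. Illustrative only;
# real C2PA manifests, assertion names, and certificate-based signing differ.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # stand-in for a real certificate-backed signing key


def create_provenance_record(media_bytes: bytes, assertions: dict) -> dict:
    """Bundle a content hash with provenance assertions and sign the bundle."""
    claim = {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "assertions": assertions,  # e.g. capture time, device, edit history
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    # HMAC stands in for the digital signature a real implementation would use.
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim


def verify_provenance_record(media_bytes: bytes, record: dict) -> bool:
    """Check that the media is unmodified and the record was signed as-is."""
    claim = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, record["signature"])
        and claim["content_sha256"] == hashlib.sha256(media_bytes).hexdigest()
    )


if __name__ == "__main__":
    video = b"...raw video bytes..."
    record = create_provenance_record(
        video,
        {"capture.time": "2021-06-28T14:00:00Z", "capture.device": "example-camera"},
    )
    print(verify_provenance_record(video, record))               # True
    print(verify_provenance_record(video + b"tamper", record))   # False: content changed
```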

Panel moderator Nina Schick used this timely news to frame a broad conversation with VFX artist and Metaphysic.ai cofounder Chris Ume, Synthesia founder Victor Riparbelli, and video expert Vishy Swaminathan of Adobe Research. The group discussed the ethics involved in creating synthetic video and how to shift public perception away from a negative bias against synthetic forms of media. Nina offered a look at both the synthetic present and the synthetic future, moving beyond the real and potent risk of malicious deepfakes to the fast-approaching ubiquity of synthetic media.

The synthetic future will change the way we interact with all digital content, bringing an AI-led paradigm shift in content production, human communication, and information perception. Hyperrealistic AI-produced videos will become increasingly ubiquitous in our information ecosystem (think: panelist Chris Ume’s Deep Tom Cruise videos). Victor pushed this further, predicting that by 2025, 95% of the video content we see online will be synthetically generated. This opens up enormous possibilities and a democratization of content production limited only by our creativity and imagination. Instead of needing professional camera equipment and complex production tools, you’ll be able to create realistic video with only your personal computer and approachable software. It may become as easy to create photorealistic, synthetic video as it is to write an email.

Chris added that there is a misunderstanding of how easy it is to create these hyperrealistic videos today: his Tom Cruise work took months to create even with advanced technology and a talented actor. His new company, Metaphysic.ai, is building a set of AI tools so brands and creators can access synthetic media in a way that is simple, ethical, and responsible. Vishy reminded us that produced video has mixed natural and synthetic media for years, and post-production tools that used to be expensive are now broadly available. This gives the end user greater creative potential, but it can also reduce trust in video as a source of truth. This is where standardized measures around video authenticity can bolster trust while still enabling creativity.

This kicked off a lively discussion around the need for provenance solutions for video, rather than detection methods alone. Video is going through the same evolution as photography: 99.9% of the photos we see online have been edited in some way, so mere detection of alteration is not enough of a signal to fully understand a piece of content. Victor noted that a provenance solution goes deeper than detection. Much of the problematic content online now is actually real video that has been misrepresented as being from a different time and place. Provenance can provide a scalable way to debunk “cheapfakes,” which are at present a far more widespread problem than deepfakes. A wide-ranging Q&A session with attendee questions closed out the hour. As always, there was not nearly enough time to address all the excellent questions posed, but we’ll make sure areas of greatest interest are covered in future gatherings.
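
As a hedged illustration of why provenance helps where detection cannot: once signed capture-time and location assertions travel with a clip, a real video reshared with a false time or place can be flagged by comparing those assertions to the context in which the clip is being presented. The field names and the flag_recontextualized_clip helper below are hypothetical, not part of any published specification.

```python
# Sketch: flag a "cheapfake" -- a genuine clip presented with a false time or
# place -- by comparing trusted (signed) capture assertions to the claimed
# context of a repost. Field names are hypothetical.
from datetime import datetime, timedelta


def flag_recontextualized_clip(signed_assertions: dict, repost_context: dict,
                               max_drift: timedelta = timedelta(days=1)) -> list:
    """Return human-readable mismatches between provenance and presentation."""
    issues = []

    captured = datetime.fromisoformat(signed_assertions["capture.time"])
    claimed = datetime.fromisoformat(repost_context["claimed_time"])
    if abs(captured - claimed) > max_drift:
        issues.append(f"Clip was captured {captured:%Y-%m-%d}, "
                      f"not {claimed:%Y-%m-%d} as presented.")

    if signed_assertions["capture.location"] != repost_context["claimed_location"]:
        issues.append(f"Clip was captured in {signed_assertions['capture.location']}, "
                      f"not {repost_context['claimed_location']}.")
    return issues


if __name__ == "__main__":
    assertions = {"capture.time": "2019-03-02T10:15:00", "capture.location": "Lisbon"}
    context = {"claimed_time": "2021-06-28T09:00:00", "claimed_location": "Madrid"}
    for issue in flag_recontextualized_clip(assertions, context):
        print(issue)
```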

We rely on the voices and participation of our diverse community to ensure we’re addressing the most important topics and technologies in the world of digital content provenance. We ask that all CAI members and interested parties fill out the quick survey here: adobe.ly/CAIsurvey, which we’ll use to guide upcoming event programming and new member offerings. If you are not yet a member of the CAI, you can join our free, inclusive membership here. If you’re already a member, we encourage you to share the membership page with any interested colleagues. We welcome all to join our broad coalition of researchers, academics, hardware and software companies, publishers, journalists, and creators.