The Content Authenticity Initiative Summit: Collaborating to Drive Trust and Transparency Online

By Dana Rao, Executive Vice President, General Counsel and Corporate Secretary

As new technologies enable the creation of digital media, we are seeing a proliferation of altered content online, including content created to intentionally mislead and deceive. In the face of this growing issue, a critical question has emerged: how can we empower consumers to better recognize and evaluate digital content that has been altered?

On January 23, the Content Authenticity Initiative (CAI), developed by Adobe, The New York Times Company and Twitter, hosted a kick-off summit at Adobe Headquarters in San Jose to explore this complex issue and advance the mission to develop an industry standard for digital content attribution.

The summit convened an interdisciplinary group of nearly 100 industry leaders from tech, media, academia, advocacy groups and think tanks for a full day of lightning talks and roundtables. While the group explored a range of different approaches and possible solutions, including product development, governance and industry standards, three key themes were consistent across the discussions:

  • Consumer education is key. Protecting people from deceptive content is ultimately at the heart of CAI’s mission. As such, consumer education must be at the center of our efforts. We need to work to empower people with information, resources and tools to approach and evaluate digital media with a more informed point of view. Ultimately, we want consumers to better understand who and what to trust online. We are discussing what that education could and should look like, and initial ideas include incorporating a diverse set of online and real-world activations in order to reach a broad array of audiences.
  • Intent matters. What sometimes gets lost in the dialogue about trust and transparency online is that altered content itself is not inherently bad. Video and image editing are creative art forms that enrich movies, online videos and advertising. The potential for harm lies in the creator’s intent: when someone edits content specifically to deceive or misinform. The CAI must take this into account and ensure that consumers understand this important nuance. We must not paint all altered content with the same broad brush, and we must uphold and protect the dignity of creative artists and their work.
  • Unintended consequences must be mitigated. As the CAI works to develop an industry standard for digital content attribution, we need to mitigate potential misuse of our solution, such as undermining the work of photojournalists who rely on anonymity to carry out important work. Adobe’s technical teams are designing the attribution tool with this in mind, but we will also continue engaging with advocacy groups and government organizations to ensure our framework protects digital artists, photographers and consumers alike.

The summit was an important first step in our long-term effort to address the issue of content authenticity at scale. We took the first steps toward establishing standards with cross-industry participation and are actively working to unite perspectives through the development of working groups. We will be sharing the model and process in the coming weeks.

In parallel, Adobe is continuing development work on an open, extensible attribution solution which will be integrated as a product feature. Our goal is to help provide transparency to consumers so that they can better evaluate the content they are interacting with online.

As we continue our journey, we hope to continue adding new partners and benefitting from the diverse perspectives, insights and technical expertise of the many stakeholders already involved. A shared commitment to the CAI’s mission inspires and reinforces our efforts, and we look forward to working together to drive more trust and transparency online.

Special thanks to Will Allen for his contributions to this blog post.