May 2024 | This Month in Generative AI: Embracing Technology and the Continued Rise of Fake Imagery

An AI-generated image of sharks swimming in a flooded highway.

News and trends shaping our understanding of generative AI technology and its applications.

Hardly a day goes by that I don't receive several emails or calls asking me to review a piece of content to determine if it is real or fake. Time permitting, I'm usually happy to oblige because I think it is important to help reporters, fact checkers, and the general public verify what is real and expose what is fake.

In this blog series, as well as its earlier incarnation, I've spoken about a range of analysis techniques that my team has developed and regularly uses to validate an audio file, image, or video. This type of reactive technology, which works after a piece of content surfaces, complements proactive tools like Content Credentials, which work as a piece of content is being created or edited. Together they help us navigate a fast and complex online world where it is increasingly difficult to tell what is real and what is fake.
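To make the proactive side a little more concrete: Content Credentials travel with the file itself as a cryptographically signed C2PA manifest. In a JPEG, the manifest store is embedded in APP11 marker segments as a JUMBF box labeled "c2pa." What follows is a minimal sketch, not the tooling my team uses, and a presence check only (real verification means validating the manifest's signatures), that scans a JPEG's header segments for that label using nothing but the Python standard library.

```python
# A minimal sketch (illustrative only, not a full verifier): check
# whether a JPEG carries an embedded C2PA (Content Credentials)
# manifest. C2PA embeds the manifest store in APP11 (0xFFEB) marker
# segments as a JUMBF box labeled "c2pa", so scanning those segments
# for that label is a reasonable first-pass presence check.
import struct
import sys

def has_c2pa_manifest(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":               # no SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:                   # lost marker sync; give up
            break
        marker = data[i + 1]
        if marker == 0xFF:                    # fill byte; resynchronize
            i += 1
            continue
        if marker in (0xD9, 0xDA):            # EOI or SOS: no more headers
            break
        if marker == 0x01 or 0xD0 <= marker <= 0xD7:
            i += 2                            # standalone marker, no length
            continue
        (length,) = struct.unpack(">H", data[i + 2 : i + 4])
        payload = data[i + 4 : i + 2 + length]
        if marker == 0xEB and b"c2pa" in payload:   # APP11 JUMBF payload
            return True
        i += 2 + length
    return False

if __name__ == "__main__":
    print(has_c2pa_manifest(sys.argv[1]))
```

A check like this only tells you whether a manifest is present. An asset without one is not necessarily fake, and a manifest is only as trustworthy as the signature validation behind it, which is what full implementations of the standard provide.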

With OpenAI and other companies committing to labeling AI-generated content, consumers can soon look forward to Content Credentials showing up on LinkedIn, TikTok, and their other social media news feeds.
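What would such a label key off of? When a generator signs its output, the manifest can record the IPTC digital source type trainedAlgorithmicMedia, the standard signal that the content was produced by an AI model. As a rough sketch, assuming you have already dumped a manifest store to JSON (the open-source c2patool CLI can print one for a given file), a recursive search is enough to surface that signal; the exact field paths vary by generator, which is why this sketch searches the whole document rather than one fixed path.

```python
# Sketch: given a C2PA manifest store already dumped to JSON (for
# example with the open-source c2patool CLI), look for the IPTC
# digital source type that marks content as AI-generated.
import json
import sys

AI_GENERATED = "trainedAlgorithmicMedia"  # IPTC digital source type for AI output

def mentions_ai_generation(node) -> bool:
    """Recursively search manifest JSON for the AI-generated source type."""
    if isinstance(node, dict):
        return any(mentions_ai_generation(v) for v in node.values())
    if isinstance(node, list):
        return any(mentions_ai_generation(v) for v in node)
    return isinstance(node, str) and AI_GENERATED in node

if __name__ == "__main__":
    with open(sys.argv[1]) as f:
        manifest = json.load(f)
    print("AI-generated per manifest:", mentions_ai_generation(manifest))
```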

Over the past few weeks I have received emails from fact-checking organizations asking if I would analyze:

  1. An image of a shark on a flooded Los Angeles highway that Senator Ted Cruz shared on X.
  2. An audio recording purporting to capture Michelle Obama announcing that she is running for President in the 2024 election.
  3. Dozens of videos of Indian politicians up and down the ticket saying all manner of offensive or highly improbable things, circulating as nearly a billion people head to the polls in India.

At first I thought that these otherwise reliable fact checkers had lost their minds. Why would they possibly be asking me about these obviously absurd pieces of content? After a quick online search, however, I found that each item was gaining traction.

We have entered a world in which even the most absurd and unlikely claims are believed by a not insignificant number of people. A decade ago, around 4% of Americans believed unfounded theories such as the idea that an intergalactic lizard species controls society, and 7% believed the moon landing was faked. Starting in 2020, however, belief in unfounded theories saw a startling rise.

This is a stunning and worrisome rise in the belief in unfounded conspiracies, coming at the very time when the internet is supposed to be giving us unprecedented access to information and knowledge. But of course, while the internet has democratized the distribution of and access to information, it has made no distinction between good and bad information; and the bad seems to spread faster and further than the good.

I am not saying that we should blindly believe everything we are told by the media, the government, or scientific experts. We should ask hard questions, consume information from a broad set of voices, stay vigilant with the day-to-day news, and be especially skeptical of particularly incredible claims. Whether you agree with them or not, reporters and fact checkers are, for the most part, serious people doing a serious job, and they are trying to get the story right.

So by all means, let's use technology to help us distinguish the real from the fake, but let's not let technology be a substitute for common sense. This past month has seen a continued rise in the quality and sophistication of generative AI images, audio, and video, so buckle up because we are going to have to deploy all of our common sense alongside the types of technologies that I will continue to discuss in the coming months.

Finally, I don't know why, but a constant over my past 25 years in the space of media forensics is that images with sharks are almost always fake, so much so that I'm not even sure anymore if sharks are real.

Author bio: Professor Hany Farid is a world-renowned expert in the fields of misinformation, disinformation, and digital forensics. He joined the Content Authenticity Initiative (CAI) as an advisor in June 2023. The CAI is an Adobe-led community of media and tech companies, NGOs, academics, and others working to promote adoption of the open industry standard for content authenticity and provenance.

Professor Farid teaches at the University of California, Berkeley, with a joint appointment in electrical engineering and computer sciences and the School of Information. He’s also a member of the Berkeley Artificial Intelligence Lab, Berkeley Institute for Data Science, Center for Innovation in Vision and Optics, Development Engineering Program, and Vision Science Program, and he’s a senior faculty advisor for the Center for Long-Term Cybersecurity. His research focuses on digital forensics, forensic science, misinformation, image analysis, and human perception.

He received his undergraduate degree in computer science and applied mathematics from the University of Rochester in 1989, his M.S. in computer science from SUNY Albany, and his Ph.D. in computer science from the University of Pennsylvania in 1997. Following a two-year post-doctoral fellowship in brain and cognitive sciences at MIT, he joined the faculty at Dartmouth College in 1999, where he remained until 2019.

Professor Farid is the recipient of an Alfred P. Sloan Fellowship and a John Simon Guggenheim Fellowship, and he’s a fellow of the National Academy of Inventors.