February 2024 | This Month in Generative AI: Election Season

News and trends shaping our understanding of generative AI technology and its applications.


In May of 2019, a manipulated video of House Speaker Nancy Pelosi purportedly slurring her words in a public speech racked up over 2.5 million views on Facebook. Although the video was widely reported to be a deepfake, it was what we would today call a “cheap fake.” The original video of Speaker Pelosi was simply slowed down to make her sound inebriated — no AI needed. The cheap fake was, however, a harbinger.

Around 2 billion citizens will vote this year in some 70 elections around the globe. At the same time, generative AI has emerged as a powerful technology that can entertain, defraud, and deceive.

Today, nearly anyone can use generative AI to create hyper-realistic images from only a text prompt, clone a person's voice from a 30-second recording, or modify a video to make the speaker say things they never did or would say. Perhaps not surprisingly, generative AI is finding its way into everything from local to national and international politics. Some of these applications are used to bolster a candidate, but many are designed to harm a candidate or party, and all of them raise new and complex questions.

Trying to help

In October of last year, New York City Mayor Eric Adams used generative AI to make robocalls in which he spoke Mandarin and Yiddish. (Adams only speaks English.) The calls did not disclose that the voice was AI-generated, and at least some New Yorkers believe that Adams is multilingual: “People stop me on the street all the time and say, ‘I didn’t know you speak Mandarin,’” Adams said. While the content of the calls was not deceptive, some claimed that the calls themselves were deceptive and an unethical use of AI.

Not to be outdone, earlier this year Representative Dean Phillips deployed a full-blown OpenAI-powered interactive chatbot to bolster his long-shot bid for the Democratic nomination in the upcoming presidential primary. The chatbot disclosed that it was an AI bot and allowed voters to ask questions and hear an AI-generated response in an AI-generated version of Phillips's voice. Because the bot violated OpenAI's terms of service, it was eventually taken offline.

Trying to harm

In September of last year, Slovakia — a country that shares part of its eastern border with Ukraine — saw a last-minute and dramatic shift in its parliamentary election. Just 48 hours before election day, the pro-NATO and Western-aligned candidate Michal Šimečka was leading in the polls by some four points. A fake audio recording of Šimečka seeming to claim that he was going to rig the election spread quickly online, and two days later the party of the pro-Moscow candidate Robert Fico won the election by five points. It is impossible to say exactly how much the audio swayed the outcome, but the incident raised concerns about the use of AI to disrupt elections.

Fast-forward to January of this year, when the state of New Hampshire was holding the nation's first primary for the 2024 US presidential election. On the eve of the primary, more than 20,000 New Hampshire residents received robocalls impersonating President Biden. The call urged voters not to vote in the primary and to "save your vote for the November election." It took two weeks before New Hampshire's Attorney General announced that his office had identified two businesses behind these robocalls.

The past few months have also seen an increasing number of viral images making the rounds on social media. These range from faked images of Trump with convicted sex offender Jeffrey Epstein and a young girl, to faked images of Biden in military fatigues on the verge of authorizing military strikes.

On the video front, it is becoming increasingly easy to combine fake audio with video to make people say and do things they never did. For example, a speech originally given by Vice President Harris on April 25, 2023, at Howard University was digitally altered to replace Harris's voice with a seemingly inebriated and rambling one.

And these are just a few examples of the politically motivated deepfakes that we have already started to see as the US national election heats up. In the coming months, I'll be keeping track of these examples as they continue to emerge.

Something in between

In the lead-up to Indonesia's election earlier in February, Suharto, the once-feared army general who ruled the country with an iron fist for more than three decades (and who died in 2008), was AI resurrected with a message for voters. And in India, M. Karunanidhi, the former leader of the Dravida Munnetra Kazhagam party who died in 2018, was AI resurrected to endorse his son, the sitting Chief Minister of the state of Tamil Nadu. I expect this type of virtual endorsement will become an (ethically complex) trend.

Looking ahead

There are two primary approaches to authenticating digital media. Reactive techniques analyze various aspects of an image or video for traces of implausible or inconsistent properties. Learn more about these photo forensics techniques in my series for the CAI. Proactive techniques, on the other hand, operate at the source of content creation, embedding an identifying digital watermark or signature into an image or video that can later be extracted to verify the content's integrity.
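To make the proactive approach concrete, below is a minimal sketch of signing media at the point of creation so that anyone can later verify the bytes are unchanged. This illustrates only the underlying cryptographic idea, not any actual provenance standard such as the one the CAI promotes; the key handling, the placeholder media bytes, and the use of Python's third-party cryptography package are all assumptions made for the sake of the example.

```python
# A minimal sketch of proactive authentication: sign media at its source,
# verify it downstream. Illustrative only; not a real provenance standard.
# Assumes the third-party "cryptography" package (pip install cryptography).
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# At the source (e.g., a camera or creation tool): hash the media bytes
# and sign the digest with a private key held by the device or service.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

media = b"...raw image or video bytes (placeholder)..."
signature = private_key.sign(hashlib.sha256(media).digest())

# Downstream: anyone holding the public key can confirm the bytes are
# exactly what was signed. A single-bit change breaks the check.
def is_authentic(media_bytes: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, hashlib.sha256(media_bytes).digest())
        return True
    except InvalidSignature:
        return False

print(is_authentic(media, signature))                # True
print(is_authentic(media + b"tampered", signature))  # False
```

A signature like this travels alongside the content, whereas a digital watermark is imperceptibly embedded in the pixels themselves; in practice the two are complementary, since a watermark can survive the re-encoding and cropping that would invalidate a simple byte-level signature.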

Although not perfect, these combined reactive and proactive technologies will make it harder (but not impossible) to create a compelling fake, and easier to verify the integrity of real content. The creation and detection of manipulated media, however, is inherently adversarial: both sides will continually adapt, and distinguishing the real from the fake will remain an ongoing challenge.

While it is relatively straightforward to regulate AI-powered non-consensual sexual imagery, child abuse imagery, and content designed to defraud, regulating political speech is more fraught. We, of course, want to give wide latitude to political discourse, but there should be limits on activities like those we saw in New Hampshire, where bad actors attempt to interfere with our voting rights.

As a first step, following the New Hampshire robocalls, the Federal Communications Commission quickly announced a ban on the use of AI-generated voices in robocalls. While the ruling is fairly narrow and doesn't address the wider issue of AI-powered election interference or non-AI-powered interference, it is a reasonable precaution as we all try to sort out this brave new world where anybody's voice or likeness can be manipulated.

As we continue to wrestle with these complex questions, we as consumers have to be particularly vigilant as we enter what is sure to be a highly contentious election season. We should be vigilant not to fall for disinformation just because it conforms to our personal views, we should be vigilant not to be part of the problem by spreading disinformation, and we should be vigilant in protecting our own and others' right to participate in our democracy (even when we disagree with them).

Author bio: Professor Hany Farid is a world-renowned expert in the field of misinformation, disinformation, and digital forensics. He joined the Content Authenticity Initiative (CAI) as an advisor in June 2023. The CAI is an Adobe-led community of media and tech companies, NGOs, academics, and others working to promote adoption of the open industry standard for content authenticity and provenance.

Professor Farid teaches at the University of California, Berkeley, with a joint appointment in electrical engineering & computer sciences and the School of Information. He’s also a member of the Berkeley Artificial Intelligence Lab, Berkeley Institute for Data Science, Center for Innovation in Vision and Optics, Development Engineering Program, and Vision Science Program, and he’s a senior faculty advisor for the Center for Long-Term Cybersecurity. His research focuses on digital forensics, forensic science, misinformation, image analysis, and human perception.

He received his undergraduate degree in computer science and applied mathematics from the University of Rochester in 1989, his M.S. in computer science from SUNY Albany, and his Ph.D. in computer science from the University of Pennsylvania in 1997. Following a two-year post-doctoral fellowship in brain and cognitive sciences at MIT, he joined the faculty at Dartmouth College in 1999 where he remained until 2019.

Professor Farid is the recipient of an Alfred P. Sloan Fellowship and a John Simon Guggenheim Fellowship, and he’s a fellow of the National Academy of Inventors.