Event Recap | Authenticity in the Age of AI Slop

The ease with which anyone can use generative AI tools has led to a flood of low-quality content, colloquially known as “AI slop.” It’s become inescapable on our social media feeds, streaming services, online retailers, and other parts of life online.
Because AI slop can proliferate in so many forms and channels, the phenomenon is further-reaching than its older sibling, spam, and brings with it a greater variety of harms. AI-generated work is overwhelming artist platforms and displacing human creators from the only viable spaces in which they can make their living. Producers are mass-generating educational books for children containing inaccurate information. In newsfeeds cluttered with low-quality, shallow content, information that is trustworthy and valuable is increasingly hard to find, or dismissed as fake when viewers can’t tell the difference.
Recently, we hosted a conversation with experts on the motivations behind AI slop production, its impacts, and the value of content provenance in addressing these challenges.
Henry Ajder, CAI Advisor and Founder at Latent Space Advisory, was our host for this event. He began by describing the “AI slopocalypse,” a term for the growing concern that AI slop will overwhelm the information ecosystem. He then moderated a group discussion with our three guests about whether digital slop may eventually evolve into “haute cuisine,” and the role of provenance in addressing its challenges.
Alexios Mantzarlis, Director of the Security, Trust, and Safety (SETS) initiative at Cornell Tech, likened AI slop to “unappetizing gruel” that fills our feeds but leaves us feeling empty. He went on to classify AI slop by purpose (social/political vs. economic) and nature (expressive vs. deceptive), highlighting examples such as absurdist engagement bait and the use of slop as a top-of-funnel lure for larger scam operations.
Bilva Chandra, AI Ethics and Safety Manager at Google DeepMind, discussed safety and design considerations at various points of intervention. She described risk mitigation based on severity and likelihood, outlined pre- and post-generation interventions, and spoke about SynthID, Google DeepMind’s watermarking technology for identifying AI-generated content.
Siddarth Venkataramakrishnan, Analyst and Editorial Manager at the Institute for Strategic Dialogue (ISD), discussed the economic and political incentives behind AI slop, from ad revenue schemes to propaganda targeting geopolitical conflicts. He warned that the blurring line between real and synthetic media, and audiences’ growing indifference to the distinction, can be weaponized in disinformation campaigns.
Watch the event in full below:
CAI members can access all of our virtual programming live and in full, as well as receive invites to members-only events. Not a member of the CAI yet? Join us.