Event Recap | AI and the Future of Knowledge

Recently, we hosted a conversation with Henry Ajder, CAI advisor and founder of Latent Space Advisory; Elizabeth Seger, Associate Director of Digital Policy at Demos; and Andy Dudfield, Head of AI at Full Fact. The discussion covered epistemic risk, the role content provenance technology plays in mitigating information decay, and how societies can adapt to maintain epistemic security amid increasingly complex information threats.
Seger defined epistemic risk as the umbrella term for threats to how information is produced, disseminated, modified, accessed, shared, and used to make decisions. She spoke about how our knowledge is increasingly technologically mediated, while public understanding of how that technology works is shrinking, making society more vulnerable to breakdowns in information flow.
Dudfield spoke about AI’s recent shift from a theoretical fact-checking threat to a core part of the bad-information landscape. He described how the fact-checking process evaluates claims by level of harm, the tendency of users to place undue trust in AI to confirm the validity of information, and the ramifications of a public willing to accept “accurate enough” as an answer.
Both guests stressed the importance of solutions like content provenance technology for rebuilding trust in authoritative voices and for building up good information ahead of crises, and they encouraged public involvement in defending knowledge.
“We’re all responsible,” said Seger. “The crisis of epistemic security is death by a thousand cuts. Death by a thousand cuts will require a thousand bandages.”
Watch the event in full below:
Not a member of the CAI yet? Join us.