Epistemic Security


“If home security is about making sure our possessions are safe, financial security is about keeping our money safe, national security is about keeping our country safe, then epistemic security is about keeping our knowledge safe.” – BBC Future

Project Overview

Citizens of technologically rich societies have greater access to information than at any point in history. However, information abundance and other changes brought about by new technologies expose a new set of threats and vulnerabilities in our systems of information production and exchange.

Epistemic security concerns the protection and improvement of the epistemic processes by which information is produced and processed (disseminated, consumed, interpreted, re-packaged, shared, etc.) and used to inform beliefs and decision-making procedures in society. 

In an ideally epistemically secure society epistemic processes are robust to detrimental interferences called epistemic threats. Epistemic threats include, for example, disinformation campaigns, the erosion of trust in expertise and in knowledge producing and distributing institutions, the formation of insular communities, knowledge loss, media censorship, the suppression of diverse viewpoints, extreme polarization, disruptive information mediating technologies, persuasive AI, maliciously or carelessly used generative AI, and so forth.

The Epistemic Security project aims to bring together a diverse community of researchers to investigate how information-producing and information-mediating technologies (such as social media platforms and AI-enabled image and text generation systems) impact how knowledge is produced, shared, and consumed in modern society. Of particular concern is how these technologies affect our ability to make well-informed decisions and to coordinate effective action in times of crisis.

The project is run in collaboration between the Centre for the Governance of AI (GovAI) and the Cambridge Centre for the Study of Existential Risk (CSER).

Publications

Tackling threats to informed decision-making in democratic societies: Promoting epistemic security in a technologically-advanced world
Report by Elizabeth Seger, Shahar Avin, Gavin Pearson, Mark Briers, Seán Ó hÉigeartaigh, Helena Bacon (October 2020)

Should Epistemic Security Be a Priority GCR Cause Area?
Paper by Elizabeth Seger in Intersections, Reinforcements, Cascades: Proceedings of the 2023 Stanford Existential Risk Conference (2023)

OpenAI, LLMs, Influence Operations & Epistemic Security: New Report Overview
Blogpost by Elizabeth Seger, Giulio Corsi, Aviv Ovadya, Shahar Avin (17 January, 2023)

The catastrophic risk of insecure information ecosystems in a technologically advanced world
Article by Elizabeth Seger (December 2022)

The greatest security threat of the post-truth age
Article by Elizabeth Seger (February 2021)