Publications


Papers / Reports

Open-Sourcing Highly Capable Foundation Models: An evaluation of risks, benefits, and alternative methods for pursuing open-source objectives
Report by Elizabeth Seger, Noemi Dreksler, Richard Moulange, Emily Dardaman, Jonas Schuett, K. Wei, et al. (2023) for the Centre for the Governance of AI.

Democratizing AI: Multiple Meanings, Methods, and Goals
AIES 2023 conference paper by Elizabeth Seger, Aviv Ovadya, Ben Garfinkel, Divya Siddarth, and Allan Dafoe (2023)

In Defence of Principlism in AI Ethics and Governance
Paper by Elizabeth Seger in Philosophy & Technology (2022)

Tackling threats to informed decision-making in democratic societies: Promoting epistemic security in a technologically-advanced world
Report for The Alan Turing Institute by Elizabeth Seger, Shahar Avin, Gavin Pearson, Mark Briers, Seán Ó hÉigeartaigh, and Helena Bacon (2020)

Toward Trustworthy AI: Mechanisms for supporting verifiable claims
Report by M. Brundage et al., including Allan Dafoe, Markus Anderljung, Jade Leung, and Elizabeth Seger (2020)

Should Epistemic Security Be a Priority GCR Cause Area?
Paper by Elizabeth Seger in Intersections, Reinforcements, Cascades: Proceedings of the 2023 Stanford Existential Risk Conference (2023)


Posts / Op-eds

What do we mean when we talk about “AI Democratisation”?
Research blog post by Elizabeth Seger for the Centre for the Governance of AI (Feb 2023)

Exploring epistemic security: The catastrophic risk of epistemic insecurity in a technologically advanced world
Article by Elizabeth Seger in the International Security Journal (2022)

The greatest security threat of the post-truth age
Article by Elizabeth Seger in BBC Future (2021)

Contact

LinkedIn
Twitter: @ea_seger