◄ Episode Description
Ruben Dieleman is a campaigner for the Existential Risk Observatory, an organization dedicated to reducing human existential risk by increasing public awareness of the threats facing our civilization. In this interview, Ruben focuses primarily on artificial intelligence, discussing the AI alignment problem, defining key terms used in AI debates, and explaining why there are so many differing perspectives on AI's risks and ethical development.
We explore how awareness affects outcomes and how to educate politicians and the public on complex issues like AI without provoking confusion or dismissal. Ruben recommends newsletters, individuals, and organizations to follow to stay current on AI safety research and debates. He also previews an upcoming summit on AI safety that the Existential Risk Observatory is hosting, which he describes as an important milestone in bringing more political leaders into the conversation.
Overall, this is an essential listen for anyone looking to deepen their understanding of AI risk, the dynamics of the AI safety community, and how civil society organizations are working to raise awareness of human existential risk.
◄ Episode Timestamps
(00:00:00) Introduction
(00:02:00) The mission of the Existential Risk Observatory
(00:03:24) Where the 1 in 6 existential risk statistic comes from
(00:05:07) Defining existential risk
(00:07:33) Explaining unaligned AI and the alignment problem
(00:09:16) Moving away from the concept of AI alignment
(00:10:56) New concepts like scalable/responsible AI
(00:12:25) Calls for a moratorium on certain kinds of AI development
(00:13:34) Game theory dynamics around calls for AI pauses
(00:15:08) Key risks posed by artificial superintelligence
(00:16:46) Informing the general public without inducing dismissiveness
(00:18:45) AI and the future of human employment
(00:21:01) The upcoming AI Safety Summit and what it signifies
(00:23:40) Keeping abreast of AI developments and debates
(00:26:43) Communicating AI risks to politicians and the general public
(00:29:37) Government regulation and oversight of AI development
(00:31:43) Hopes for initiatives like an atomic-agency-style body for AI
(00:33:15) Resources for staying current on AI safety topics
(00:35:31) How to follow the Existential Risk Observatory's work
◄ Episode Topic Score
Culture (8)
Design (7)
Education (9)
Environment (4)
Science (6)
Technology (10)
◄ Additional Episode Resources
Existential Risk Observatory: https://www.existentialriskobservatory.org/
Ruben’s Twitter: https://twitter.com/RBNDLM
AI Summit Talk Recording: https://www.youtube.com/watch?v=n3LIKX13V60
◄ Ruben’s Ultimate AI Newsletter Recommendations
Existential Risk Observatory Newsletter: https://xriskobservatory.substack.com/
Navigating AI Risks: https://www.navigatingrisks.ai/
Second Best: https://www.secondbest.ca/
The EU AI Act Newsletter: https://artificialintelligenceact.substack.com/
AGI Safety Weekly: https://safety.blog/
Marcus On AI: https://garymarcus.substack.com/
AI Policy Perspectives: https://aipolicyperspectives.substack.com/
AI Safety Newsletter: https://newsletter.safe.ai/
Understanding AI: https://www.understandingai.org/