If Anyone Builds It, Everyone Dies: How Artificial Superintelligence Might Wipe Out Our Entire Species with Nate Soares


Technological development has always been a double-edged sword for humanity: the printing press accelerated the spread of misinformation, cars disrupted the fabric of our cities, and social media has made us increasingly polarized and lonely. But not since the invention of the nuclear bomb has technology presented such a severe existential risk to humanity – until now, with the possibility of Artificial Superintelligence (ASI) on the horizon. Were ASI to come to fruition, it would be so powerful that it would outcompete human beings in everything – from scientific discovery to strategic warfare. What might happen to our species if we reach this point of singularity, and how can we steer away from the worst outcomes?

In this episode, Nate is joined by Nate Soares, an AI safety researcher and co-author of the book If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All. Together, they discuss many aspects of AI and ASI, including the dangerous unpredictability of continued ASI development, the "alignment problem," and the newest safety studies uncovering increasingly deceptive AI behavior. Soares also explores the need for global cooperation and oversight in AI development and the importance of public awareness and political action in addressing these existential risks.

How does ASI present an entirely different level of risk from the conventional artificial intelligence models the public has already grown accustomed to? Why do the leaders of the AI industry persist in their pursuits, despite acknowledging the extinction-level risks of continued ASI development? And will we be able to join together to create global guardrails against this shared threat, taking one small step toward a better future for humanity?

(Conversation recorded on November 11th, 2025)

About Nate Soares:

Nate Soares is the President of the Machine Intelligence Research Institute (MIRI), and plays a central role in setting MIRI's vision and strategy. Soares has been working in the field for over a decade, and is the author of a large body of technical and semi-technical writing on AI alignment, including foundational work on value learning, decision theory, and power-seeking incentives in smarter-than-human AIs. Prior to MIRI, Soares worked as an engineer at Google and Microsoft, as a research associate at the National Institute of Standards and Technology, and as a contractor for the US Department of Defense.

Show Notes and More

Watch this video episode on YouTube

Want to learn the broad overview of The Great Simplification in 30 minutes? Watch our Animated Movie.

---

Support The Institute for the Study of Energy and Our Future

Join our Substack newsletter

Join our Hylo channel and connect with other listeners
