Episodes

  • The Congressman Who Gets AI Extinction Risk — Rep. Bill Foster on the Future of Humanity | For Humanity | Ep. 75
    Dec 6 2025

    In this episode of For Humanity, John Sherman sits down with Congressman Bill Foster — the only PhD scientist in Congress, a former Fermilab physicist, and one of the few lawmakers deeply engaged with advanced AI risks. Together, they dive into a wide-ranging conversation about the accelerating capabilities of AI, the systemic vulnerabilities inside Congress, and why the next few years may determine the fate of our species.

    Foster unpacks why AI risk mirrors nuclear risk in scale, how interpretability is collapsing as models evolve, why Congress is structurally incapable of responding fast enough, and how geopolitical pressures distort every conversation on safety. They also explore the looming financial bubble around AI, the coming energy crunch from massive data centers, and the emerging threat of anonymous encrypted compute — a pathway that could enable rogue actors or rogue AIs to operate undetected.

    If you want a deeper understanding of how AI intersects with power, geopolitics, compute, regulation, and existential risk, this conversation is essential.

    Together, they explore:

    * The real risks emerging from today’s AI systems — and what’s coming next

    * Why Congress is unprepared for AGI-level threats

    * How compute verification could become humanity’s safety net

    * Why data centers may reshape energy, economics, and local politics

    * How scientific literacy in government could redefine AI governance

    👉 Follow More of Congressman Foster’s Work:

    📺 Subscribe to The AI Risk Network for weekly conversations on how we can confront the AI extinction threat.

    #AISafety #AIAlignment #ForHumanityPodcast #AIRisk #FutureOfAI #AIandWarfare #AutonomousWeapons #AIEthics #TechForGood #ArtificialIntelligence



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    1 h 10 m
  • AI Risk, Superintelligence & The Fight Ahead — A Deep Dive with Liv Boeree | For Humanity #74
    Nov 22 2025

    In this episode of For Humanity, John sits down with Liv Boeree — poker champion, systems thinker, and longtime AI risk advocate — for a candid conversation about where we truly stand in the race toward advanced AI. Liv breaks down why public understanding of superintelligence is so uneven, how misaligned incentives shape the entire ecosystem, and why issues like surveillance, culture, and gender dynamics matter more than people realize.

    They explore the emotional realities of working on existential risk, the impact of doomscrolling, and how mindset and intuition keep people grounded in such turbulent times. The result is a clear, grounded, and surprisingly hopeful look at the future of technology, power, and responsibility. If you’re passionate about understanding AI’s real impacts (today and tomorrow), this is a must-watch.

    Together, they explore:

    * The real risks we face from AI — today and in the coming years

    * Why public understanding of superintelligence is so fractured

    * How incentives, competition, and culture misalign technology with human flourishing

    * What poker teaches us about deception, risk, and reading motives

    * The role of women, intuition, and “mama bear energy” in the AI safety movement

    👉 Follow More of Liv Boeree’s Work:

    📺 Subscribe to The AI Risk Network for weekly conversations on how we can confront the AI extinction threat.

    #AISafety #AIAlignment #ForHumanityPodcast #AIRisk #FutureOfAI #AIandWarfare #AutonomousWeapons #AIEthics #TechForGood #ArtificialIntelligence



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    1 h 18 m
  • AI Safety on the Frontlines | For Humanity #73
    Nov 8 2025

    In this episode of For Humanity, host John Sherman speaks with Esben Kran, one of the leading figures in the for-profit AI safety movement, who joins live from Ukraine, where he’s exploring the intersection of AI safety, autonomous drones, and the defense tech boom.

    🔎 They discuss:

    * The rise of for-profit AI safety startups and why technology must lead regulation.

    * How Ukraine’s drone industry became the frontline of autonomous warfare.

    * What happens when AI gains control — and how we might still shut it down.

    * The chilling concept of a global “AI kill chain” and what humanity must do now.

    Esben also shares insights from companies like Lucid Computing and Workshop Labs, the growing global coordination challenges, and why the next AI safety breakthroughs may not come from labs in Berkeley — but from battlefields and builders abroad.

    🔗 Subscribe for more conversations about AI risk, ethics, and the fight to build a safe future for humanity.

    📺 Watch more episodes

    #AISafety #AIAlignment #ForHumanityPodcast #AIRisk #FutureOfAI #AIandWarfare #AutonomousWeapons #AIEthics #TechForGood #ArtificialIntelligence



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    56 m
  • Stuart Russell: “AI CEO Told Me Chernobyl-Level AI Event Might Be Our Only Hope” | For Humanity #72
    Oct 25 2025

    Let’s face it: in the long run, there’s either going to be safe AI or no AI. There is no future with powerful unsafe AI and human beings. In this episode of For Humanity, John Sherman speaks with Professor Stuart Russell — one of the world’s foremost AI pioneers and co-author of Artificial Intelligence: A Modern Approach — about the terrifying honesty of today’s AI leaders.

    Russell reveals that the CEO of a major AI company told him his best hope for a good future is a “Chernobyl-scale AI disaster.” Yes — one of the people building advanced AI believes only a catastrophic warning shot could wake up the world in time. John and Stuart dive deep into the psychology, politics, and incentives driving this suicidal race toward AGI.

    They discuss:

    * Why even AI insiders are losing faith in control

    * What a “Chernobyl moment” could actually look like

    * Why regulation isn’t anti-innovation — it’s survival

    * The myth that America is “allergic” to AI rules

    * How liability, accountability, and provable safety could still save us

    * Whether we can ever truly coexist with a superintelligence

    This is one of the most urgent conversations ever hosted on For Humanity. If you care about your kids’ future — or humanity’s — don’t miss this one.

    🎙️ About For Humanity: A podcast from the AI Risk Network, hosted by John Sherman, making AI extinction risk a kitchen-table conversation on every street.

    📺 Subscribe for weekly conversations with leading scientists, policymakers, and ethicists confronting the AI extinction threat.

    #AIRisk #ForHumanity #StuartRussell #AIEthics #AIExtinction #AIGovernance #ArtificialIntelligence #AIDisaster #GuardRailNow



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    1 h 33 m
  • The RAISE Act: Regulating Frontier AI | For Humanity | EP 71
    Oct 11 2025

    In this episode of For Humanity, John speaks with New York Assemblymember Alex Bores, sponsor of the groundbreaking RAISE Act, one of the first state-level bills in the U.S. designed to regulate frontier AI systems.

    They discuss:

    * Why AI poses an existential risk, with researchers estimating up to a 10% chance of extinction.

    * The political challenges of passing meaningful AI regulation at the state and federal level.

    * How the RAISE Act could require safety plans, transparency, and limits on catastrophic risks.

    * The looming jobs crisis as AI accelerates disruption across industries.

    * Why politicians are only beginning to grapple with AI’s dangers — and why the public must speak up now.

    This is a candid, urgent conversation about AI risk, regulation, and what it will take to secure humanity’s future.

    📌 Learn more about the RAISE Act.

    👉 Subscribe for more conversations on AI risk and the future of humanity.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    1 h 4 m
  • Young Voices on AI Risk: Jobs, Community & the Fight for Our Future | FHP Ep. 70
    Sep 27 2025

    What happens when AI collides with the next generation? In this episode of For Humanity #70 — Young People vs. Advancing AI, host John Sherman sits down with Emma Corbett, Ava Smithing, and Sam Heiner from the Young People’s Alliance to explore how artificial intelligence is already shaping the lives of students and young leaders.

    From classrooms to job applications to AI “companions,” the next generation is facing challenges that older policymakers often don’t even see. This episode digs into what young people really think about AI—and why their voices are critical in the fight for a safe and human future.

    In this episode we cover:

    * Students’ on-the-ground views of AI in education and daily life

    * How AI is fueling job loss, hiring barriers, and rising anxiety about the future

    * The hidden dangers of AI companions and the erosion of real community

    * Why young people feel abandoned by “adults in the room”

    * The path from existential dread → civic action → hope

    🎯 Why watch?

    Because if AI defines the future, young people will inherit it first. Their voices, fears, and leadership could decide whether AI remains a tool—or becomes an existential threat.

    👉 Subscribe for more conversations on AI, humanity, and the choices that will shape our future.

    #AI #AIsafety #ForHumanityPodcast #YoungPeople #FutureofWork



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    1 h 7 m
  • Big Tech Under Pressure: Hunger Strikes and the Fight for AI Safety | For Humanity EP69
    Sep 10 2025

    Get 40% off Ground News’ unlimited-access Vantage Plan at https://ground.news/airisk. For only $5/month, explore how stories are framed worldwide and across the political spectrum.

    TAKE ACTION TO DEMAND AI SAFETY LAWS: https://safe.ai/act

    In Episode 69 of For Humanity: An AI Risk Podcast, we explore one of the most striking acts of activism in the AI debate: hunger strikes aimed at pushing Big Tech to prioritize safety over speed.

    Michael and Dennis, two AI safety advocates, join John from outside DeepMind’s London headquarters, where they are staging hunger strikes to demand that frontier AI development be paused. Inspired by Guido’s protest in San Francisco, they are risking their health to push tech leaders like Demis Hassabis to make public commitments to slow down the AI race.

    This episode looks at how ordinary people are taking extraordinary steps to demand accountability, why this form of protest is gaining attention, and what history tells us about the power of public pressure.

    In this conversation, you’ll discover:

    * Why hunger strikers believe urgent action on AI safety is necessary

    * How Big Tech companies are responding to growing public concern

    * The role of parents, workers, and communities in shaping AI policy

    * Parallels with past social movements that drove real change

    * Practical ways you can make your voice heard in the AI safety conversation

    This isn’t just about technology—it’s about responsibility, leadership, and the choices we make for future generations.

    🔗 Key Links

    👉 AI Pause Petition: https://safe.ai/act

    👉 Follow the movement on X: https://x.com/safeai

    👉 Learn more and get involved: GuardRailNow.org



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    58 m
  • Forcing Sunlight Into OpenAI | For Humanity: An AI Risk Podcast | EP68
    Aug 13 2025

    Get 40% off Ground News’ unlimited-access Vantage Plan at https://ground.news/airisk. For only $5/month, explore how stories are framed worldwide and across the political spectrum.

    TAKE ACTION TO DEMAND AI SAFETY LAWS: https://safe.ai/act

    Tyler Johnston, Executive Director of The Midas Project, joins John to break down the brand-new open letter demanding that OpenAI answer seven specific questions about its proposed corporate restructuring. The letter, published on 4 August 2025 and coordinated by the Midas Project, already carries the signatures of more than 100 Nobel laureates, technologists, legal scholars, and public figures.

    What we cover:

    * Why transparency matters now: OpenAI is “making a deal on humanity’s behalf without allowing us to see the contract.” (themidasproject.com)

    * The Seven Questions the letter poses — ranging from whether OpenAI will still prioritize its nonprofit mission over profit to whether it will reveal the new operating agreement that governs AGI deployment. (openai-transparency.org, themidasproject.com)

    * Who’s on board: Signatories include Geoffrey Hinton, Vitalik Buterin, Lawrence Lessig, and Stephen Fry, underscoring broad concern across science, tech, and public life. (themidasproject.com)

    * Next steps: How you can read the full letter, add your name, and help keep the pressure on for accountability.

    🔗 Key Links

    Read / Sign the Open Letter: https://www.openai-transparency.org/

    The Midas Project (official site): https://www.themidasproject.com/

    Follow The Midas Project on X: https://x.com/TheMidasProj

    👉 Subscribe for weekly AI-risk conversations → http://bit.ly/ForHumanityYT

    👍 Like • Comment • Share — because transparency only happens when we demand it.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    54 m