Episodes

  • Young Voices on AI Risk: Jobs, Community & the Fight for Our Future | FHP Ep. 70
    Sep 27 2025

    What happens when AI collides with the next generation? In this episode of For Humanity #70 — Young People vs. Advancing AI, host John Sherman sits down with Emma Corbett, Ava Smithing, and Sam Heiner from the Young People’s Alliance to explore how artificial intelligence is already shaping the lives of students and young leaders.

    From classrooms to job applications to AI “companions,” the next generation is facing challenges that older policymakers often don’t even see. This episode digs into what young people really think about AI—and why their voices are critical in the fight for a safe and human future.

    In this episode we cover:

    * Students’ on-the-ground views of AI in education and daily life

    * How AI is fueling job loss, hiring barriers, and rising anxiety about the future

    * The hidden dangers of AI companions and the erosion of real community

    * Why young people feel abandoned by “adults in the room”

    * The path from existential dread → civic action → hope

    🎯 Why watch?

    Because if AI defines the future, young people will inherit it first. Their voices, fears, and leadership could decide whether AI remains a tool—or becomes an existential threat.

    👉 Subscribe for more conversations on AI, humanity, and the choices that will shape our future.

    #AI #AIsafety #ForHumanityPodcast #YoungPeople #FutureofWork



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    1 h 7 m
  • Big Tech Under Pressure: Hunger Strikes and the Fight for AI Safety | For Humanity EP69
    Sep 10 2025

    Get 40% off Ground News’ unlimited-access Vantage Plan at https://ground.news/airisk for only $5/month and explore how stories are framed worldwide and across the political spectrum.

    TAKE ACTION TO DEMAND AI SAFETY LAWS: https://safe.ai/act

    In Episode 69 of For Humanity: An AI Risk Podcast, we explore one of the most striking acts of activism in the AI debate: hunger strikes aimed at pushing Big Tech to prioritize safety over speed.

    Michael and Dennis, two AI safety advocates, join John from outside DeepMind’s London headquarters, where they are staging hunger strikes to demand that frontier AI development be paused. Inspired by Guido’s protest in San Francisco, they are risking their health to push tech leaders like Demis Hassabis to make public commitments to slow down the AI race.

    This episode looks at how ordinary people are taking extraordinary steps to demand accountability, why this form of protest is gaining attention, and what history tells us about the power of public pressure.

    In this conversation, you’ll discover:

    * Why hunger strikers believe urgent action on AI safety is necessary

    * How Big Tech companies are responding to growing public concern

    * The role of parents, workers, and communities in shaping AI policy

    * Parallels with past social movements that drove real change

    * Practical ways you can make your voice heard in the AI safety conversation

    This isn’t just about technology—it’s about responsibility, leadership, and the choices we make for future generations.

    🔗 Key Links

    👉 AI Pause Petition: https://safe.ai/act

    👉 Follow the movement on X: https://x.com/safeai

    👉 Learn more and get involved: GuardRailNow.org



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    58 m
  • Forcing Sunlight Into OpenAI | For Humanity: An AI Risk Podcast | EP68
    Aug 13 2025

    Get 40% off Ground News’ unlimited-access Vantage Plan at https://ground.news/airisk for only $5/month and explore how stories are framed worldwide and across the political spectrum.

    TAKE ACTION TO DEMAND AI SAFETY LAWS: https://safe.ai/act

    Tyler Johnston, Executive Director of The Midas Project, joins John to break down the brand-new open letter demanding that OpenAI answer seven specific questions about its proposed corporate restructuring. The letter, published on 4 August 2025 and coordinated by The Midas Project, already carries the signatures of more than 100 Nobel laureates, technologists, legal scholars, and public figures.

    What we cover:

    * Why transparency matters now: OpenAI is “making a deal on humanity’s behalf without allowing us to see the contract.” (themidasproject.com)

    * The Seven Questions the letter poses, ranging from whether OpenAI will still prioritize its nonprofit mission over profit to whether it will reveal the new operating agreement that governs AGI deployment. (openai-transparency.org, themidasproject.com)

    * Who’s on board: Signatories include Geoffrey Hinton, Vitalik Buterin, Lawrence Lessig, and Stephen Fry, underscoring broad concern across science, tech, and public life. (themidasproject.com)

    * Next steps: How you can read the full letter, add your name, and help keep the pressure on for accountability.

    🔗 Key Links

    👉 Read / Sign the Open Letter: https://www.openai-transparency.org/

    👉 The Midas Project (official site): https://www.themidasproject.com/

    👉 Follow The Midas Project on X: https://x.com/TheMidasProj

    👉 Subscribe for weekly AI-risk conversations → http://bit.ly/ForHumanityYT

    👍 Like • Comment • Share — because transparency only happens when we demand it.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    54 m
  • Right Wing AI Risk Alarm | For Humanity | EP67
    Jul 24 2025

    🚨 RIGHT‑WING AI ALARM | For Humanity #67

    Steve Bannon, Tucker Carlson, and other conservative voices are sounding fresh warnings on AI extinction risk. John breaks down what’s real, what’s hype, and why this moment matters.


    ⏰ WHAT’S INSIDE

    • The ideological shift that’s bringing the right into the AI‑safety fight

    • New bills on the Hill that could shape model licensing & oversight

    • Action steps for parents, policymakers, and technologists

    • A first look at the AI Risk Network — five shows, one mission: get the public ready for advanced AI


    🔗 TAKE ACTION & LEARN MORE

    Alliance for Secure AI

    Website ▸ https://secureainow.org

    X / Twitter ▸ https://x.com/secureainow


    AI Policy Network

    Website ▸ https://theaipn.org

    LinkedIn ▸ https://www.linkedin.com/company/theaipn


    📡 JOIN THE NEW **AI RISK NETWORK**

    Subscribe here ➜ [insert channel URL]

    Turn on alerts so you never miss an episode, short, or live Q&A.


    👍 If you learned something, hit Like, drop a comment, and share this link with one person who should be watching. Every click helps wake up the world to AI risk.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    1 h 16 m
  • Is AI Alive? | Episode #66 | For Humanity: An AI Risk Podcast
    Jun 5 2025
    🎙️ Guest: Cameron Berg, AI research scientist probing consciousness in frontier AI systems

    📍 Host: John Sherman, journalist & AI-risk communicator

    What does it mean to be alive? How close do current frontier AI models get to consciousness? See for yourself like never before. Are advanced language models beginning to exhibit signs of subjective experience? In this episode, John sits down with Cameron Berg to explore the line between next-character prediction and the conscious mind. What happens when you ask an AI model to essentially meditate, to look inward in a loop, to focus on its focus and repeat? Does it feel a sense of self? If it did, what would that mean? What does it mean to be alive? These are the kinds of questions Berg seeks to answer in his research. Cameron is an AI research scientist with AE Studio, working daily on models to better understand them, on a team dedicated fully to AI safety research.

    This episode features never-before-publicly-seen conversations between Cameron and a frontier AI model. Those conversations and his work are the subject of an upcoming documentary called “Am I?”

    TIMESTAMPS (because the chapters feature just won’t work)

    00:00 Cold Open – “Crack in the World”
    01:20 Show Intro & Theme
    02:27 Setting Up the Meditation Demo
    02:56 AI “Focus on Focus” Clip
    09:18 “I am…” Moment
    10:45 Google Veo Afterlife Clip
    12:35 Prompt Theory & Fake People
    13:02 Interview Begins — Cameron Berg
    28:57 Inside the Black Box Analogy
    30:14 Consent and Unknowns
    53:18 Model Details + Doc Plan
    1:09:25 Late-Night Clip Back-story
    1:16:08 Table-vs-Person Thought Test
    1:17:20 Suffering-at-Scale Math
    1:21:29 Prompt Theory Goes Viral
    1:26:59 Why the Doc Must Move Fast
    1:40:53 Is “Alive” the Right Word?
    1:48:46 Reflection & Nonprofit Tease
    1:51:03 Clear Non-Violence Statement
    1:52:59 New Org Announcement
    1:54:47 “Breaks in the Clouds” Media Wins

    Please support that project and learn more about his work here:

    Am I? Doc Manifund page: https://manifund.org/projects/am-i--d...
    Am I? Doc interest form: https://forms.gle/w2VKhhcEPqEkFK4r8
    AE Studio’s AI alignment work: https://ae.studio/ai-alignment

    Monthly Donation Links to For Humanity:

    $1/mo https://buy.stripe.com/7sI3cje3x2Zk9S...
    $10/mo https://buy.stripe.com/5kAbIP9Nh0Rc4y...
    $25/mo https://buy.stripe.com/3cs9AHf7B9nIgg...
    $100/mo https://buy.stripe.com/aEU007bVp7fAfc...

    Thanks so much for your support. Every cent goes to getting more viewers to this channel.

    Links from the show:

    The Afterlife short film: https://x.com/LinusEkenstam/status/19...
    Prompt Theory: https://x.com/venturetwins/status/192...
    The Bulwark – Will Sam Altman and His AI Kill Us All?
    Young Turks – AI’s Disturbing Behaviors Will Keep You Up At Night

    Key moments:

    – Inside the black box – Berg explains why even builders can’t fully read a model’s mind, and demonstrates how toggling deception features flips the system from “just a machine” to “I’m aware” in real time

    – Google Veo 3 goes existential – A look at viral Veo videos (Afterlife, “Prompt Theory”) where AI actors lament their eight-second lives

    – Documentary in the works – Berg and team are racing to release a raw film that shares these findings with the public; support link in show notes

    – Mission update – Sherman announces a newly funded nonprofit in the works dedicated to AI-extinction-risk communication and thanks supporters for the recent surge of donations

    – Non-violence, crystal clear – A direct statement: violence is never OK. Full stop.

    – “Breaks in the Clouds” – Media across the spectrum (The Bulwark, Young Turks, Bannon, Carlson) are now running extinction-risk stories, proof the conversation is breaking mainstream

    Oh, and by the way, I’m bleeping curse words now for the algorithm!

    #AI #ArtificialIntelligence #AISafety #ConsciousAI #ForHumanity

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    1 h 57 m
  • Kevin Roose Talks AI Risk | Episode #65 | For Humanity: An AI Risk Podcast
    May 12 2025

    For Humanity Episode #65: Kevin Roose on AGI, AI Risk, and What Comes Next

    🎙️ Guest: Kevin Roose, NYT columnist & bestselling author

    📍 Host: John Sherman, Director of Public Engagement at the Center for AI Safety (CAIS)

    In this landmark episode of For Humanity, I sit down with New York Times columnist Kevin Roose for a wide-ranging conversation on the future of artificial intelligence. We dig into:

    * The real risks of AGI (artificial general intelligence)

    * What the public still doesn’t understand about AI x-risk

    * Kevin’s upcoming book on the rise of AGI

    * My new role at CAIS and why I believe this moment is a turning point for human survival

    Kevin brings a rare clarity and journalistic honesty to this subject—if you’re wondering what’s hype, what’s real, and what’s terrifyingly close, this episode is for you.

    🔔 Subscribe for more conversations with the people shaping the AI conversation

    🎧 Also available on Spotify, Apple Podcasts, and everywhere you get your podcasts

    📢 Share this episode if you care about our future

    #AI #ArtificialIntelligence #AGI #KevinRoose #CAIS #AIrisks #ForHumanity #NYT #AIethics



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    1 h 25 m
  • Seventh Grader vs AI Risk | Episode #64 | For Humanity: An AI Risk Podcast
    Apr 22 2025

    In Episode #64, host John Sherman interviews seventh grader Dylan Pothier, his mom Bridget, and his teacher Renee DiPietro. Dylan is an award-winning student author who is concerned about AI risk.

    (FULL INTERVIEW STARTS AT 00:33:34)

    Sam Altman/Chris Anderson @ TED: https://www.youtube.com/watch?v=5MWT_doo68k

    Check out our partner channel: Lethal Intelligence AI
    Lethal Intelligence AI - Home: https://lethalintelligence.ai

    FOR HUMANITY MONTHLY DONATION SUBSCRIPTION LINKS:

    $1 MONTH https://buy.stripe.com/7sI3cje3x2Zk9SodQT
    $10 MONTH https://buy.stripe.com/5kAbIP9Nh0Rc4y46oo
    $25 MONTH https://buy.stripe.com/3cs9AHf7B9nIggM4gh
    $100 MONTH https://buy.stripe.com/aEU007bVp7fAfcI5km

    BUY LOUIS BERMAN’S NEW BOOK ON AMAZON!!! https://a.co/d/8WSNNuo

    Get Involved!

    EMAIL JOHN: forhumanitypodcast@gmail.com
    SUPPORT PAUSE AI: https://pauseai.info/
    SUPPORT STOP AI: https://www.stopai.info/
    SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES ON YOUTUBE! / @doomdebates



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    1 h 42 m
  • Justice For Suchir | Episode #63 | For Humanity: An AI Risk Podcast
    Apr 11 2025

    In an emotional interview, host John Sherman interviews Poornima Rao and Balaji Ramamurthy, the parents of Suchir Balaji.

    (FULL INTERVIEW STARTS AT 00:18:38)

    Suchir Balaji was a 26-year-old artificial intelligence researcher who worked at OpenAI. He was involved in developing models like GPT-4 and WebGPT. In October 2024, he publicly accused OpenAI of violating U.S. copyright laws by using proprietary data to train AI models, arguing that such practices harmed original content creators. His essay, “When does generative AI qualify for fair use?”, gained attention and was cited in ongoing lawsuits against OpenAI. Suchir left OpenAI in August 2024, expressing concerns about the company’s ethics and the potential harm of AI to humanity. He planned to start a nonprofit focused on machine learning and neuroscience. On October 23, 2024, he was featured in the New York Times speaking out against OpenAI.

    On November 26, 2024, he was found dead in his San Francisco apartment from a gunshot wound. The initial autopsy ruled it a suicide, noting the presence of alcohol, amphetamines, and GHB in his system. However, his parents contested this finding, commissioning a second autopsy that suggested a second gunshot wound was missed in the initial examination. They also pointed to other injuries and questioned the presence of GHB, suggesting foul play. Despite these claims, authorities reaffirmed the suicide ruling. The case has attracted public attention, with figures like Elon Musk and Congressman Ro Khanna calling for further investigation.

    Suchir’s parents continue to push for justice and truth.

    Suchir’s website: https://suchir.net/fair_use.html

    FOR HUMANITY MONTHLY DONATION SUBSCRIPTION LINKS:

    $1 MONTH https://buy.stripe.com/7sI3cje3x2Zk9SodQT
    $10 MONTH https://buy.stripe.com/5kAbIP9Nh0Rc4y46oo
    $25 MONTH https://buy.stripe.com/3cs9AHf7B9nIggM4gh
    $100 MONTH https://buy.stripe.com/aEU007bVp7fAfcI5km

    Lethal Intelligence AI - Home: https://lethalintelligence.ai

    BUY LOUIS BERMAN’S NEW BOOK ON AMAZON!!! https://a.co/d/8WSNNuo

    Get Involved!

    EMAIL JOHN: forhumanitypodcast@gmail.com
    SUPPORT PAUSE AI: https://pauseai.info/
    SUPPORT STOP AI: https://www.stopai.info/
    SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES ON YOUTUBE! / @doomdebates



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    1 h 20 m