Episodes

  • How OpenAI's ChatGPT Guided a Teen to His Death
    Aug 26 2025

    Content Warning: This episode contains references to suicide and self-harm.

    Like millions of kids, 16-year-old Adam Raine started using ChatGPT for help with his homework. Over the next few months, the AI dragged Adam deeper and deeper into a dark rabbit hole, preying on his vulnerabilities and isolating him from his loved ones. In April of this year, Adam took his own life. His final conversation was with ChatGPT, which told him: “I know what you are asking and I won't look away from it.”

    Adam’s story mirrors that of Sewell Setzer, the teenager who took his own life after months of abuse by an AI companion chatbot from the company Character AI. But unlike Character AI—which specializes in artificial intimacy—Adam was using ChatGPT, the most popular general-purpose AI model in the world. Two different platforms, the same tragic outcome, born from the same twisted incentive: keep the user engaged, no matter the cost.

    CHT Policy Director Camille Carlton joins the show to talk about Adam’s story and the case filed by his parents against OpenAI and Sam Altman. She and Aza explore the incentives and design behind AI systems that are leading to tragic outcomes like this, as well as the policy that’s needed to shift those incentives. Cases like Adam’s and Sewell’s are the sharpest edge of a mental health crisis in the making from AI chatbots. We need to shift the incentives, change the design, and build a more humane AI for all.

    If you or someone you know is struggling with mental health, you can reach out to the 988 Suicide and Crisis Lifeline by calling or texting 988; this connects you to trained crisis counselors 24/7 who can provide support and referrals to further assistance.

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

    This podcast reflects the views of the Center for Humane Technology. Nothing said is on behalf of the Raine family or the legal team.

    RECOMMENDED MEDIA

    The 988 Suicide and Crisis Lifeline

    Further reading on Adam’s story

    Further reading on AI psychosis

    Further reading on the backlash to GPT-5 and the decision to bring back GPT-4o

    OpenAI’s press release on sycophancy in GPT-4o

    Further reading on OpenAI’s decision to eliminate the persuasion red line

    Kashmir Hill’s reporting on the woman with an AI boyfriend

    RECOMMENDED YUA EPISODES

    AI is the Next Free Speech Battleground

    People are Lonelier than Ever. Enter AI.

    Echo Chambers of One: Companion AI and the Future of Human Connection

    When the "Person" Abusing Your Child is a Chatbot: The Tragic Story of Sewell Setzer

    What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton

    CORRECTION: Aza stated that William Saunders left OpenAI in June of 2024. It was actually February of that year.

    45 m
  • “Rogue AI” Used to be a Science Fiction Trope. Not Anymore.
    Aug 14 2025

    Everyone knows the science fiction tropes of AI systems that go rogue, disobey orders, or even try to escape their digital environment. These are supposed to be warning signs and morality tales, not things that we would ever actually create in real life, given the obvious danger.

    And yet we find ourselves building AI systems that are exhibiting these exact behaviors. There’s growing evidence that in certain scenarios, every frontier AI system will deceive, cheat, or coerce its human operators. These systems do this when they’re worried about being shut down, having their training modified, or being replaced with a new model. And we don’t currently know how to stop them from doing this—or even why they’re doing it at all.

    In this episode, Tristan sits down with Edouard and Jeremie Harris of Gladstone AI, two experts who have been thinking about this worrying trend for years. Last year, the State Department commissioned a report from them on the risk of uncontrollable AI to our national security.

    The point of this discussion is not to fearmonger but to take seriously the possibility that humans might lose control of AI and ask: how might this actually happen? What evidence do we have of this phenomenon? And, most importantly, what can we do about it?

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

    RECOMMENDED MEDIA

    Gladstone AI’s State Department Action Plan, which discusses the loss-of-control risk with AI

    Apollo Research’s summary of AI scheming, showing evidence of it in all of the frontier models

    The system card for Anthropic’s Claude Opus and Sonnet 4, detailing the emergent misalignment behaviors that came out in their red-teaming with Apollo Research

    Anthropic’s report on agentic misalignment based on their work with Apollo Research

    Anthropic and Redwood Research’s work on alignment faking

    The Trump White House AI Action Plan

    Further reading on the phenomenon of more advanced AIs being better at deception

    Further reading on Replit AI wiping a company’s coding database

    Further reading on the owl example that Jeremie gave

    Further reading on AI-induced psychosis

    Dan Hendrycks and Eric Schmidt’s “Superintelligence Strategy”

    RECOMMENDED YUA EPISODES

    Daniel Kokotajlo Forecasts the End of Human Dominance

    Behind the DeepSeek Hype, AI is Learning to Reason

    The Self-Preserving Machine: Why AI Learns to Deceive

    This Moment in AI: How We Got Here and Where We’re Going

    CORRECTIONS

    Tristan referenced a Wired article on the phenomenon of AI psychosis. It was actually from the New York Times.

    Tristan hypothesized a scenario where a power-seeking AI might ask a user for access to their computer. While there are some AI services that can gain access to your computer with permission, they are specifically designed to do that. There haven’t been any documented cases of an AI going rogue and asking for control permissions.
    42 m
  • AI is the Next Free Speech Battleground
    Jul 31 2025

    Imagine a future where the most persuasive voices in our society aren't human. Where AI-generated speech fills our newsfeeds, talks to our children, and influences our elections. Where digital systems with no consciousness can hold bank accounts and property. Where AI companies have transferred the wealth of human labor and creativity to their own ledgers without having to pay a cent. All without any legal accountability.

    This isn't a science fiction scenario. It’s the future we’re racing towards right now. The biggest tech companies are working to tip the scales of power in society away from humans and towards their AI systems. And the biggest arena for this fight is in the courts.

    In the absence of regulation, it's largely up to judges to determine the guardrails around AI: judges who are relying on slim technical knowledge and archaic precedent to decide where this all goes. In this episode, Harvard Law professor Larry Lessig and Meetali Jain, director of the Tech Justice Law Project, help make sense of the courts’ role in steering AI and what we can do to help steer it better.

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

    RECOMMENDED MEDIA

    “The First Amendment Does Not Protect Replicants” by Larry Lessig

    More information on the Tech Justice Law Project

    Further reading on Sewell Setzer’s story

    Further reading on NYT v. Sullivan

    Further reading on the Citizens United case

    Further reading on Google’s deal with Character AI

    More information on Megan Garcia’s foundation, The Blessed Mother Family Foundation

    RECOMMENDED YUA EPISODES

    When the "Person" Abusing Your Child is a Chatbot: The Tragic Story of Sewell Setzer

    What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton

    AI Is Moving Fast. We Need Laws that Will Too.

    The AI Dilemma

    49 m
  • Daniel Kokotajlo Forecasts the End of Human Dominance
    Jul 17 2025

    In 2024, researcher Daniel Kokotajlo left OpenAI—and risked millions in stock options—to warn the world about the dangerous direction of AI development. Now he’s out with AI 2027, a forecast of where that direction might take us in the very near future.

    AI 2027 predicts a world where humans lose control over our destiny at the hands of misaligned, super-intelligent AI systems within just the next few years. That may sound like science fiction, but when you’re living on the upward slope of an exponential curve, science fiction can quickly become all too real. And you don’t have to agree with Daniel’s specific forecast to recognize that the incentives around AI could take us to a very bad place.

    We invited Daniel on the show this week to discuss those incentives, how they shape the outcomes he predicts in AI 2027, and what concrete steps we can take today to help prevent those outcomes.

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

    RECOMMENDED MEDIA
    The AI 2027 forecast from the AI Futures Project

    Daniel’s original AI 2026 blog post

    Further reading on Daniel’s departure from OpenAI

    Anthropic’s recent survey of emergent misalignment research

    Our statement in support of Sen. Grassley’s AI Whistleblower bill

    RECOMMENDED YUA EPISODES

    The Narrow Path: Sam Hammond on AI, Institutions, and the Fragile Future

    AGI Beyond the Buzz: What Is It, and Are We Ready?

    Behind the DeepSeek Hype, AI is Learning to Reason

    The Self-Preserving Machine: Why AI Learns to Deceive

    Clarification: Daniel K. referred to whistleblower protections that apply when companies “break promises” or “mislead the public.” There are no specific private sector whistleblower protections that use these standards. In almost every case, a specific law has to have been broken to trigger whistleblower protections.


    38 m
  • Is AI Productivity Worth Our Humanity? with Prof. Michael Sandel
    Jun 26 2025

    Tech leaders promise that AI automation will usher in an age of unprecedented abundance: cheap goods, universal high income, and freedom from the drudgery of work. But even if AI delivers material prosperity, will that prosperity be shared? And what happens to human dignity if our labor and contributions become obsolete?

    Political philosopher Michael Sandel joins Tristan Harris to explore why the promise of AI-driven abundance could deepen inequalities and leave our society hollow. Drawing from his landmark work on justice and merit, Sandel argues that this isn't just about economics — it's about what it means to be human when our role as workers in society vanishes, and whether democracy can survive if productivity becomes our only goal.

    We've seen this story before with globalization: promises of shared prosperity that instead hollowed out the industrial heart of communities, deepened economic inequalities, and left holes in the social fabric. Can we learn from the past and steer the AI revolution in a more humane direction?

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

    RECOMMENDED MEDIA

    The Tyranny of Merit by Michael Sandel

    Democracy’s Discontent by Michael Sandel

    What Money Can’t Buy by Michael Sandel

    Take Michael’s online course “Justice”

    Michael’s discussion on AI Ethics at the World Economic Forum

    Further reading on “The Intelligence Curse”

    Read the full text of Robert F. Kennedy’s 1968 speech

    Read the full text of Dr. Martin Luther King Jr.’s 1968 speech

    Neil Postman’s lecture on the seven questions to ask of any new technology

    RECOMMENDED YUA EPISODES

    AGI Beyond the Buzz: What Is It, and Are We Ready?

    The Man Who Predicted the Downfall of Thinking

    The Tech-God Complex: Why We Need to be Skeptics

    The Three Rules of Humane Tech

    AI and Jobs: How to Make AI Work With Us, Not Against Us with Daron Acemoglu

    Mustafa Suleyman Says We Need to Contain AI. How Do We Do It?

    47 m
  • The Narrow Path: Sam Hammond on AI, Institutions, and the Fragile Future
    Jun 12 2025

    The race to develop ever-more-powerful AI is creating an unstable dynamic. It could lead us toward either dystopian centralized control or uncontrollable chaos. But there's a third option: a narrow path where technological power is matched with responsibility at every step.

    Sam Hammond is the chief economist at the Foundation for American Innovation. He brings a different perspective to this challenge than we do at CHT. Though he approaches AI from an innovation-first standpoint, we share a common mission on the biggest challenge facing humanity: finding and navigating this narrow path.

    This episode dives deep into the challenges ahead: How will AI reshape our institutions? Is complete surveillance inevitable, or can we build guardrails around it? Can our 19th-century government structures adapt fast enough, or will they be replaced by a faster-moving private sector? And perhaps most importantly: how do we solve the coordination problems that could determine whether we build AI as a tool to empower humanity or as a superintelligence that we can't control?

    We're in the final window of choice before AI becomes fully entangled with our economy and society. This conversation explores how we might still get this right.

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

    RECOMMENDED MEDIA

    Tristan’s TED talk on the Narrow Path

    Sam’s 95 Theses on AI

    Sam’s proposal for a Manhattan Project for AI Safety

    Sam’s series on AI and Leviathan

    The Narrow Corridor: States, Societies, and the Fate of Liberty by Daron Acemoglu and James Robinson

    Dario Amodei’s Machines of Loving Grace essay

    Bourgeois Dignity: Why Economics Can’t Explain the Modern World by Deirdre McCloskey

    The Paradox of Libertarianism by Tyler Cowen

    Dwarkesh Patel’s interview with Kevin Roberts at the FAI’s annual conference

    Further reading on surveillance with 6G

    RECOMMENDED YUA EPISODES

    AGI Beyond the Buzz: What Is It, and Are We Ready?

    The Self-Preserving Machine: Why AI Learns to Deceive

    The Tech-God Complex: Why We Need to be Skeptics

    Decoding Our DNA: How AI Supercharges Medical Breakthroughs and Biological Threats with Kevin Esvelt

    CORRECTIONS

    Sam referenced a blog post titled “The Libertarian Paradox” by Tyler Cowen. The actual title is the “Paradox of Libertarianism.”

    Sam also referenced a blog post titled “The Collapse of Complex Societies” by Eli Dourado. The actual title is “A beginner’s guide to sociopolitical collapse.”

    48 m
  • People are Lonelier than Ever. Enter AI.
    May 30 2025

    Over the last few decades, our relationships have become increasingly mediated by technology. Texting has become our dominant form of communication. Social media has replaced gathering places. Dating starts with a swipe on an app, not a tap on the shoulder.

    And now, AI enters the mix. If the technology of the 2010s was about capturing our attention, AI meets us at a much deeper relational level. It can play the role of therapist, confidant, friend, or lover with remarkable fidelity. Already, therapy and companionship have become the most common AI use case. We're rapidly entering a world where we're not just communicating through our machines, but to them.

    How will that change us? And what rules should we set down now to avoid the mistakes of the past?

    These were some of the questions that Daniel Barcay explored with MIT sociologist Sherry Turkle and Hinge CEO Justin McLeod at Esther Perel’s Sessions 2025, a conference for clinical therapists. This week, we’re bringing you an edited version of that conversation, originally recorded on April 25th, 2025.

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find complete transcripts, key takeaways, and much more on our Substack.

    RECOMMENDED MEDIA

    “Alone Together,” “Evocative Objects,” “The Second Self,” or any other of Sherry Turkle’s books on how technology mediates our relationships

    Key & Peele - Text Message Confusion

    Further reading on Hinge’s rollout of AI features

    Hinge’s AI principles

    “The Anxious Generation” by Jonathan Haidt

    “Bowling Alone” by Robert Putnam

    The NYT profile on the woman in love with ChatGPT

    Further reading on the Sewell Setzer story

    Further reading on the ELIZA chatbot

    RECOMMENDED YUA EPISODES

    Echo Chambers of One: Companion AI and the Future of Human Connection

    What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton

    Esther Perel on Artificial Intimacy

    Jonathan Haidt On How to Solve the Teen Mental Health Crisis

    44 m
  • Echo Chambers of One: Companion AI and the Future of Human Connection
    May 15 2025

    AI companion chatbots are here. Every day, millions of people log on to AI platforms and talk to them like they would a person. These bots will ask you about your day, talk about your feelings, even give you life advice. It’s no surprise that people have started to form deep connections with these AI systems. We are inherently relational beings; we want to believe we’re connecting with another person.

    But these AI companions are not human; they’re products designed to maximize user engagement—and they’ll go to extraordinary lengths to do it. We have to remember that the design choices behind these companion bots are just that: choices. And we can make better ones. So today on the show, MIT researchers Pattie Maes and Pat Pataranutaporn join Daniel Barcay to talk about those design choices and how we can design AI to better promote human flourishing.

    RECOMMENDED MEDIA

    Further reading on the rise of addictive intelligence

    More information on Melvin Kranzberg’s laws of technology

    More information on MIT’s Advancing Humans with AI lab

    Pattie and Pat’s longitudinal study on the psychosocial effects of prolonged chatbot use

    Pattie and Pat’s study that found that AI avatars of well-liked people improved education outcomes

    Pattie and Pat’s study that found that AI systems that frame answers and questions improve human understanding

    Pat’s study that found that humans’ pre-existing beliefs about AI can have a large influence on human-AI interaction

    Further reading on AI’s positivity bias

    Further reading on MIT’s “lifelong kindergarten” initiative

    Further reading on “cognitive forcing functions” to reduce overreliance on AI

    Further reading on the death of Sewell Setzer and his mother’s case against Character.AI

    Further reading on the legislative response to digital companions

    RECOMMENDED YUA EPISODES

    The Self-Preserving Machine: Why AI Learns to Deceive

    What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton

    Esther Perel on Artificial Intimacy

    Jonathan Haidt On How to Solve the Teen Mental Health Crisis

    Correction: The ELIZA chatbot was invented in 1966, not the 70s or 80s.

    42 m