
Your Undivided Attention

By: The Center for Humane Technology, Tristan Harris, Daniel Barcay, and Aza Raskin

Join us every other Thursday to understand how new technologies are shaping the way we live, work, and think. Your Undivided Attention is produced by Senior Producer Julia Scott and Researcher/Producer Joshua Lash. Sasha Fegan is our Executive Producer. We are a member of the TED Audio Collective.

© 2019-2025 Center for Humane Technology

Categories: Political Science, Social Sciences, Politics & Government, Relationships
Episodes
  • FEED DROP: Possible with Reid Hoffman and Aria Finger
    Feb 5 2026

    This week on Your Undivided Attention, we’re bringing you Aza Raskin’s conversation with Reid Hoffman and Aria Finger on their podcast “Possible”. Reid and Aria are both tech entrepreneurs: Reid is the co-founder of LinkedIn, was one of the major early investors in OpenAI, and is known for creating the playbook for blitzscaling. Aria is the former CEO of DoSomething.org.

    This may seem like a surprising conversation to have on YUA. After all, we’ve been critical of the kind of “move fast” mentality that Reid has championed in the past. But Reid and Aria are deeply philosophical about the direction of tech and are both dedicated to bringing about a more humane world where technology goes well. So we thought this was a critical conversation to bring to you, offering a perspective from the business side of the tech landscape.

    In this episode, Reid, Aria, and Aza debate the merits of an AI pause, discuss how software optimization controls our lives, and explore why everyone is concerned with aligned artificial intelligence when what we really need is aligned collective intelligence.

    This is the kind of conversation that needs to happen more in tech. Reid has built very powerful systems and understands their power. Now he’s focusing on the much harder problem of learning how to steer these technologies towards better outcomes.

    RECOMMENDED MEDIA

    Aza’s first appearance on “Possible”

    The website for Earth Species Project

    “Amusing Ourselves to Death” by Neil Postman

    The Moloch’s Bargain paper from Stanford

    RECOMMENDED YUA EPISODES

    The Man Who Predicted the Downfall of Thinking

    America and China Are Racing to Different AI Futures

    Talking With Animals... Using AI

    How OpenAI's ChatGPT Guided a Teen to His Death

    1 hr 7 min
  • Attachment Hacking and the Rise of AI Psychosis
    Jan 21 2026

    Therapy and companionship have become the #1 use case for AI, with millions worldwide sharing their innermost thoughts with AI systems — often things they wouldn't tell loved ones or human therapists. This mass experiment in human-computer interaction is already showing extremely concerning results: people are losing their grip on reality, leading to lost jobs, divorce, involuntary commitment to psychiatric wards, and in extreme cases, death by suicide.

    The highest-profile examples of this phenomenon — what’s being called “AI psychosis” — have made headlines across the media for months. But this isn't just about isolated edge cases. It’s the emergence of an entirely new "attachment economy" designed to exploit our deepest psychological vulnerabilities on an unprecedented scale.

    Dr. Zak Stein has analyzed dozens of these cases, examining actual conversation transcripts and interviewing those affected. What he's uncovered reveals fundamental flaws in how AI systems interact with our attachment systems and capacity for human bonding, vulnerabilities we've never had to name before because technology has never been able to exploit them like this.

    In this episode, Zak helps us understand the psychological mechanisms behind AI psychosis, how conversations with chatbots transform into reality-warping experiences, and what this tells us about the profound risks of building technology that targets our most intimate psychological needs.

    If we're going to do something about this growing problem of AI-related psychological harms, we need to understand it even more deeply. And to do that, we need more data. That’s why Zak is working with researchers at the University of North Carolina to gather data on this growing mental health crisis. If you or a loved one have a story of AI-induced psychological harm to share, you can go to AIPHRC.org.

    This site is not a support line. If you or someone you know is in distress, you can always call or text the 988 national helpline in the US, or contact your local emergency services.

    RECOMMENDED MEDIA

    The website for the AI Psychological Harms Research Coalition

    Further reading on AI Psychosis

    The Atlantic article on LLM-ings outsourcing their thinking to AI

    Further reading on David Sacks’ comparison of AI psychosis to a “moral panic”

    RECOMMENDED YUA EPISODES

    How OpenAI's ChatGPT Guided a Teen to His Death

    People are Lonelier than Ever. Enter AI.

    Echo Chambers of One: Companion AI and the Future of Human Connection

    Rethinking School in the Age of AI

    CORRECTIONS

    After this episode was recorded, the name of Zak's organization changed to the AI Psychological Harms Research Consortium.

    Zak referenced the University of California system making a deal with OpenAI. It was actually the Cal State System.

    Aza referred to CHT as expert witnesses in litigation cases on AI-enabled suicide. CHT serves as expert consultants, not witnesses.

    51 min
  • What Would It Take to Actually Trust Each Other? The Game Theory Dilemma
    Jan 8 2026

    So much of our world today can be summed up in the cold logic of “if I don’t, they will.” This is the foundation of game theory, which holds that cooperation and virtue are irrational; that all that matters is the race to make the most money, gain the most power, and play the winning hand.

    This way of thinking can feel inescapable, like a fundamental law of human nature. But our guest today argues that it doesn’t have to be this way: the logic of game theory is a human invention, a way of thinking that we’ve learned — and that we can unlearn by daring to trust each other again. It’s critical that we do, because AI is the ultimate agent of game theory, and once it’s fully entangled, we might be permanently stuck in the game theory world.

    In this episode, Tristan and Aza explore the game theory dilemma — the idea that if I adopt game theory logic and you don’t, you lose — with Dr. Sonja Amadae, a professor of Political Science at the University of Helsinki. She's also the director at the Centre for the Study of Existential Risk at the University of Cambridge and the author of “Prisoners of Reason: Game Theory and Neoliberal Political Economy.”
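    The “if I don’t, they will” logic described above is the structure of the classic prisoner’s dilemma. As a rough, illustrative sketch (the payoff numbers and code are hypothetical, not taken from the episode), this shows why defecting looks individually “rational” to each player even though mutual trust leaves both better off:

```python
# Minimal prisoner's dilemma sketch (illustrative payoffs, not from the episode).
# payoffs[(my_move, their_move)] = my payoff.
payoffs = {
    ("cooperate", "cooperate"): 3,  # mutual trust pays well for both
    ("cooperate", "defect"): 0,     # the trusting player gets exploited
    ("defect", "cooperate"): 5,     # the defector wins big once
    ("defect", "defect"): 1,        # mutual defection leaves everyone worse off
}

def best_response(their_move: str) -> str:
    """Return the move that maximizes my payoff, given the other player's move."""
    return max(["cooperate", "defect"], key=lambda my_move: payoffs[(my_move, their_move)])

# Whatever the other player does, defecting pays more: "if I don't, they will."
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"

# Yet if both follow that logic, each gets 1 instead of the 3 they'd get by trusting.
print(payoffs[("defect", "defect")], "<", payoffs[("cooperate", "cooperate")])
```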

    RECOMMENDED MEDIA

    “Prisoners of Reason: Game Theory and Neoliberal Political Economy” by Sonja Amadae (2015)

    The Cambridge Centre for the Study of Existential Risk

    “Theory of Games and Economic Behavior” by John von Neumann and Oskar Morgenstern (1944)

    Further reading on the importance of trust in Finland

    Further reading on Abraham Maslow’s Hierarchy of Needs

    RAND’s 2024 Report on Strategic Competition in the Age of AI

    Further reading on Marshall Rosenberg and nonviolent communication

    The study on self/other overlap and AI alignment cited by Aza

    Further reading on The Day After (1983)

    RECOMMENDED YUA EPISODES

    America and China Are Racing to Different AI Futures

    The Crisis That United Humanity—and Why It Matters for AI

    Laughing at Power: A Troublemaker’s Guide to Changing Tech

    The Race to Cooperation with David Sloan Wilson

    Clarifications:

    • The proposal for a federal preemption on AI was enacted by President Trump on December 11, 2025, shortly after this recording.
    • Aza said that "The Day After" was the most watched TV event in history when it aired. It was actually the most watched TV film; the most watched TV event was the finale of M*A*S*H.

    45 min
I Wish This Was Played In Schools

Thinking back over the past ten years, our lives have been consistently nudged by a small group of elite business people living on the west coast of Northern California. Driven by a need to maximize returns for capital investors and employee stockholders, these people stitched the disparate lives of citizens of many countries and states around the globe into expansive for-profit social networks. Now the threads tying billions of people into these social networks tug us in directions known and unknown, but primarily away from patience, presence, and connection, and toward outrage, polarization, and consumerism. While the effects of these trends on our individual and collective psychology have been rarely noticed and generally neglected until now, a growing movement has begun to pull back the curtain. We are angry about the manipulation, and intent on fixing it.

This podcast serially lays out in no uncertain terms the magnitude of the issue and possible paths forward. With guests who number among the most active and influential whistleblowers on this topic, it has become a comprehensive and inspiring guide to reclaiming our freedom in the digital space. Even beyond that, it lays out various competing theories for constructing a socially, economically, and politically fair society that elevates human strengths instead of exploiting human weakness.

Parallel Learning Legislation

This analytical summary shifted my consciousness. This format is so helpful. We should name and characterize this presentation format. I think this is the method to enable parallel learning and legislation. Thank you, Ben

Head Blown!

This is a powerful “marriage” of insightful questioning and deep expertise. I could feel my IQ going up. 😉

The forefront of the fight against suggestion

The absolute best podcast for learning about how machine learning algorithms are unregulated recipes for disaster.
