The People's AI: The Decentralized AI Podcast

By: Jeff Wilser

Who will own the future of AI? The giants of Big Tech? Maybe. But what if the people could own AI, not the Big Tech oligarchs? This is the promise of Decentralized AI. And this is the podcast for in-depth conversations on topics like decentralized data markets, on-chain AI agents, decentralized AI compute (DePIN), AI DAOs, and crypto + AI. Hosted by Jeff Wilser, veteran tech journalist (WIRED, TIME, CoinDesk), host of the "AI-Curious" podcast, and lead producer of Consensus' "AI Summit." Season 3, presented by Vana.

© 2025 The People's AI: The Decentralized AI Podcast
Episodes
  • Generation Generative: Raising Kids with AI “Friends” in a World of Data Extraction and Bias
    Jan 7 2026

    What happens when a “kid-friendly” AI bedtime story turns racy—inside your own car?

    In this episode of The People’s AI (presented by the Vana Foundation), we explore “Generation Generative”: how kids are already using AI, what the biggest risks really are (from inappropriate content to emotional manipulation), and what practical parenting looks like when the tech is everywhere—from smart speakers to AI companions.

    We hear from Dr. Mhairi Aitken (The Alan Turing Institute) on why children’s voices are largely missing from AI governance, Dr. Sonia Tiwari on smart toys and early-childhood AI characters, and Dr. Michael Robb (Common Sense Media) on what his research is finding about teens and AI companions—plus a grounded, parent-focused conversation with journalist (and parent) Kate Morgan.

    Takeaways

    • Kids often understand AI faster—and more ethically—than adults assume (especially around fairness and bias).
    • The “AI companion” category is different from general chatbots: it’s designed to feel personal, and that can be emotionally sticky (and potentially manipulative).
    • Guardrails are inconsistent, age assurance is weak, and “safe by default” still isn’t a safe assumption.
    • The long game isn’t just content risk—it’s intimacy + data: systems that learn a child’s inner life over years may shape identity, relationships, and worldview.
    • Parents don’t need perfection—but they do need ongoing, low-drama conversations and some shared rules.

    Guests

    • Dr. Michael Robb — Head of Research, Common Sense
    • https://www.commonsensemedia.org/bio/michael-robb
    • Dr. Sonia Tiwari — Children’s Media Researcher
    • https://www.linkedin.com/in/soniastic/
    • Dr. Mhairi Aitken — Senior Ethics Fellow, The Alan Turing Institute
    • https://www.turing.ac.uk/people/research-fellows/mhairi-aitken
    • Kate Morgan — Journalist

    Presented by the Vana Foundation

    Vana supports a new internet rooted in data sovereignty and user ownership—so individuals (not corporations) can govern their data and share in the value it creates. Learn more at vana.org.

    51 m
  • AI and Life After Death: Griefbots, Digital Ghosts, and the New Afterlife Economy
    Dec 17 2025

    Can AI help us grieve, or does it blur the line between comfort and delusion in ways we’re not ready for?

    In this episode of The People’s AI, we explore the rise of grief tech: “griefbots,” AI avatars, and “digital ghosts” designed to simulate conversations with deceased loved ones. We start with Justin Harrison, founder of You, Only Virtual, whose near-fatal motorcycle accident and his mother’s terminal cancer diagnosis led him to build a “Versona,” a virtual version of a person’s persona. We dig into how these systems are trained from real-world data, why “goosebump moments” matter more than perfect realism, and what it means when AI inevitably glitches or hallucinates.

    Then we zoom out with Jed Brubaker, director of The Identity Lab at CU Boulder, to look at digital legacy and the design principles that should govern grief tech, including avoiding push notifications, building “sunsets,” and confronting the risk of a “second loss” if a platform fails.

    Finally, we speak with Dr. Elaine Kasket, cyberpsychologist and counselling psychologist, about the psychological reality that grief is idiosyncratic and not scalable, the dangers of grief policing, and the deeper question beneath it all: who controls our data, identity, and access to memories after death.

    In this episode

    • Justin Harrison’s origin story and the creation of a “Versona”
    • What griefbots are, how they’re trained, and why fidelity is hard
    • The ethics: dependence, delusion risk, and “second loss”
    • Consent, rights, and the economics of data after death
    • Cultural attitudes toward death and why Western discomfort shapes the debate
    • A provocative question: if relationships persist digitally, what does “dead” even mean?

    Presented by the Vana Foundation. Learn more at vana.org.

    The People’s AI is presented by Vana, which is supporting the creation of a new internet rooted in data sovereignty and user ownership. Vana’s mission is to build a decentralized data ecosystem where individuals—not corporations—govern their own data and share in the value it creates.

    Learn more at vana.org.

    53 m
  • The Invisible (and Underpaid) Data Workers Behind the "Magic" of AI
    Dec 3 2025

    Who are the invisible human data-workers behind the “magic” of AI, and what does their work really look like?

In this episode of The People's AI, presented by Vana, we pull back the curtain on AI data labeling, ghost work, and content moderation with former data worker and organizer Krystal Kauffman and AI researcher Graham Morehead. We hear how low-paid workers around the world train large language models, power RLHF safety systems, and scrub the worst content off the internet so the rest of us never see it.

    We trace the journey from early data labeling projects and Amazon Mechanical Turk to today’s global workforce of AI data workers in the US, Latin America, Kenya, India, and beyond. We talk about trauma, below-minimum-wage pay, and the ethical gray zones of labeling surveillance imagery and moderating violence. We also explore how workers are organizing through projects like the Data Workers Inquiry at the Distributed AI Research Institute (DAIR), and why data sovereignty and user-owned data are part of the long-term solution.

    Along the way, we ask a simple question with complicated answers: if AI depends on human labor, what do those humans deserve?

    Timestamps:

    • 0:02 – Krystal’s life as an AI data worker and the “10 cents a minute” rule
    • 2:40 – What is data labeling, and why AI can’t exist without it
    • 6:20 – RLHF, safety, and the hidden workforce grading AI outputs
    • 9:53 – Amazon Mechanical Turk and building Alexa, image datasets, and more
    • 14:42 – Labeling border crossings and the ethics of unknowable end uses
    • 25:00 – Kenyan content moderators, trauma, and extreme exploitation
    • 32:09 – Turker organizing, Turker-run ratings, and early resistance
    • 33:12 – DAIR, the Data Workers Inquiry, and workers investigating their own workplaces
    • 36:43 – Unionization, political pressure, and reasons for hope
    • 41:05 – Why humans will keep “labeling” AI in everyday life for years to come

    The People’s AI is presented by Vana, which is supporting the creation of a new internet rooted in data sovereignty and user ownership. Vana’s mission is to build a decentralized data ecosystem where individuals—not corporations—govern their own data and share in the value it creates.

    Learn more at vana.org.

    45 m