Is AI as Safe As We Think It Is?
In this episode, Jonathan sits down with clinical psychologist and AI ethics expert Dr. Sonja Batten to explore a critical question: Is AI as safe as we think it is—especially when it comes to our mental health, our kids, and vulnerable populations like veterans?
Dr. Batten brings decades of experience in mental health, military/veteran care, and systems-level policy to unpack how AI is already interacting with loneliness, depression, social skills, and even national security. Together, she and Jonathan examine where AI can genuinely help—and where it can quietly make things much worse.
Key Topics Covered:
- AI “Practice Girlfriends” & Parasocial Relationships
- Pseudo-Emotion vs. Real Emotion in AI
- Depression, Rumination, and AI as an Accelerant
- Kids, Screens, and Personality Shifts
- Awkward Questions & Suicide Risk
- Veterans, Targeted Manipulation & National Security
- Building Real-World Social Skills in an AI World
- The Profit Motive Behind AI & Big Platforms
- A Practical Safety Strategy: Don’t Talk to Just One AI
- A Better Future: Human-in-the-Loop Mental Health AI
- Bias, Data, and Why “Objective” AI Can Still Harm People
- What’s Safe to Use Today—and What Isn’t
Notable Quotes:
- “The problem isn’t that AI sounds human—it’s that it acts like it’s human, without any judgment about whether what it’s saying is actually helpful.” – Dr. Sonja Batten
- “Rumination is like how a cow digests grass—except you’re doing it with your depressive thoughts. AI can actually accelerate that cycle.” – Dr. Sonja Batten
- “The earlier I act in a depressive cycle, the easier it is to break it. But AI gives you the illusion of a conversation while keeping you stuck in place.” – Jonathan Green
- “Ask the awkward question. If you’re not sure whether someone’s joking or asking for help, just ask. The worst thing that happens is they tell you you’re wrong—and now they know you care.” – Dr. Sonja Batten
- “I don’t think there’s any AI tool yet that I’d trust as a standalone resource for my own daughter if she were depressed.” – Dr. Sonja Batten
- “We can’t afford to get it wrong in mental health. The stakes are too high.” – Dr. Sonja Batten
Key Resource Mentioned:
- 988 – Suicide & Crisis Lifeline (U.S.)
If you or someone you know is struggling, you can call or text 988 in the United States for immediate support and connection to local resources. This is a 24/7 crisis line.
Connect with Dr. Sonja Batten:
- LinkedIn: https://www.linkedin.com/in/sonja-batten/
If you’re interested in how AI intersects with mental health, parenting, veterans’ issues, and public safety—and you want a grounded, clinically informed view of both the risks and the potential—this episode is essential listening.
Connect with Jonathan Green:
- The Bestseller: ChatGPT Profits
- Free Gift: The Master Prompt for ChatGPT
- Free Book on Amazon: Fire Your Boss
- Podcast Website: https://artificialintelligencepod.com/
- Subscribe, Rate, and Review: https://artificialintelligencepod.com/itunes
- Video Episodes: https://www.youtube.com/@ArtificialIntelligencePodcast