Sam Altman’s AI Bailout: Too Big to Fail? | Warning Shots #17
📢 Take Action on AI Risk
💚 Donate this Giving Tuesday
This week on Warning Shots, John Sherman, Michael from Lethal Intelligence, and Liron Shapira from Doom Debates dive into a chaotic week in AI news — from OpenAI’s talk of federal bailouts to the growing tension between innovation, safety, and accountability.
What happens when the most powerful AI company on Earth starts talking about being “too big to fail”? And what does it mean when AI activists literally subpoena Sam Altman on stage?
Together, they explore:
* Why OpenAI’s CFO suggested the U.S. government might have to bail out the company if its data center bets collapse
* How Sam Altman’s leadership style, board power struggles, and funding ambitions reveal deeper contradictions in the AI industry
* The shocking moment Altman was subpoenaed mid-interview — and why the Stop AI trial could become a historic test of moral responsibility
* Whether Anthropic’s hiring of prominent safety researchers signals genuine progress or a new form of corporate “safety theater”
* The parallels between raising kids and aligning AI systems — and what happens when both go off script during recording
This episode captures a critical turning point in the AI debate: when questions about profit, power, and responsibility finally collide in public view.
If it’s Sunday, it’s Warning Shots.
📺 Watch more: @TheAIRiskNetwork
🔎 Follow our hosts:
Liron Shapira - @DoomDebates
Michael - @lethal-intelligence
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com