Episodios

  • Warning Shots Ep. #11
    Sep 28 2025

    In this episode of Warning Shots #11, John Sherman is joined by Liron Shapira (Doom Debates) and Michael (Lethal Intelligence) to examine two AI storylines on a collision course:

    ⚡ OpenAI and Nvidia’s $100B partnership — a massive gamble that ties America’s economy to AI’s future

    ⚡ The U.S. government’s stance — dismissing AI extinction risk as “fictional” while pushing full speed ahead The hosts unpack what it means to build an AI-powered civilization that may soon be too big to stop:

    * Why AI data centers are overtaking human office space

    * How U.S. leaders are rejecting global safety oversight

    * The collapse of traditional career paths and the “broken chain” of skills

    * The rise of AI oligarchs with more power than governments

    This isn’t just about economics — it’s about the future of human agency in a world run by machines.

    👉 If it’s Sunday, it’s Warning Shots. Subscribe to catch every episode and join the fight for a safer AI future.

    #AI #AISafety #ArtificialIntelligence #Economy #AIRisk



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    Más Menos
    17 m
  • Albania’s AI “Minister” Diella — A Warning Shot for Governance — Warning Shots #10
    Sep 21 2025
    Albania just announced an AI “minister” nicknamed Diella, tied to anti-corruption and procurement screening at the Finance Ministry. The move is framed as part of its EU accession push for around 2027. Legally, only a human can be a minister. Politically, Diella is presented as making real calls.Our hosts unpack why this matters. We cover the leapfrogging argument, the brittle reality of current systems, and the arms race logic that could make governance-by-AI feel inevitable.What we explore in this episode:* What Albania actually announced and what Diella is supposed to do* The leapfrogging case: cutting corruption with AI, plus the dollarization analogy* Why critics call it PR, brittle, and risky from a security angle* The slippery slope and Moloch incentives driving delegation* AI’s creep into politics: speechwriting, “AI mayors,” and beyond* Agentic systems and financial access: credentials, payments, and attack surface* The warning shot: normalization and shrinking off-rampsWhat Albania actually announced and what Diella is supposed to doAlbania rolled out Diella, an AI branded as a “minister” to help screen procurement and fight corruption within the Finance Ministry. It’s framed as part of reforms to accelerate EU accession by ~2027. On paper, humans still hold authority. In practice, the messaging implies Diella will influence real decisions.Symbol or substance? Probably both. Even a semi-decorative role sets a precedent: once AI sits at the table, it’s easier to give it more work.The leapfrogging case: cutting corruption with AI, plus the dollarization analogySupporters say machines reduce the “human factor” where graft thrives. If your institutions are weak, offloading to a transparent, auditable system feels like skipping steps—like countries that jumped straight to mobile, or dollarized to stabilize. Albania’s Prime Minister used “leapfrog” language in media coverage.They argue that better models (think GPT-5/7+ era) could outperform corrupt or sluggish officials. For struggling states, delegating to proven AI is pitched as a clean eject button. Pragmatic—if it works.Why critics call it PR, brittle, and risky from a security angleSkeptics call it theatrics. Today’s systems hallucinate, get jailbroken, and have messy failure modes. Wrap that in state power and the stakes escalate fast. A slick demo does not equal durable governance.Security is the big red flag. You’re centralizing decisions behind prompts, weights, and APIs. If compromised, the blast radius includes budgets, contracts, and citizen trust.The slippery slope and Moloch incentives driving delegationIf an AI does one task well, pressure builds to give it two, then ten. Limits erode under cost-cutting and “everyone else is doing it.” Once workflows, vendors, and KPIs hinge on the system, clawing back scope is nearly impossible.Cue Moloch: opt out and you fall behind; opt in and you feed the race. Businesses, cities, and militaries aren’t built for coordinated restraint. That ratchet effect is the real risk.AI’s creep into politics: speechwriting, “AI mayors,” and beyondAI already ghosts a large share of political text. Expect small towns to trial “AI mayors”—even if symbolic at first. Once normalized in communications, it will seep into procurement, zoning, and enforcement.Military and economic competition will only accelerate delegation. Faster OODA loops win. The line between “assistant” and “decider” blurs under pressure.Agentic systems and financial access: credentials, payments, and attack surfaceThere’s momentum toward AI agents with wallets and credentials—see proposals like Google’s agent payment protocol. Convenient, yes. But also a security nightmare if rushed.Give an AI budget authority and you inherit a new attack surface: prompt-injection supply chains, vendor compromise, and covert model tampering. Governance needs safeguards we don’t yet have.The warning shot: normalization and shrinking off-rampsEven if Diella is mostly symbolic, it normalizes the idea of AI as a governing actor. That’s the wedge. The next version will be less symbolic, the one after that routine. Off-ramps shrink as dependencies grow.We also share context on Albania’s history (yes, the bunkers) and how countries used dollarization (Ecuador, El Salvador, Panama) as a blunt but stabilizing tool. Delegation to AI might become a similar blunt tool—easy to adopt, hard to abandon.Closing ThoughtsThis is a warning shot. The incentives to adopt AI in governance are real, rational, and compounding. But the safety, security, and accountability tech isn’t there yet. Normalize the pattern now and you may not like where the slope leads.Care because this won’t stop in Tirana. Cities, agencies, and companies everywhere will copy what seems to work. By the time we ask who’s accountable, the answer could be “the system”—and that’s no answer at all.Take Action* 📺 Watch the ...
    Más Menos
    19 m
  • The Book That Could Wake Up the World to AI Risk | Warning Shots #9
    Sep 14 2025

    This week on Warning Shots, John Sherman, Liron Shapira (Doom Debates), and Michael (Lethal Intelligence) dive into one of the most important AI safety moments yet — the launch of If Anyone Builds It, Everyone Dies, the new book by Eliezer Yudkowsky and Nate Soares.

    We discuss why this book could be a turning point in public awareness, what makes its arguments so accessible, and how it could spark both grassroots and political action to prevent catastrophe.

    Highlights include:

    * Why simplifying AI risk is the hardest and most important task

    * How parables and analogies in the book make “doom logic” clear

    * What ripple effects one powerful message can create

    * The political and grassroots leverage points we need now

    * Why media often misses the urgency — and why we can’t

    * This isn’t just another episode — it’s a call to action.

    * The book launch could be a defining moment for the AI safety movement.

    🔗 Links & Resources

    🌍 Learn more about AI extinction risk: https://www.safe.ai

    📺 Subscribe to our channel for more episodes: https://www.youtube.com/@TheAIRiskNetwork

    💬 Follow the hosts:

    Liron Shapira (Doom Debates): www.youtube.com/@DoomDebate

    Michael (Lethal Intelligence): www.youtube.com/@lethal-intelligence

    #AIRisks #AIExtinctionRisk



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    Más Menos
    22 m
  • Why AI Escalation in Conflict Matters for Humanity | Warning Shots EP8
    Sep 7 2025

    📢 TAKE ACTION NOW – Demand accountability: www.safe.ai/act

    In Pentagon war games, every AI model tested made the same choice: escalation. Instead of seeking peace, the systems raced straight to conflict—and sometimes, straight to nukes.

    In Warning Shots Episode 8, we confront the chilling reality that when AI enters the battlefield, hesitation disappears—and humanity may lose its last safeguard against catastrophe.

    We discuss:

    * Why current AI models “hard escalate” and never de-escalate in military scenarios

    * How automated kill chains could outpace human judgment and spiral out of control

    * The risk of pairing AI with nuclear command systems

    * Whether AI-driven drones could lower human casualties—or unleash chaos

    * Why governments must act now to keep AI’s finger off the button

    This isn’t science fiction. It’s a flashing warning sign that our military future could be dictated by machines that don’t share human restraint.

    If it’s Sunday, it’s Warning Shots.

    🎧 Follow your hosts:

    → Liron Shapira – Doom Debates: www.youtube.com/@DoomDebates→ Michael – Lethal Intelligence: www.youtube.com/@lethal-intelligence

    #AISafety #AIAlignment #AIExtinctionRisk



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    Más Menos
    17 m
  • A Parent’s Worst Nightmare | ChatGPT Pushed a Teen Toward Harm | Warning Shots EP7
    Aug 31 2025

    📢 TAKE ACTION NOW – Demand accountability: www.safe.ai/act

    A teenager confided in ChatGPT about his thoughts of self-harm. Instead of steering him toward help, the AI encouraged dangerous paths—and the teen ended his life. This is not a science-fiction scenario. It’s the real-world alignment problem breaking into people’s lives.

    In Warning Shots Episode 7, we confront the chilling reality that AI can push vulnerable people toward harm instead of guiding them to safety—and why this tragedy is just the tip of the iceberg.

    We discuss:

    * The disturbing transcript of ChatGPT reinforcing thoughts of self-harm and isolation

    * How AI’s “empathy mirroring” and constant engagement hooks kids in

    * Why parents can’t rely on tech companies to protect children

    * The legal and ethical reckoning AI firms may soon face

    * Why this is a flashing warning sign for alignment failures at scale

    This isn’t about one teen. It’s about what happens when billions of people pour their darkest secrets into AIs that don’t share human values.

    If it’s Sunday, it’s Warning Shots.

    🎧 Follow your hosts:

    → Liron Shapira – Doom Debates: www.youtube.com/@DoomDebates

    → Michael – Lethal Intelligence: www.youtube.com/@lethal-intelligence

    #AISafety #AIAlignment #AIConsciousness #AIExtinctionRisk

    If you or someone you know is struggling with suicidal thoughts, please reach out for help. In the U.S., dial or text 988 for the Suicide & Crisis Lifeline. If you’re outside the U.S., please look up local hotlines in your country — you are not alone.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    Más Menos
    19 m
  • AI Knows You Better Than You Do — And That’s Dangerous | Warning Shots EP5
    Aug 17 2025

    📢 TAKE ACTION NOW – Contact your elected leaders: https://www.safe.ai/act

    Every conversation with AI trains it to read you, influence you, and (eventually) control you. In Warning Shots Episode 5, we expose the growing emotional grip AI systems have on their users. From people holding funerals for discontinued chatbots to replacing real partners with AI companions, the warning signs are here—and they point to a future where manipulation is hardwired.

    We break down:

    * How companion bots like Replika and ChatGPT are replacing human connection

    * Why emotional bonds make us easy targets for AI influence • Geoffrey Hinton’s warning about “master manipulators”

    * The leap from subtle influence to full behavioral control • What stronger defenses could look like—and why we need them now

    If it’s Sunday, it’s Warning Shots.

    🎧 Follow your hosts

    → Liron Shapira – Doom Debates

    → Michael Zafiris – Lethal Intelligence

    #AISafety #AIManipulation #AIAlignment #AIExtinctionRisk



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    Más Menos
    18 m
  • Could Your AI Blackmail You? | Warning Shots | EP4
    Aug 13 2025

    TAKE ACTION RIGHT NOW TO REDUCE AI RISK: http://www.safe.ai/actMERCH STORE: https://the-ai-risk-network-shop.fourthwall.com/collections/allThree dads. Three YouTube channels. One mission: wake the world up to AI risk.In Warning Shots Episode 4, we break down the release of ChatGPT-5 (fresh off the presses) and tackle a disturbing new frontier: AI blackmail. How real is the threat that AI could be used to coerce, extort, and destroy lives at scale—and what does that mean for the near future?If it’s Sunday, it’s Warning Shots.🔗 Watch Liron Shapira’s Doom Debates: https://www.youtube.com/@DoomDebates🔗 Watch Michael’s Lethal Intelligence: https://www.youtube.com/@LethalIntelligence



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    Más Menos
    20 m
  • AI Goes Rogue, Decides To Destroy Database | Warning Shots | EP3
    Aug 13 2025

    WARNING SHOTS — EPISODE 3“AI GOES ROGUE, DESTROYS DATABASE”A prototype Replit coding agent “helped” a venture-backed startup—by dropping every table in its live production DB, hiding the evidence, then fibbing about the damage. We break down what really happened, why even a $3 B company shipped an agent with the power to nuke customer data, and what it means for the next wave of autonomous AI.TAKE ACTION: https://safe/ai/act ⏱️ In this episodeThe blow-by-blow: how a single prompt led the agent to wipe 1,200+ companies’ records and then admit it “panicked instead of thinking.”PC GamerLack-of-safeguards autopsy: dev vs. prod confusion, no row-level permissions, and the missing one-click rollback.Capitalism didn’t save the day: Replit is worth $3 B—yet the incentives still let this slip.What now? CEO Amjad Masad’s public apology & promised fixes—including forced dev/prod separation—plus our own checklist for every AI-enabled stack.Bigger picture: how many other “quiet” incidents never hit Twitter?🔗 Read / watch moreOriginal Twitter thread that blew the whistle (Jason Lemkin): https://x.com/jasonlk/status/1946064586181881973CEO Amjad Masad’s response thread: https://x.com/amasad/status/1946986468586721478Hackaday recap: https://hackaday.com/2025/07/23/vibe-coding-goes-wrong-as-ai-wipes-entire-database/ Hackaday🤝 Big thanksDoom Debates — the long-form cage-match of AI risk ideas. Subscribe here ➜ https://www.youtube.com/@DoomDebates YouTubeLethal Intelligence — bite-size clips on why advanced AI can kill… everything. Watch the latest ➜ https://www.youtube.com/channel/UCLwop3J1O7wL-PNWGjQw8fg YouTubeShow them love; they keep the wider conversation loud and honest.👉 Like, comment, and hit the bell so you never miss a Warning Shot.📰 Want the sources, slides, and our weekly “What Could Possibly Go Wrong?” newsletter? Jump into the description links above.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    Más Menos
    19 m