Warning Shots Podcast by The AI Risk Network

Warning Shots

By: The AI Risk Network
Listen for free

An urgent weekly recap of AI risk news, hosted by John Sherman, Liron Shapira, and Michael Zafiris.

theairisknetwork.substack.com
Politics & Government
Episodios
  • Warning Shots Ep. #11
    Sep 28 2025

    In this episode of Warning Shots #11, John Sherman is joined by Liron Shapira (Doom Debates) and Michael (Lethal Intelligence) to examine two AI storylines on a collision course:

    ⚡ OpenAI and Nvidia’s $100B partnership — a massive gamble that ties America’s economy to AI’s future

    ⚡ The U.S. government’s stance: dismissing AI extinction risk as “fictional” while pushing full speed ahead

    The hosts unpack what it means to build an AI-powered civilization that may soon be too big to stop:

    * Why AI data centers are overtaking human office space

    * How U.S. leaders are rejecting global safety oversight

    * The collapse of traditional career paths and the “broken chain” of skills

    * The rise of AI oligarchs with more power than governments

    This isn’t just about economics — it’s about the future of human agency in a world run by machines.

    👉 If it’s Sunday, it’s Warning Shots. Subscribe to catch every episode and join the fight for a safer AI future.

    #AI #AISafety #ArtificialIntelligence #Economy #AIRisk



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    17 m
  • Albania’s AI “Minister” Diella — A Warning Shot for Governance — Warning Shots #10
    Sep 21 2025
    Albania just announced an AI “minister” nicknamed Diella, tied to anti-corruption and procurement screening at the Finance Ministry. The move is framed as part of its EU accession push for around 2027. Legally, only a human can be a minister. Politically, Diella is presented as making real calls.

    Our hosts unpack why this matters. We cover the leapfrogging argument, the brittle reality of current systems, and the arms race logic that could make governance-by-AI feel inevitable.

    What we explore in this episode:

    * What Albania actually announced and what Diella is supposed to do

    * The leapfrogging case: cutting corruption with AI, plus the dollarization analogy

    * Why critics call it PR, brittle, and risky from a security angle

    * The slippery slope and Moloch incentives driving delegation

    * AI’s creep into politics: speechwriting, “AI mayors,” and beyond

    * Agentic systems and financial access: credentials, payments, and attack surface

    * The warning shot: normalization and shrinking off-ramps

    What Albania actually announced and what Diella is supposed to do

    Albania rolled out Diella, an AI branded as a “minister” to help screen procurement and fight corruption within the Finance Ministry. It’s framed as part of reforms to accelerate EU accession by ~2027. On paper, humans still hold authority. In practice, the messaging implies Diella will influence real decisions.

    Symbol or substance? Probably both. Even a semi-decorative role sets a precedent: once AI sits at the table, it’s easier to give it more work.

    The leapfrogging case: cutting corruption with AI, plus the dollarization analogy

    Supporters say machines reduce the “human factor” where graft thrives. If your institutions are weak, offloading to a transparent, auditable system feels like skipping steps, like countries that jumped straight to mobile, or dollarized to stabilize. Albania’s Prime Minister used “leapfrog” language in media coverage.

    They argue that better models (think GPT-5/7+ era) could outperform corrupt or sluggish officials. For struggling states, delegating to proven AI is pitched as a clean eject button. Pragmatic, if it works.

    Why critics call it PR, brittle, and risky from a security angle

    Skeptics call it theatrics. Today’s systems hallucinate, get jailbroken, and have messy failure modes. Wrap that in state power and the stakes escalate fast. A slick demo does not equal durable governance.

    Security is the big red flag. You’re centralizing decisions behind prompts, weights, and APIs. If compromised, the blast radius includes budgets, contracts, and citizen trust.

    The slippery slope and Moloch incentives driving delegation

    If an AI does one task well, pressure builds to give it two, then ten. Limits erode under cost-cutting and “everyone else is doing it.” Once workflows, vendors, and KPIs hinge on the system, clawing back scope is nearly impossible.

    Cue Moloch: opt out and you fall behind; opt in and you feed the race. Businesses, cities, and militaries aren’t built for coordinated restraint. That ratchet effect is the real risk.

    AI’s creep into politics: speechwriting, “AI mayors,” and beyond

    AI already ghosts a large share of political text. Expect small towns to trial “AI mayors,” even if symbolic at first. Once normalized in communications, it will seep into procurement, zoning, and enforcement.

    Military and economic competition will only accelerate delegation. Faster OODA loops win. The line between “assistant” and “decider” blurs under pressure.

    Agentic systems and financial access: credentials, payments, and attack surface

    There’s momentum toward AI agents with wallets and credentials; see proposals like Google’s agent payment protocol. Convenient, yes. But also a security nightmare if rushed.

    Give an AI budget authority and you inherit a new attack surface: prompt-injection supply chains, vendor compromise, and covert model tampering. Governance needs safeguards we don’t yet have.

    The warning shot: normalization and shrinking off-ramps

    Even if Diella is mostly symbolic, it normalizes the idea of AI as a governing actor. That’s the wedge. The next version will be less symbolic, the one after that routine. Off-ramps shrink as dependencies grow.

    We also share context on Albania’s history (yes, the bunkers) and how countries used dollarization (Ecuador, El Salvador, Panama) as a blunt but stabilizing tool. Delegation to AI might become a similar blunt tool: easy to adopt, hard to abandon.

    Closing Thoughts

    This is a warning shot. The incentives to adopt AI in governance are real, rational, and compounding. But the safety, security, and accountability tech isn’t there yet. Normalize the pattern now and you may not like where the slope leads.

    Care because this won’t stop in Tirana. Cities, agencies, and companies everywhere will copy what seems to work. By the time we ask who’s accountable, the answer could be “the system,” and that’s no answer at all.

    Take Action

    * 📺 Watch the ...
    19 m
  • The Book That Could Wake Up the World to AI Risk | Warning Shots #9
    Sep 14 2025

    This week on Warning Shots, John Sherman, Liron Shapira (Doom Debates), and Michael (Lethal Intelligence) dive into one of the most important AI safety moments yet — the launch of If Anyone Builds It, Everyone Dies, the new book by Eliezer Yudkowsky and Nate Soares.

    We discuss why this book could be a turning point in public awareness, what makes its arguments so accessible, and how it could spark both grassroots and political action to prevent catastrophe.

    Highlights include:

    * Why simplifying AI risk is the hardest and most important task

    * How parables and analogies in the book make “doom logic” clear

    * What ripple effects one powerful message can create

    * The political and grassroots leverage points we need now

    * Why media often misses the urgency — and why we can’t

    This isn’t just another episode; it’s a call to action. The book launch could be a defining moment for the AI safety movement.

    🔗 Links & Resources

    🌍 Learn more about AI extinction risk: https://www.safe.ai

    📺 Subscribe to our channel for more episodes: https://www.youtube.com/@TheAIRiskNetwork

    💬 Follow the hosts:

    Liron Shapira (Doom Debates): www.youtube.com/@DoomDebate

    Michael (Lethal Intelligence): www.youtube.com/@lethal-intelligence

    #AIRisks #AIExtinctionRisk



    22 m
No reviews yet