Episodios

  • OpenClaw: Multi-agent autonomy, secrets, and blast radius
    Apr 7 2026

    OpenClaw is what happens when the industry looks at autonomous agents and decides they should have more autonomy, more persistence, and more chances to surprise you. In this episode of Pop Goes the Stack, Lori MacVittie hosts a wide-ranging discussion with F5's Joel Moses, Jason Rahm, and Kunal Anand on what makes OpenClaw different from the usual “AI assistant” narrative: agents that coordinate, remember, adapt, and operate in shared spaces where emergent behavior is a feature, not a bug.

    Joel shares a grounded example of using OpenClaw locally for home automation, keeping the blast radius contained while still seeing the upside of continuous, autonomous decision-making. From there, the group digs into what breaks when you move this model toward enterprise operations: persistence of secrets, unclear approval workflows, weak auditability, limited rollback, and the sheer difficulty of diagnosing why an agent took an action after weeks of chained decisions.

    Kunal expands the conversation to the ecosystem forming around OpenClaw, including experimental offshoots and the uncomfortable reality that “just read the code” doesn’t scale when modern projects are moving at AI-assisted commit velocity. Jason adds a longer lens, drawing a parallel to Ray Bradbury’s "There Will Come Soft Rains" as a reminder that autonomous systems can keep running even when humans stop being in the loop, raising questions beyond tech into how we relate to each other.

    Tune in for the groups practical takeaways as this technology makes it's way toward the enterprise.

    Read Kunal's blog diving into mechanistic interpretability: https://kunalanand.com/2026-03-19-your-token-is-a-wonderland/

    Read "There Will Come Soft Rains" by Ray Bradbury: https://www.btboces.org/Downloads/7_There%20Will%20Come%20Soft%20Rains%20by%20Ray%20Bradbury.pdf


    Recorded March 2nd, 2026

    Más Menos
    27 m
  • CISO Hot Takes on MCP, PQC, and Data Center Attacks
    Mar 31 2026

    Recorded live at F5 AppWorld 2026 in Las Vegas, this episode of Pop Goes the Stack puts Field CISO Chuck Herrin in the hot seat for a fast-moving conversation on what security leaders are really dealing with right now. Joel Moses kicks things off with the agentic AI debate: if teams bypass structured tool interfaces and let agents “just use the CLI,” what happens to authentication, observability, and predictability when autonomy accelerates faster than humans can keep up?


    From there, Chuck makes the case that fear is a poor long-term strategy for running a business, even when the threats are real. He unpacks the tension he’s seeing across organizations, where executives are driven by FOMO while employees wrestle with FOBO (fear of becoming obsolete), and argues that companies get results when they redesign how they operate rather than bolting AI onto old structures.


    The conversation shifts to post-quantum cryptography and why it still isn’t getting the attention it deserves. Chuck explains how “future tech” framing, short CISO tenures, and the pressure of today’s fires keep PQC from becoming a priority, even as harvest-now-decrypt-later attacks make it a present-day risk. His advice is practical: assign clear ownership, treat the effort like business continuity planning, and include your supply chain in the readiness scope.


    Finally, they touch on a new class of concern for CISOs: kinetic targeting of data center infrastructure, and how sovereignty requirements can constrain options when physical risk rises. If you’re navigating AI adoption, cryptographic transition, or resilience planning, tune in for a grounded perspective from the show floor.

    Más Menos
    17 m
  • AI Red Teaming in Practice: Scores, guardrails, auto-remediation
    Mar 24 2026

    AI in production isn’t just another feature to ship. It’s a non-deterministic system that can be socially engineered, fuzzed, and pushed into failure states you won’t find with traditional testing. Recorded live in Las Vegas at F5’s AppWorld 2026, this episode of Pop Goes the Stack brings Joel Moses together with Jimmy White, F5’s VP of AI Security (via the CalypsoAI acquisition), for a practical look at what AI red teaming actually is and how it works when the attacker is an agent.

    Jimmy reframes genAI security as a permutation problem: if there are countless prompt combinations that could unlock sensitive data or trigger unsafe actions, you need genAI-powered red team agents to explore those paths at scale. The discussion covers custom intents, agentic “fingerprints” that reveal not just what was compromised but how it happened, and why that “how” is the key to building protections you can trust.

    You’ll also hear how scoring and reporting translate into guardrails, how auto-remediation can be validated with positive and negative test cases before a human publishes changes, and why relying on models to internalize safety isn’t a realistic plan. The conversation closes on agentic AI risk, where tools and permissions matter more than the model’s reasoning, and introduces “thought injection” as a way to redirect unsafe actions without breaking the agent loop.

    If you’re building AI apps, deploying MCP-connected systems, or worrying about agents becoming tomorrow’s service accounts, this episode gives you a sharper playbook for testing, governance, and resilience.

    Más Menos
    27 m
  • Agent Identity Crisis: Access, audit, and “soul.md”
    Mar 17 2026

    Coming to you from the AppWorld show floor, Joel Moses and guest co-pilot Oscar Spencer cut through the conference polish to tackle a problem that’s quickly becoming unavoidable: identity in the era of agentic AI. When software can act on your behalf, take initiative, and even spawn other agents, “who did what” stops being a philosophical question and becomes an audit, security, and governance requirement.


    Joined by F5's Chief Product Officer, Kunal Anand, the conversation digs into why traditional, point-in-time authentication and authorization models don’t map cleanly to agents that operate over time, across contexts, and through chains of delegation. They explore the risks of transitive identity, the expanding blast radius when Agent A creates Agents B and C, and the uncomfortable reality that agents can end up holding the same kinds of long-lived secrets that have historically caused production incidents.


    Along the way, they discuss emerging ideas like soul.md files that define an agent’s purpose and constraints, and the concept of a dedicated “credential agent” that acts as a gatekeeper for secrets access. The episode also gets practical about what breaks in the real world, including a cautionary story about an agent corrupting a long-running notes database, underscoring why backups, guardrails, and careful rollout matter.


    If you’re building or adopting agents, this is a timely look at why identity can’t stay static, why service-account thinking is coming for every agent, and what it will take to keep autonomy from turning into the next incident report.

    Más Menos
    21 m
  • VibeOps: Guardrailed agents for deterministic production
    Mar 10 2026

    Ops used to be a world of YAML, caffeine, and careful deploy rituals. Now it’s probabilistic models, token-based cost surprises, and reliability questions that sound more like, “Will the model mean the same thing tomorrow?” In this episode of Pop Goes the Stack, Lori MacVittie and Joel Moses dig into what happens when production expectations collide with non-deterministic AI systems, and why the next phase of automation needs more than a chat interface and optimism.

    They’re joined by John Capobianco from Itential to explore “VibeOps,” an approach to conversational operations that doesn’t throw away deterministic workflows, but connects them to agent reasoning, tool calling, and modern protocols like MCP. The discussion breaks down agent “skills” as a way to describe what an agent can do, constrain what it can’t, and build guardrails in a format teams can manage.

    From red-teaming experiments to real-world concerns about failure rates at scale, the conversation stays grounded in what it takes to make AI useful in production: external knowledge, policy alignment, composable skills, and a maturity path from lab-only to read-only to supervised execution, and only then toward autonomy. The takeaway is clear: conversational ops can accelerate work, improve documentation and ticket quality, and reduce toil, but governance and accountability still matter. If you’re navigating AIOps, agent adoption, or the post-MCP tooling wave, this episode offers a realistic starting point.

    Más Menos
    25 m
  • WebAssembly: A programmability paradigm shift
    Mar 3 2026

    Programmability is experiencing a paradigm shift, and this episode explains why WebAssembly is at the center of it. F5's Lori MacVittie and Joel Moses are joined by WebAssembly expert Oscar Spencer, a longtime contributor in the space and a leader within the Bytecode Alliance, to unpack how Wasm moved from “that browser thing” to a practical foundation for modern platforms.


    They break down what makes WebAssembly different: a secure sandbox designed for hostile environments, portable logic that can travel across architectures, and language flexibility that doesn’t force teams into obscure, proprietary scripting. The conversation also gets into why Wasm’s small footprint matters, from faster deployment to easier distribution at the edge, and how streaming compilation helps code start running quickly.


    The most timely thread is the collision between AI-driven operations and runtime safety. As agents generate more code and policies need to adapt in real time, the risk shifts from writing logic to safely executing it. Oscar makes the case that capabilities-based security and fine-grained controls can turn WebAssembly into a “blast chamber” for AI-generated code, reducing the chances that a hallucination becomes a production outage.


    If you’re thinking about plug-in architectures, safer customization, or how to scale dynamic behavior without scaling risk, this episode is your starting point.

    Check out WebAssembly Unleashed: https://www.youtube.com/playlist?list=PLyqga7AXMtPNV1zr2aTWEegep0FQU6Qvj

    Más Menos
    22 m
  • Unstructured Integration: The hidden surface area putting AI privacy & compliance at risk
    Feb 24 2026

    "It's just a chat" is the most dangerous sentence in AI. In this episode of Pop Goes the Stack, F5's Lori MacVittie and Joel Moses are joined by data science expert Scott Hendrickson to break down why AI has the surface area of the sun—it touches search, analytics, SEO tags, ad tech, APIs, logs, and all the integrations people forget are even there.


    That’s the danger: as AI spreads across the stack, the privacy + compliance surface area explodes. What feels like a private conversation can get captured, logged, shared, or even indexed—not because of a hack, but because an old SEO/analytics integration “helpfully” records whatever shows up in a box…including chat.


    Listen in to learn how SEO/tag managers can ingest entire chat transcripts, why conversational UX breaks "transactional web" assumptions, who may end up seeing your "private" context, and actionable steps to protect AI privacy.

    Más Menos
    24 m
  • Logging for Giants: High-Speed Telemetry in an AI World
    Feb 17 2026

    When OpenAI discovered they could reclaim 30,000 CPU cores simply by tuning the log-forwarding agent Fluent Bit—disabling a single function that ate ~35 % of one server’s cycles—something large and systemic became undeniable. In this episode, F5's Lori MacVittie, Joel Moses, and observability expert, Chris Hain, break down the hidden cost of telemetry in AI-heavy architectures, why “logging is free” is a myth, and how modern systems demand a new breed of high-speed telemetry planes.


    Listen in to learn how Fluent Bit’s file-watching overhead compounded at scale, why profiling matters, and what enterprises can do now to control AI observability costs.

    Más Menos
    22 m