
Pop Goes the Stack


By: F5

Explore the evolving world of application delivery and security. Each episode dives into the technologies shaping the future of operations, analyzes emerging trends, and discusses the impact of innovations on the tech stack. © 2026 F5
Episodes
  • OpenClaw: Multi-agent autonomy, secrets, and blast radius
    Apr 7 2026

    OpenClaw is what happens when the industry looks at autonomous agents and decides they should have more autonomy, more persistence, and more chances to surprise you. In this episode of Pop Goes the Stack, Lori MacVittie hosts a wide-ranging discussion with F5's Joel Moses, Jason Rahm, and Kunal Anand on what makes OpenClaw different from the usual “AI assistant” narrative: agents that coordinate, remember, adapt, and operate in shared spaces where emergent behavior is a feature, not a bug.

    Joel shares a grounded example of using OpenClaw locally for home automation, keeping the blast radius contained while still seeing the upside of continuous, autonomous decision-making. From there, the group digs into what breaks when you move this model toward enterprise operations: persistence of secrets, unclear approval workflows, weak auditability, limited rollback, and the sheer difficulty of diagnosing why an agent took an action after weeks of chained decisions.

    Kunal expands the conversation to the ecosystem forming around OpenClaw, including experimental offshoots and the uncomfortable reality that “just read the code” doesn’t scale when modern projects are moving at AI-assisted commit velocity. Jason adds a longer lens, drawing a parallel to Ray Bradbury’s "There Will Come Soft Rains" as a reminder that autonomous systems can keep running even when humans stop being in the loop, raising questions beyond tech into how we relate to each other.

    Tune in for the group's practical takeaways as this technology makes its way toward the enterprise.

    Read Kunal's blog diving into mechanistic interpretability: https://kunalanand.com/2026-03-19-your-token-is-a-wonderland/

    Read "There Will Come Soft Rains" by Ray Bradbury: https://www.btboces.org/Downloads/7_There%20Will%20Come%20Soft%20Rains%20by%20Ray%20Bradbury.pdf


    Recorded March 2nd, 2026

    27 m
  • CISO Hot Takes on MCP, PQC, and Data Center Attacks
    Mar 31 2026

    Recorded live at F5 AppWorld 2026 in Las Vegas, this episode of Pop Goes the Stack puts Field CISO Chuck Herrin in the hot seat for a fast-moving conversation on what security leaders are really dealing with right now. Joel Moses kicks things off with the agentic AI debate: if teams bypass structured tool interfaces and let agents “just use the CLI,” what happens to authentication, observability, and predictability when autonomy accelerates faster than humans can keep up?


    From there, Chuck makes the case that fear is a poor long-term strategy for running a business, even when the threats are real. He unpacks the tension he’s seeing across organizations, where executives are driven by FOMO while employees wrestle with FOBO (fear of becoming obsolete), and argues that companies get results when they redesign how they operate rather than bolting AI onto old structures.


    The conversation shifts to post-quantum cryptography and why it still isn’t getting the attention it deserves. Chuck explains how “future tech” framing, short CISO tenures, and the pressure of today’s fires keep PQC from becoming a priority, even as harvest-now-decrypt-later attacks make it a present-day risk. His advice is practical: assign clear ownership, treat the effort like business continuity planning, and include your supply chain in the readiness scope.


    Finally, they touch on a new class of concern for CISOs: kinetic targeting of data center infrastructure, and how sovereignty requirements can constrain options when physical risk rises. If you’re navigating AI adoption, cryptographic transition, or resilience planning, tune in for a grounded perspective from the show floor.

    17 m
  • AI Red Teaming in Practice: Scores, guardrails, auto-remediation
    Mar 24 2026

    AI in production isn’t just another feature to ship. It’s a non-deterministic system that can be socially engineered, fuzzed, and pushed into failure states you won’t find with traditional testing. Recorded live in Las Vegas at F5’s AppWorld 2026, this episode of Pop Goes the Stack brings Joel Moses together with Jimmy White, F5’s VP of AI Security (via the CalypsoAI acquisition), for a practical look at what AI red teaming actually is and how it works when the attacker is an agent.

    Jimmy reframes genAI security as a permutation problem: if there are countless prompt combinations that could unlock sensitive data or trigger unsafe actions, you need genAI-powered red team agents to explore those paths at scale. The discussion covers custom intents, agentic “fingerprints” that reveal not just what was compromised but how it happened, and why that “how” is the key to building protections you can trust.

    You’ll also hear how scoring and reporting translate into guardrails, how auto-remediation can be validated with positive and negative test cases before a human publishes changes, and why relying on models to internalize safety isn’t a realistic plan. The conversation closes on agentic AI risk, where tools and permissions matter more than the model’s reasoning, and introduces “thought injection” as a way to redirect unsafe actions without breaking the agent loop.

    If you’re building AI apps, deploying MCP-connected systems, or worrying about agents becoming tomorrow’s service accounts, this episode gives you a sharper playbook for testing, governance, and resilience.

    27 m
No reviews yet