Episodes

  • Holiday hiatus: Revisiting tech trends, AI, and more!
    Nov 25 2025

    Hi everyone! This is Lori MacVittie, host of Pop Goes the Stack. This holiday week, we’re pressing pause to recharge, spend time with loved ones, and maybe even step away from our stacks for a bit (gasp!).


    But don’t worry—we’ll be back soon with more:

    ✅ Sharp insights into emerging tech

    ✅ Expert takes on application delivery & security

    ✅ And, of course, our signature snark


    Missed an episode? Use this time to revisit some of our favorite discussions, covering everything from:


    - AI advancements

    - Game-changing hardware trends

    - Cybersecurity challenges

    - And much more!


    Thank you for being part of the Pop Goes the Stack community. Wishing you a safe, happy holiday. We can’t wait to see you soon with fresh episodes to keep you ahead in the ever-evolving world of tech.


    👉 Subscribe now to make sure you don’t miss our return.


    🎉 Happy holidays from Lori and the entire Pop Goes the Stack team!

    1 m
  • BOLA exploits: The #1 API threat and how to stop it
    Nov 18 2025

    The 2025 API Threat Report is out, and shocker: we’re still getting wrecked by injection, data leaks, and BOLA. That’s Broken Object Level Authorization, for those of you keeping score at home. And here’s the kicker—95% of these attacks are coming through authenticated sessions. Translation: the bad guys aren’t breaking in through the side door, they’re waltzing in with a valid badge and looting the place. But sure, let’s keep obsessing over password complexity policies while ignoring that our APIs are basically vending machines for sensitive data.
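
    The fix is less glamorous than the acronym. Authentication proves who the caller is; object-level authorization checks whether this particular object is theirs. A minimal Python sketch of the difference (the order store and session user are hypothetical stand-ins, not anything from the report):

        # Hypothetical sketch: the BOLA bug is the *absence* of the ownership
        # check below. Data and session user are illustrative stand-ins.
        ORDERS = {1: {"owner": "alice", "total": 42.00},
                  2: {"owner": "bob", "total": 17.50}}

        def get_order(session_user: str, order_id: int) -> dict:
            order = ORDERS.get(order_id)
            if order is None:
                raise LookupError("no such order")        # 404
            # Object-level authorization: a valid badge alone is not enough.
            # Without this check, any authenticated caller can loot the
            # vending machine just by iterating order_id.
            if order["owner"] != session_user:
                raise PermissionError("not your object")  # 403
            return order

        print(get_order("alice", 1))   # alice's own order: allowed
        # get_order("alice", 2)        # bob's order: PermissionError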


    In this episode, F5's Lori MacVittie, Joel Moses, and special guest Garland Moore dive into BOLA misconceptions, the impact of AI, and solutions you can implement now to mitigate risk.

    22 m
  • MCP tools and AI risks: The case for slow, secure adoption
    Nov 11 2025

    Remember when APIs were quiet little endpoints that waited politely for humans to click buttons? Yeah, that’s over. Now you’ve got swarms of LLM agents duct-taping tools together like caffeinated interns on Red Bull, firing off recursive calls at 3 a.m., and cheerfully melting your infrastructure while insisting everything is “working as intended.” Observability dashboards are screaming, rate limits are sobbing in the corner, and your security model still thinks it’s guarding humans instead of self-directed toolchains with the attention span of a squirrel and root access. Welcome to the new game: not keeping the stack up, but keeping it from eating itself.
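
    If you want one concrete guard before hitting play: give every agent a hard call budget and a recursion ceiling, enforced outside the model. A hypothetical Python sketch (none of these names are a real MCP API):

        # Hypothetical sketch: a deterministic budget around agent tool calls,
        # so a recursive toolchain trips a breaker instead of your backend.
        class ToolBudgetExceeded(RuntimeError):
            pass

        class GuardedRunner:
            def __init__(self, max_calls=25, max_depth=4):
                self.max_calls, self.max_depth, self.calls = max_calls, max_depth, 0

            def invoke(self, tool, args, depth=0):
                self.calls += 1
                if self.calls > self.max_calls:
                    raise ToolBudgetExceeded("call budget spent")
                if depth > self.max_depth:
                    raise ToolBudgetExceeded("tool chain recursed too deep")
                return tool(args, depth)

        runner = GuardedRunner()

        def overeager_tool(args, depth):
            # The caffeinated intern: calls itself forever, "as intended."
            return runner.invoke(overeager_tool, args, depth + 1)

        try:
            runner.invoke(overeager_tool, {})
        except ToolBudgetExceeded as err:
            print("breaker tripped:", err)   # the stack survives the night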


    In this episode of Pop Goes the Stack, F5's Lori MacVittie and returning guest Connor Hicks discuss the rapid adoption of MCP and the risks of going too fast without considering security, governance, and supply chain pitfalls. Listen now to take control of MCP tools and AI agents before they take over.


    And after you've listened to the episode, check out our WebAssembly Unleashed podcast: https://youtube.com/playlist?list=PLyqga7AXMtPNV1zr2aTWEegep0FQU6Qvj&si=YZkHT7VeqfrANeZO

    21 m
  • LLM-as-a-Judge: Bias, Preference Leakage, and Reliability
    Nov 4 2025

Here's the newest bright idea in AI: don’t pay humans to evaluate model outputs, just let another model do it. This is the “LLM-as-a-judge” craze: models not just spitting out answers but grading them too, like a student slipping themselves the answer key. It sounds efficient, until you realize you’ve built the academic equivalent of letting someone’s cousin sit on their jury. The problem is called preference leakage. Li et al. nailed it in their paper “Preference Leakage: A Contamination Problem in LLM-as-a-Judge.” They found that when a model judges an output that looks like itself—same architecture, same training lineage, or same family—it tends to give a higher score. Not because the output is objectively better, but because it “feels familiar.” That’s not evaluation, that’s model nepotism.
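
    You can smoke-test your own setup for this without taking anyone's word for it: score the same answers with judges from different model families and look for a gap that always favors kin. A hypothetical Python sketch, where judge_score() stands in for whatever LLM-judge call you actually make:

        # Hypothetical sketch of a preference-leakage check. judge_score()
        # is a placeholder for a real LLM-judge call against a fixed rubric.
        from statistics import mean

        def judge_score(judge_model: str, output: str) -> float:
            raise NotImplementedError("wire up your judge prompt here")

        def self_preference_gap(judge_model, kin_outputs, rival_outputs):
            kin = mean(judge_score(judge_model, o) for o in kin_outputs)
            rival = mean(judge_score(judge_model, o) for o in rival_outputs)
            return kin - rival

        # If judge A's gap is positive on family-A outputs AND judge B's gap
        # is positive on family-B outputs for the same pairs, familiarity,
        # not quality, is doing the grading. That is the nepotism signal.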

In this episode of Pop Goes the Stack, F5's Lori MacVittie, Joel Moses, and Ken Arora explore the concept of preference leakage in AI judgment systems. Tune in to understand the risks, the impact on the enterprise, and actionable strategies to improve model fairness, security, and reliability.

    22 m
  • We're on a brief hiatus, we'll be back soon
    Oct 21 2025

We’re on a brief hiatus. But don’t worry—we’ll be back shortly with more sharp insights, expert takes, and, of course, Lori's signature snark to help you navigate the ever-evolving world of application delivery and security.

    1 m
  • Bots vs Business: AI Fraud & Defending Your Margins
    Oct 14 2025

    A North Carolina musician was arrested after using AI to generate fake bands and bots to stream their songs—racking up over a billion plays and pocketing $10 million in fraudulent royalties. It’s the first U.S. case of AI-driven music streaming fraud, and it’s less about music than it is about bots exploiting business models.


    For enterprises, the lesson is simple: if you treat all traffic as legitimate, bots will eat your margins. With AI making bot behavior increasingly human-like, traditional defenses like packet filtering or basic behavior analysis are no longer enough.


In this episode, Lori MacVittie is joined by Principal Threat Researcher Malcolm Heath to dive into the challenges of defending against AI-driven bots, especially as tools and agentic AI make attacks more sophisticated. They uncover key strategies to identify and neutralize bots while exploring the evolving role of observability and behavioral detection in enterprise security.

    Learn how you can stay ahead of the curve and keep your stack whole with additional insights on app security, multicloud, AI, and emerging tech: https://www.f5.com/company/octo

Read more about the AI Music Fraud case: https://www.wired.com/story/ai-bots-streaming-music/

    22 m
  • Crossing the streams
    Oct 7 2025

    Prompt injection isn't some new exotic hack. It’s what happens when you throw your admin console and your users into the same text box and pray the intern doesn’t find the keys to production. Vendors keep chanting about “guardrails” like it’s a Harry Potter spell, but let’s be real—if your entire security model is “please don’t say ignore previous instructions,” you’re not doing security, you’re doing improv.


    So we're digging into what it actually takes to keep agentic AI from dumpster-diving its own system prompts: deterministic policy engines, mediated tool use, and maybe—just maybe—admitting that your LLM is not a CISO. Because at the end of the day, you can’t trust a probabilistic parrot to enforce your compliance framework. That’s how you end up with a fax machine defending against a DDoS—again.
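
    Here's roughly what a deterministic policy engine looks like next to all that improv, sketched in Python with hypothetical names: the model only proposes a tool call, and a boring, non-probabilistic allowlist decides whether it runs.

        # Hypothetical sketch of mediated tool use: the LLM proposes, a
        # deterministic layer disposes. No prompts, no probabilities.
        ALLOWED = {
            "search_docs": {"query", "max_results"},   # tool -> allowed params
            "create_ticket": {"title", "body"},
        }

        def run_tool(tool, params):
            return f"ran {tool} with {params}"         # stand-in for dispatch

        def mediate(tool: str, params: dict):
            if tool not in ALLOWED:
                raise PermissionError(f"{tool!r} is not on the allowlist")
            extra = params.keys() - ALLOWED[tool]
            if extra:
                raise PermissionError(f"params {extra} not permitted")
            return run_tool(tool, params)              # only now does it run

        print(mediate("search_docs", {"query": "BOLA", "max_results": 3}))
        # mediate("drop_prod_db", {})   # PermissionError, no matter how
        #                               # nicely the prompt asked.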


The core premise here is that prompt injection is not actually injection, it's system prompt manipulation—but it's not a bug, it's by design. There's a GitHub repo full of system prompts extracted by folks and a number of articles on "exfiltration" of system prompts. Join F5's Lori MacVittie, Joel Moses, and Jason Williams as they explain why it's so easy, why it's hard to prevent, and possible mechanisms for constraining AI to minimize damage. Because you can't stop it. At least not yet.

    21 m
  • Agentic APIs Have PTSD
    Sep 30 2025

    Your APIs were designed for humans and orderly machines: clean request, tidy response, stateless, rate-limited. Then along came agentic AI—recursive, stateful, jittery little things that retry forever, chain calls together, and dream up new query paths at 3 a.m.

    The result? Your APIs start looking less like infrastructure and more like trauma patients. Rate limits collapse. Monitoring floods. Security controls meant for human logins don’t make sense when the caller is a bot acting on its own intent.

    The punchline: enterprises aren’t serving users anymore, they’re serving swarms of other AIs. If you don’t rethink throttling, observability, and runtime policy, your endpoints are going to get steamrolled.
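
    One way to rethink that throttling for swarms, sketched in Python with hypothetical names: key a token bucket on agent identity (a signed workload ID, say) instead of a human session or source IP, so one jittery toolchain exhausts its own budget and nobody else's.

        # Hypothetical sketch: per-agent token buckets. One retry-happy agent
        # burns its own budget; the rest of the swarm keeps its lights on.
        import time

        class TokenBucket:
            def __init__(self, rate: float, burst: int):
                self.rate, self.capacity = rate, burst   # tokens/sec, max burst
                self.tokens, self.last = float(burst), time.monotonic()

            def allow(self) -> bool:
                now = time.monotonic()
                self.tokens = min(self.capacity,
                                  self.tokens + (now - self.last) * self.rate)
                self.last = now
                if self.tokens >= 1.0:
                    self.tokens -= 1.0
                    return True
                return False              # send a 429, not a pager alert

        buckets: dict[str, TokenBucket] = {}

        def admit(agent_id: str) -> bool:
            bucket = buckets.setdefault(agent_id, TokenBucket(rate=5.0, burst=10))
            return bucket.allow()

        print(admit("agent-7"))   # True until agent-7 spends its burst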

    Join host Lori MacVittie and F5 guest Connor Hicks to explore how enterprises can adapt and thrive—hit play now to future-proof your APIs!

    Read AI Agentic workflows and Enterprise APIs: Adapting API architectures for the age of AI agents: https://arxiv.org/abs/2502.17443

    22 m