Episodes

  • Low-Code Automation Tools with Teeth: FlowFuse & N8N
    Feb 10 2026

    Low-code automation has grown up, and the competition is getting spicy. In this episode of Pop Goes the Stack, F5's Lori MacVittie and Joel Moses are joined by Aubrey King as they dig into the heavyweight duel between N8N and FlowFuse—two platforms promising to empower teams to automate anything without waiting for overworked developers. We cut through the marketing fluff and look at the real differences in architecture, deployment models, extensibility, security posture, and operational experience. How do they scale? Who controls your data? And what happens when the automation breaks at 2 a.m.? If you care about automation that doesn’t collapse under real-world pressure, you’ll want to hear this.

    Read our F5 research for more on the state of automation in IT: https://www.f5.com/resources/reports/state-of-application-strategy-report

    22 m
  • The New New User Interface: AI in your brain
    Feb 3 2026

    The capability to map brain activity to language isn’t just another UI shift—it’s a paradigm shift in how humans and machines might communicate. If you’re building systems that integrate or rely on neuroscience-adjacent tech (or even simply storing neuro-derived data), you’ll want to treat this as a strategic early warning: new input modalities, new risk surfaces, and new expectations of what “internal” means.

    In this episode of Pop Goes the Stack, F5's Lori MacVittie and Joel Moses unpack emerging research on decoding neural activity into language—turning brain signals into natural-language output. They explore the promise for accessibility alongside major concerns: privacy, “intrusive thoughts,” and how systems decide which signals to surface. With a massive potential “blast radius” if connected to agentic systems, the research serves as a stark reminder of the importance of evaluating AI breakthroughs for practicality and risk.

    Read the original research, Mind captioning: Evolving descriptive text of mental content from human brain activity: https://www.science.org/doi/10.1126/sciadv.adw1464

    Read the summary, "Mind-captioning" AI decodes brain activity to turn thoughts into text: https://www.nature.com/articles/d41586-025-03624-1

    18 m
  • The Impact of Inference: Reliability
    Jan 27 2026

    Traditional reliability meant consistency. Given identical inputs, systems produced identical outputs. Costs were stable and behavior predictable. Inference reliability, on the other hand, is shaped by nondeterminism: outputs vary due to stochastic generation, retraining introduces drift, and token-based billing can cause cost fluctuations. The new dimension of reliability is semantic consistency: the ability to deliver outputs of acceptable quality, accuracy, and predictability over time despite probabilistic behavior.
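
    As a rough illustration of what a semantic-consistency check could look like, here is a minimal Python sketch. The call_model stub and the word-overlap score are hypothetical stand-ins (a production eval would use embeddings or a judge model), not anything prescribed in the episode.

    ```python
    # Minimal sketch: score semantic consistency across repeated inference calls.
    # call_model is a hypothetical stand-in for a real inference client, and word
    # overlap is a crude proxy for a real semantic-similarity metric (e.g., embeddings).
    import random
    from itertools import combinations
    from statistics import mean

    def call_model(prompt: str) -> str:
        # Placeholder for a real, nondeterministic inference call.
        return random.choice([
            "The invoice total is 42 dollars.",
            "Total due on the invoice: 42 dollars.",
            "The invoice appears to describe a shipment.",  # semantic drift
        ])

    def overlap(a: str, b: str) -> float:
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return len(wa & wb) / len(wa | wb) if (wa | wb) else 1.0

    def semantic_consistency(prompt: str, runs: int = 5) -> float:
        outputs = [call_model(prompt) for _ in range(runs)]
        return mean(overlap(a, b) for a, b in combinations(outputs, 2))

    if __name__ == "__main__":
        score = semantic_consistency("Summarize the attached invoice.")
        print(f"consistency score: {score:.2f}")  # gate releases or alert when this drops
    ```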

    In this episode of Pop Goes the Stack, F5's Lori MacVittie and Joel Moses are joined by guests Ken Arora and Kunal Anand as they dive into the topic of reliability in AI systems. They explore the concept of 'slop' (AI variability) as a potential feature rather than a bug, discuss the importance of contextual semantic consistency, and weigh guardrails and evals tailored to specific inference workloads. Tune in to learn how to navigate the evolving AI landscape and take note of practical tools and strategies like multi-model chaining, distillation, and prompt engineering to ensure reliability.

    Find out more in the blog How AI inference changes application delivery: https://www.f5.com/company/blog/how-ai-inference-changes-application-delivery

    23 m
  • The Impact of Inference: Performance
    Jan 20 2026

    Traditional performance meant deterministic response times. Identical inputs produced near-identical execution times. Optimizations reduced latency, but variance was minimal. Enter AI inference, and performance engineering is flipped upside down. Latency depends on model size, tokenization, batching strategies, and generation settings. Identical inputs may produce different response times. The new dimension of performance is variance—not just how fast the system responds, but how response times distribute across requests, how many tokens per second are processed, and how efficient each response is relative to cost.
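
    To make the variance point concrete, here is a minimal Python sketch that profiles repeated, identical requests and reports latency percentiles and tokens per second. The run_inference stub is a hypothetical stand-in for a real inference client.

    ```python
    # Minimal sketch: capture latency spread and tokens/second for repeated,
    # identical requests. run_inference is a hypothetical stand-in for a real
    # client; swap in an actual call to collect production numbers.
    import random
    import statistics
    import time

    def run_inference(prompt: str) -> str:
        time.sleep(random.uniform(0.05, 0.25))      # nondeterministic generation time
        return "token " * random.randint(40, 200)   # variable-length output

    def profile(prompt: str, runs: int = 20) -> None:
        latencies, tokens_per_sec = [], []
        for _ in range(runs):
            start = time.perf_counter()
            output = run_inference(prompt)
            elapsed = time.perf_counter() - start
            latencies.append(elapsed)
            tokens_per_sec.append(len(output.split()) / elapsed)  # crude token count
        cuts = statistics.quantiles(latencies, n=100)
        print(f"p50={cuts[49]:.3f}s p95={cuts[94]:.3f}s p99={cuts[98]:.3f}s "
              f"mean tokens/sec={statistics.mean(tokens_per_sec):.0f}")

    if __name__ == "__main__":
        profile("Summarize this document.")
    ```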


    In this episode of Pop Goes the Stack, Lori MacVittie, Joel Moses, and special guest Nina Forsyth dive into the impact of AI inference on measuring performance. It's time to rethink performance observability and focus on infrastructure optimization, agent-to-agent interactions, and robust measurement techniques. Listen in to learn how traditional approaches must evolve to manage this multi-dimensional puzzle.

    21 m
  • The Impact of Inference: Availability
    Jan 13 2026

    What does "availability" mean in a world of AI inferencing and ever-shifting workloads? It’s no longer just about servers responding or apps being online—availability now hinges on response quality, utility, and even user perception. A fast system that delivers irrelevant or wrong answers? That’s simply unavailable to its users.
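
    As a rough sketch of what a quality-aware availability probe could look like, the Python below treats a service as "available" only when the answer is both timely and useful. The ask_model stub and keyword check are illustrative placeholders; a real probe would score responses against a golden-answer set or an eval model.

    ```python
    # Minimal sketch: an availability probe that checks usefulness, not just liveness.
    # ask_model, the question, and the expected keywords are illustrative placeholders.
    import time

    def ask_model(question: str) -> str:
        # Placeholder for a real inference call behind the service being probed.
        return "Our support line is open 9am to 5pm Eastern, Monday through Friday."

    def probe(question: str, must_mention: list[str], timeout_s: float = 2.0) -> bool:
        start = time.perf_counter()
        answer = ask_model(question)
        fast_enough = (time.perf_counter() - start) <= timeout_s
        useful = all(term.lower() in answer.lower() for term in must_mention)
        return fast_enough and useful  # "available" only if the answer is timely AND useful

    if __name__ == "__main__":
        ok = probe("When is support open?", must_mention=["9am", "5pm"])
        print("available" if ok else "unavailable")  # feed this into health checks
    ```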


    In this episode of Pop Goes the Stack, F5's Lori MacVittie, Joel Moses, and special guest Ken Salchow explore how AI systems are changing the availability game. From the historical binary days of “up or down” to today’s nuanced measures of responsiveness and correctness, they dive into the challenges of keeping apps fast, reliable, and meaningful.


    Listen in to learn how AI inferencing workloads redefine availability metrics, why availability now requires response quality and utility, and whether or not "emotionally available" AI (yes, really) might be the future.


    Find out more in the blog, How AI inference changes application delivery: https://www.f5.com/company/blog/how-ai-inference-changes-application-delivery


    Read the white paper Ken references, Passive Monitoring—Maintaining Performance and Health: https://cdn.studio.f5.com/files/k6fem79d/production/6f4d7a0298a24927ed03c3dc92de339c86e03ef5.pdf

    22 m
  • Shift left into runtime: Vibe coding and AI guardrails
    Jan 6 2026

    Coding pipelines are evolving and AI agents are taking the wheel. In this episode of Pop Goes the Stack, F5's Joel Moses teams up with Buu Lam to dive into “vibe coding”—where tools like Claude Code and GitHub Copilot plan, build, and optimize apps faster than humans can debate lint rules.

    But is faster better? While agentic AI unlocks game-changing efficiency, it also introduces new risks: API keys hardcoded into apps, runaway GitHub Actions, and a stark need for guardrails like sandboxing, runtime tripwires, and logging. As we embrace smarter pipelines, how do we stay in control?
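
    As one small illustration of such a guardrail, here is a minimal Python sketch of a pre-execution tripwire that scans agent-generated code for hardcoded credentials before it runs. The patterns and sample snippet are hypothetical, not a recipe from the episode.

    ```python
    # Minimal sketch: a pre-execution tripwire that scans agent-generated code for
    # hardcoded credentials. The patterns and sample snippet are illustrative; real
    # guardrails would pair this with sandboxing and audit logging.
    import re
    import sys

    SECRET_PATTERNS = [
        re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS-style access key ID
        re.compile(r"(?i)(api[_-]?key|secret|token)\s*=\s*['\"][^'\"]{16,}['\"]"),
    ]

    def tripwire(source: str) -> list[str]:
        findings = []
        for lineno, line in enumerate(source.splitlines(), start=1):
            if any(p.search(line) for p in SECRET_PATTERNS):
                findings.append(f"line {lineno}: possible hardcoded credential")
        return findings

    if __name__ == "__main__":
        generated = 'api_key = "sk-this-should-never-be-hardcoded-123456"\n'
        problems = tripwire(generated)
        if problems:
            print("\n".join(problems))
            sys.exit(1)  # block execution and log, instead of letting the agent run it
    ```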

    Join us as we explore the promises and pitfalls of shifting left with AI agents—and why the era of “self-improving code” is both exciting and terrifying. The machines are coding. Are you ready to debug?

    21 m
  • Taking a holiday break – Pop Goes the Stack returns after New Year’s!
    Dec 23 2025

    Hi, Pop Goes the Stack listeners! The holiday season is here, and we’re taking a short break to recharge, enjoy time with loved ones, and maybe step away from our stacks (just for a bit). Don’t worry—we’ll be back after New Year’s with more sharp insights, expert takes, and our signature snark to help you navigate the fast-paced world of application delivery and security.


    In the meantime, why not revisit some of our past episodes? From AI to cutting-edge hardware and tech industry trends, there’s plenty to dive into.


    Thank you for being part of our community. Wishing you a safe and happy holiday season—see you soon!

    1 m
  • Reshaping the web for AI agents and LLMs
    Dec 16 2025

    The web we built—a tangle of HTML, JavaScript, CSS, APIs, and SEO quirks—has always been messy. But with AI agents and real-time apps now consuming the web as data, that mess becomes a liability. Firecrawl is one of the new tools reshaping how apps see and ingest web content, turning web pages into structured JSON, markdown, screenshots—everything you need for your agents to behave intelligently.
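
    For a sense of what "web pages as data" looks like in practice, here is a minimal Python sketch that requests a page as Markdown via Firecrawl's scrape API. The endpoint, payload, response fields, and FIRECRAWL_API_KEY environment variable shown here are assumptions based on the documented v1 API; check the current docs before relying on them.

    ```python
    # Minimal sketch: asking a scraping service for agent-ready output instead of raw HTML.
    # Endpoint, payload shape, and response fields are assumptions based on Firecrawl's
    # documented v1 scrape API; verify against the current documentation.
    import os
    import requests

    def scrape_as_markdown(url: str) -> str:
        resp = requests.post(
            "https://api.firecrawl.dev/v1/scrape",
            headers={"Authorization": f"Bearer {os.environ['FIRECRAWL_API_KEY']}"},
            json={"url": url, "formats": ["markdown"]},
            timeout=30,
        )
        resp.raise_for_status()
        body = resp.json()
        # Assumed shape: {"success": true, "data": {"markdown": "...", "metadata": {...}}}
        return body["data"]["markdown"]

    if __name__ == "__main__":
        print(scrape_as_markdown("https://example.com")[:500])  # structured text, ready for an LLM
    ```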


    In this episode, F5's Lori MacVittie, Joel Moses, and returning guest Aubrey King dig into how Firecrawl works and why it’s emblematic of a deeper shift: the web is no longer just for browsers. It’s now an ingestion surface, a layer to be crawled, parsed, cleaned, and trusted (or not) by your AI stacks. That means how your app presents itself—not just in UI, but in metadata, APIs, link structure, and content semantics—matters more than ever.

    22 m