Episodes

  • The Product and Service Story That Every Scrum Master Needs to Hear | Lai-Ling Su
    Feb 23 2026
    Lai-Ling Su: The Product and Service Story That Every Scrum Master Needs to Hear

    Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes.

    "It was kind of at that moment that I realized, like, community was about providing people with the opportunities that they otherwise wouldn't have had. And whilst you could technically execute your product or service well, the customer experience is fundamentally a deeply emotional one." - Lai-Ling Su

    Lai-Ling shares a powerful story from when she was just 11 years old, running front of house at her family's restaurant inside an Australian workers' club. When a popular band was booked to play on a Saturday night, the venue reached max capacity—and almost everyone wanted food. With no ticketed order system and only her memory to match orders to customers, chaos ensued.

    One father approached her, yelling about how long his food was taking. At the end of the night, Lai-Ling mustered the courage that only an 11-year-old possesses and asked him point-blank why he had reacted so strongly. His answer floored her: he only got to see his son every other weekend, and this evening was supposed to create a cherished memory together. Instead, they were hangry most of the night.

    This moment taught Lai-Ling that customer experience is fundamentally emotional—it's not about the food, but about what the interaction means to the people we serve. For the next decade, she continuously inspected every aspect of their restaurant operations, always seeking to improve how they served customers while remaining commercially viable.

    In this episode, we refer to the "Scrum Masters are the future CEOs" blog post by Vasco, and a podcast by the Lean Enterprise Institute.

    Self-reflection Question: When was the last time you paused to understand the deeper meaning behind a stakeholder's frustration, rather than just addressing the surface-level complaint?

    [The Scrum Master Toolbox Podcast Recommends]

    🔥In the ruthless world of fintech, success isn't just about innovation—it's about coaching!🔥

    Angela thought she was just there to coach a team. But now, she's caught in the middle of a corporate espionage drama that could make or break the future of digital banking. Can she help the team regain their mojo and outwit their rivals, or will the competition crush their ambitions? As alliances shift and the pressure builds, one thing becomes clear: this isn't just about the product—it's about the people.

    🚨 Will Angela's coaching be enough? Find out in Shift: From Product to People—the gripping story of high-stakes innovation and corporate intrigue.

    Buy Now on Amazon

    [The Scrum Master Toolbox Podcast Recommends]

    About Lai-Ling Su

    Lai-Ling fixes the gap between operating model design and real-world delivery through her interim executive, consulting, capability building, and executive coaching work. She also equips product and transformation leaders with the capability everyone expects but no one teaches - how to navigate the people, politics, and performance expectations that come with their jobs.

    You can link with Lai-Ling Su on LinkedIn.

    19 m
  • BONUS From Combat Pilot to Scrum Master - How Military Leadership Transforms Agile Teams With Nate Amidon
    Feb 21 2026
    BONUS: From Combat Pilot to Scrum Master - How Military Leadership Transforms Agile Teams

    In this bonus episode, we explore a fascinating career transition with Nate Amidon, a former Air Force combat pilot who now helps software teams embed military-grade leadership principles into their Agile practices. Nate shares how the high-stakes discipline of aviation translates directly into building high-performing development teams, and why veterans make exceptional Scrum Masters.

    The Brief-Execute-Debrief Cycle: Aviation Meets Agile

    "We would mission brief in the morning and make sure everyone was on the same page. Then we problem-solved our way through the day, debriefed after, and did it again. When I learned about what Agile was, I realized it's the exact same thing."

    Nate's transition from flying C-17 cargo planes to working with Agile teams wasn't as jarring as you might expect. Flying missions that lasted 2-3 weeks with a crew of 5-7 people taught him the fundamentals of iterative work: daily alignment, continuous problem-solving, and regular reflection. The brief-execute-debrief cycle that every military pilot learns mirrors the sprint cadence that Agile teams follow. Time-boxing wasn't new to him either—when you're flying, you only have so much fuel, so deadlines aren't arbitrary constraints but physical realities that demand disciplined execution. In this episode with Christian Boucousis, we also discuss the brief-execute-debrief cycle in detail. In this segment, we also refer to Cynefin, and the classification of complexity.

    Alignment: The Real Purpose Behind Ceremonies

    "It's really important to make sure everyone understands why you're doing what you're doing. We don't brief, execute, debrief just because—we do it because we know that getting everybody on the same page is really important."

    One of the most valuable insights Nate brings to his work with software teams is the understanding that Agile ceremonies aren't bureaucratic checkboxes—they're alignment mechanisms. The purpose of sprint planning, daily stand-ups, and retrospectives is to ensure everyone knows the mission and can adapt when circumstances change. Interestingly, Nate notes that as teams become more high-performing, briefings get shorter and more succinct. The discipline remains, but the overhead decreases as shared context grows.

    The Art of Knowing When to Interrupt

    "There are times when you absolutely should not interrupt an engineer. Every shoulder tap is a 15-minute reset for them to get back into the game. But there are also times when you absolutely should shoulder tap them."

    High-performing teams understand the delicate balance between deep work and necessary communication. Nate shares an aviation analogy: when loadmasters are loading complex cargo like tanks and helicopters, interrupting them with irrelevant updates would be counterproductive. But if you discover that cargo shouldn't be on the plane, that's absolutely worth the interruption. This judgment—knowing what matters enough to break flow—is something veterans develop through high-stakes experience. Building this awareness across a software team requires:

    • Understanding what everyone is working on
    • Knowing the bigger picture of the mission
    • Creating psychological safety so people feel comfortable speaking up
    • Developing shared context through daily stand-ups and retrospectives

    Why Veterans Make Exceptional Scrum Masters

    "I don't understand why every junior officer getting out of the military doesn't just get automatically hired as a Scrum Master. If you were to say what we want a Scrum Master to do, and what a junior military officer does—it's line for line."

    Nate's company, Form100 Consulting, specifically hires former military officers and senior NCOs for Agile roles, often bringing them on without tech experience. The results consistently exceed expectations because veterans bring foundational leadership skills that are difficult to develop elsewhere: showing up on time, doing what you say you'll do, taking care of team members, seeing the forest through the trees. These intangible qualities—combined with the ability to stay calm, listen actively, and maintain integrity under pressure—make for exceptional servant leaders in the software development space.

    The Onboarding Framework for Veterans

    "When somebody joins, we have assigned everybody a wingman—a dedicated person that they check in with regularly to bounce ideas off, to ask questions."

    Form100's approach to transitioning veterans into tech demonstrates the same principles they advocate for Agile teams. They screen carefully for the right personality fit, provide dedicated internal training on Agile methodologies and program management, and pair every new hire with a wingman. This military unit culture helps bridge the gap between active duty service and the private sector, addressing one of the biggest challenges: the expectation gap around ...
    35 m
  • BONUS From Individual AI Wins to Team-Wide Transformation With Monica Marquez
    Feb 20 2026
    BONUS: From Individual AI Wins to Team-Wide Transformation

    What happens when the leaders we trust to guide transformation become the bottleneck slowing it down? In this episode, Monica Marquez—with 25+ years in people transformation at Goldman Sachs, Google, and beyond—reveals why the old equation of effort equals success is breaking down, and what leaders must unlearn to thrive in the age of AI.

    The Leadership Crisis Nobody Trained You For

    "No one ever really teaches you what it really takes to be a leader. You know what you do really well, but how do you help other people do that too? That's when I realized it comes down to becoming a really good leader."

    Monica's origin story captures a universal struggle: being promoted for technical excellence, then discovering that leading people requires completely different skills. She spent her career at organizations like Goldman Sachs, Bank of America, Ernst & Young, and Google realizing that systems weren't built for everyone—and that the real work of leadership is redesigning those systems to unlock human potential. Today, through her company Flipwork, she helps leaders and teams become what she calls "agentic humans"—people who leverage AI to get ahead rather than getting left behind.

    The Command and Control Trap

    "Most leadership development still rewards the command and control archetype. The person who has all the answers, the decisive hero. But AI moves so fast that when you think you've fixed something, it changes the next day. Leaders are starting to become bottlenecks."

    The research shows the problem clearly: middle management is where AI adoption stalls. These leaders cling to command and control because relinquishing it feels like losing their value. Worse, they have an unspoken fear of managing AI agents—they don't want to be liable for outputs they don't fully control. Monica reframes this: treat your AI tools like an artificial intern, not artificial intelligence. You wouldn't take an intern's first draft and hand it to leadership. You train them, provide context, and finesse the output. The same discipline applies to LLMs.

    Rewriting the Success Equation

    "Effort = success is the old equation. That's pre-AI. The new equation is impact equals success. Output equals success, and impact equals worth."

    This might be the most important shift leaders need to make. When tasks that took 4 hours now take 30 minutes, deeply conditioned beliefs about work ethic get threatened. Monica sees leaders questioning their worth because they're producing faster. "I was always taught I have to work twice as hard to get half as far," she shares. "Now what used to take me 10 hours, I can get done in 4. Am I not worthy anymore of being a high performer?" The answer is to measure impact, not effort—and that requires rewiring beliefs that may be decades old.

    Why Individual AI Adoption Doesn't Scale

    "Teams are using AI as individual contributors, but they aren't using AI in their actual workflows and the handoffs. That's why leaders are scratching their heads, like, why aren't we seeing the ROI bubble up into the team?"

    Here's the gap most organizations miss: individuals save an hour or two per day using AI for personal productivity, but the team never sees compounding benefits. The handoffs between team members remain manual. The friction points persist. Monica's solution is "flip labs"—90-day sprints where teams take one critical workflow, dissect it, and rebuild it with AI. Where can AI handle the $10 tasks so humans can focus on $10,000 decisions? Where should humans remain in the loop? IKEA did this with customer service, retraining displaced workers into design roles. Revenue increased without adding headcount.

    Leading Through Uncertainty

    "We're humans wired for certainty, but Agile is a system designed for uncertainty. That's where the behavioral psychology comes in—how do you help people move forward despite the uncertainty?"

    The fundamental challenge is biological: our brains seek certainty, but the only certain thing now is that change will come faster than we can adapt. Monica works with teams to create psychologically safe spaces for experimentation—A/B testing old workflows against AI-augmented ones, measuring outputs, and learning from failures. "Sometimes we learn more from the failures than we do the successes," she notes. The leaders who create permission for testing and learning will pull ahead; those who demand control will become the bottleneck that slows their entire organization.

    About Monica Marquez

    Monica Marquez is a leadership and workplace AI advisor with 25+ years in people transformation. She coined the "returnship" at Goldman Sachs, helped found Google's Product Inclusion Council, and now guides leaders and teams to adopt AI, agile, and inclusion practices that drive results through her company Flipwork, Inc. You can connect with Monica Marquez on LinkedIn and subscribe to her Ay, Ay,...
    33 m
  • BONUS The Future of Seeing—Why AI Vision Will Transform Medicine and Human Perception With Daniel Sodickson
    Feb 19 2026
    BONUS: The Future of Seeing—Why AI Vision Will Transform Medicine and Human Perception

    What if the next leap in AI isn't about thinking, but about seeing? In this episode, Daniel Sodickson—physicist, medical imaging pioneer, and author of "The Future of Seeing"—argues we're on the edge of a vision revolution that will change medicine, technology, and even human perception itself.

    From Napkin Sketch to Parallel Imaging

    "I was doodling literally on a napkin in a piano bar in Boston and came up with a way to get multiple lines at once. I ran to my mentor and said, 'Hey, I have this idea, never mind my paper.' And he said, 'Who are you again? Sure, why not.' And it worked."

    Daniel's journey into imaging began with a happy accident. While studying why MRI couldn't capture the beating heart fast enough, he realized the fundamental bottleneck: MRI machines scan one line at a time, like old CRT screens. His insight—imaging in parallel to capture multiple lines simultaneously—revolutionized the field. This connection between natural vision (our eyes capture entire scenes at once) and artificial imaging systems set him on a 29-year journey exploring how we can see what was once invisible.

    Upstream AI: Changing What We Measure

    "Most often when we envision AI, we think of it as this downstream process. We generate our data, make our image, then let AI loose instead of our brains. To me, that's limited. Why aren't we thinking of tasks that AI can do that no human could ever do?"

    Daniel introduces a crucial distinction between "downstream" and "upstream" AI. Downstream AI takes existing images and interprets them—essentially competing with human experts. Upstream AI changes the game entirely by redesigning what data we gather in the first place. If we know a machine learning system will process the output, we can build cheaper, more accessible sensors. Imagine monitoring devices built into beds or chairs that don't produce perfect images but can detect whether you've changed since your last comprehensive scan. AI fills in the gaps using learned context about how bodies and signals behave.

    The Power of Context and Memory

    "The world we see is a lie. Two eyes are not nearly enough to figure out exactly where everything is in space. What the brain is doing is using everything it's learned about the world—how light falls on surfaces, how big people are compared to objects—and filling in what's missing."

    Our brains don't passively receive images; they actively construct reality using massive amounts of learned context. Daniel argues we can give imaging machines the same superpower. By training AI on temporal patterns—how healthy bodies change over time, what signals precede disease—we create systems with "memory" that can make sophisticated judgments from incomplete data. Today's signal, combined with your history and learned patterns from millions of others, becomes far more informative than any single pristine image could be.

    From Reactive to Proactive Health

    "I've started to wonder why we use these amazing MRI machines only once we already know you're sick. Why do we use them reactively rather than proactively?"

    This question drove Daniel to leave academia after 29 years and join Function Health, a company focused on proactive imaging and testing to catch disease before it develops. The vision: a GPS for your health. By combining regular blood panels, MRI scans, and wearable data, AI can monitor whether you look like yourself or have changed in worrisome ways. The goal isn't replacing expert diagnosis but creating an early warning system that surfaces problems while they're still easily treatable.

    Seeing How We See

    "Sometimes when I'm walking along, everything I'm seeing just fades away. And what I see instead is how I'm seeing. I imagine light bouncing off of things and landing in my eye, this buzz of light zipping around as fast as anything in the universe can go."

    After decades studying vision, Daniel experiences the world differently. He finds himself deconstructing his own perception—tracing sight lines, marveling at how we've evolved to turn the chaos of sensation into spatially organized information. This meta-awareness extends to his work: every new imaging modality has driven scientific discovery, from telescopes enabling the Copernican Revolution to MRI revealing the living body. We're now at another inflection point where AI doesn't just interpret images but transforms our relationship with perception itself.

    In this episode, we refer to An Immense World: How Animal Senses Reveal the Hidden Realms Around Us by Ed Yong on animal perception, and A Path Towards Autonomous Machine Intelligence by Yann LeCun on building AI more like the brain.

    About Daniel Sodickson

    Daniel K. Sodickson is a physicist in medicine and chief medical scientist at Function Health. Previously at NYU, and a gold medalist and past president of the International Society for ...
    37 m
  • AI Assisted Coding: How Spending 4x More on Code Quality Doubled Development Speed With Eduardo Ferro
    Feb 18 2026
    AI Assisted Coding: How Spending 4x More on Code Quality Doubled Development Speed

    What happens when you combine nearly 30 years of engineering experience with AI-assisted coding? In this episode, Eduardo Ferro shares his experiments showing that AI doesn't replace good practices—it amplifies them. The result: doubled productivity while spending four times more on code quality.

    Vibe Coding vs Production-Grade AI Development

    "Vibe coding is a flow-driven, curiosity-based way of building software with AI. It's less about meticulously reviewing each line of code, and more about letting the AI steer the process—perfect for quick experiments, side projects, MVPs, and prototypes."

    Edu draws a clear distinction between vibe coding and production AI development. Vibe coding is exploration-focused, where you let AI drive while you learn and discover. Production AI coding is goal-focused, with careful planning, spec definition, and identification of edge cases before implementation. Both use small, safe steps and continuous conversation with the AI, but production code demands architectural thinking, security analysis, and sustainability practices. The key insight is that even vibe coding benefits from engineering discipline—as experiments grow, you need sustainable practices to maintain flexibility.

    How AI Doubled My Productivity

    "I was investing four times more in refactoring, cleanup, deleting code, introducing new tests, improving testability, and security analysis than in generating new features. And at the same time, globally, I think I more or less doubled my pace of work."

    Edu's two-month experiment with production code revealed a counterintuitive finding: by spending 4x more time on code quality activities—refactoring, cleanup, test improvement, and security analysis—he actually doubled his overall delivery speed. The secret lies in fast feedback loops. With AI, you can implement a feature, run automated code review, analyze security, prioritize improvements, and iterate—all within an hour. What used to be a day's work happens in a single focused session, and the quality improvements compound over time.

    The Positive Spiral of Code Removal

    "We removed code, so we removed all the features that were not being used. And whenever I remove this code, the next step is to automatically try to see, okay, can I simplify the architecture."

    One of the most powerful practices Edu discovered is using AI to accelerate code removal. By connecting product analytics to identify unused features, then using AI to quickly remove them, you trigger a positive spiral: removing code makes architecture changes easier, easier architecture changes enable faster feature development, which leads to more opportunities for simplification. This creates a self-reinforcing cycle that humans historically have been reluctant to pursue because removal was as expensive as creation.

    Preparing the System Before Introducing Change

    "What I want to generate is this new functionality—how should I change my system to make it super easy to introduce this one? It's not about making the change, it's about making the change easy."

    Edu describes a practice that was previously too expensive: preparing the system before introducing changes. By analyzing architecture decision records, understanding the existing design, and adapting the codebase first, new features become trivial to implement. AI makes this preparation cheap enough to do routinely. The result is systems that evolve cleanly rather than accumulating technical debt with each new feature.

    AI as an Amplifier: The Double-Edged Sword

    "AI is an amplifier. People who already know how to develop software well will continue to develop it well and faster. People who did not know how to develop software well will probably get in trouble much faster than they would otherwise."

    Edu's central metaphor is AI as an amplifier—it doesn't replace engineering judgment, it magnifies its presence or absence. Teams with strong practices will see accelerated improvement; teams without them will generate technical debt faster than ever. This has implications beyond individual productivity: the market will be saturated with solutions, making product discovery and distribution channels more important than implementation capability.

    In this episode, we refer to Edu's blog post Fast Feedback, Fast Features: My AI Assisted Coding Experiment and Vibe Coding by Gene Kim.

    About Eduardo Ferro

    Edu Ferro is Head of Engineering and Data Platform at ClarityAI, with nearly 30 years' experience. He helps teams deliver value through Lean, XP, and DevOps, blending technical depth with product thinking. Recently he explores AI-assisted product development, sharing insights and experiments on his site eferro.net. You can connect with Edu Ferro on LinkedIn.
    33 m
  • AI Assisted Coding: Stop Building Features, Start Building Systems with AI With Adam Bilišič
    Feb 17 2026
    AI Assisted Coding: Stop Building Features, Start Building Systems with AI

    What separates vibe coding from truly effective AI-assisted development? In this episode, Adam Bilišič shares his framework for mastering AI-augmented coding, walking through five distinct levels that take developers from basic prompting to building autonomous multi-agent systems.

    Vibe Coding vs AI-Augmented Coding: A Critical Distinction

    "The person who is actually creating the app doesn't have to have an in-depth overview or understanding of how the app works in the background. They're essentially a manual tester of their own application, but they don't know how the data structure is, what are the best practices, or the security aspects."

    Adam draws a clear line between vibe coding and AI-augmented coding. Vibe coding allows non-developers to create functional applications without understanding the underlying architecture—useful for product owners to create visual prototypes or help clients visualize their ideas. AI-augmented coding, however, is what professional software engineers need to master: using AI tools while maintaining full understanding of the system's architecture, security implications, and best practices. The key difference is that augmented coding lets you delegate repetitive work while retaining deep knowledge of what's happening under the hood.

    From Building Features to Building Systems

    "When you start building systems, instead of thinking 'how can I solve this feature,' you are thinking 'how can I create either a skill, command, sub-agent, or other things which these tools offer, to then do this thing consistently again and again without repetition.'"

    The fundamental mindset shift in AI-augmented coding is moving from feature-level thinking to systems-level thinking. Rather than treating each task as a one-off prompt, experienced practitioners capture their thinking process into reusable recipes. This includes documenting how to refactor specific components, creating templates for common patterns, and building skills that encode your decision-making process. The goal is translating your coding practices into something the AI can repeatedly execute for any new feature.

    Context Management: The Critical Skill for Working With AI

    "People have this tendency to install everything they see on Reddit. They never check what is then loaded within the context just when they open the coding agent. You can check it, and suddenly you see 40 or 50% of your context is taken just by MCPs, and you didn't do anything yet."

    One of the most overlooked aspects of AI-assisted coding is context management. Adam reveals that many developers unknowingly fill their context window with MCP (Model Context Protocol) tools they don't need for the current task. The solution is strategic use of sub-agents: when your orchestrator calls a front-end sub-agent, it gets access to Playwright for browser testing, while your backend agent doesn't need that context overhead. Understanding how to allocate context across specialized agents dramatically improves results.

    The Five Levels of AI-Augmented Coding

    "If you didn't catch up or change your opinion in the last 2-3 years, I would say we are getting to the point where it will be kind of the last chance to do so, because the technology is evolving so fast."

    Adam outlines a progression from beginner to expert:

    • Level 1 - Master of Prompts: Learning to write effective prompts, but constantly repeating context about architecture and preferences
    • Level 2 - Configuration Expert: Using files like .cursorrules or CLAUDE.md to codify rules the agent should always follow
    • Level 3 - Context Master: Understanding how to manage context efficiently, using MCPs strategically, creating markdown files for reusable information
    • Level 4 - Automation Master: Creating custom commands, skills, and sub-agents to automate repetitive workflows
    • Level 5 - The Orchestrator: Building systems where a main orchestrator delegates to specialized sub-agents, each running in their own context window

    The Power of Specialized Sub-Agents

    "The sub-agent runs in its own context window, so it's not polluted by whatever the orchestrator was doing. The orchestrator needs to give it enough information so it can do its work."

    At the highest level, developers create virtual teams of specialized agents. The orchestrator understands which sub-agent to call for front-end work, which for backend, and which for testing. Each agent operates in a clean context, focused on its specific domain. When the tester finds issues, it reports back to the orchestrator, which can spin up the appropriate agent to fix problems. This creates a self-correcting development loop that dramatically increases throughput.

    In this episode, we refer to the Claude Code subreddit and IndyDevDan's YouTube channel for learning resources.

    About Adam Bilišič

    Adam Bilišič is a former CTO of a Swiss company with over 12 ...
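    Adam's Level 5 pattern can be sketched in plain Python: a coordinator routes each task to a specialist agent that keeps its own isolated context. This is a hypothetical illustration of the idea, not Claude Code's actual API—the class names, tool lists, and `run` method are invented for the example.

```python
# Hypothetical sketch of the Level 5 "orchestrator" pattern:
# each sub-agent keeps its own isolated context, and the
# orchestrator passes along only the task it needs to do.

class SubAgent:
    def __init__(self, name, tools):
        self.name = name
        self.tools = tools    # e.g. Playwright only for the front-end agent
        self.context = []     # private context window, never shared

    def run(self, task):
        self.context.append(task)
        # A real agent would call an LLM here; we just report the routing.
        return f"{self.name} handled: {task} (tools: {', '.join(self.tools)})"

class Orchestrator:
    def __init__(self):
        self.agents = {
            "frontend": SubAgent("frontend", ["playwright"]),
            "backend": SubAgent("backend", ["db-client"]),
            "tester": SubAgent("tester", ["test-runner"]),
        }

    def delegate(self, domain, task):
        # The orchestrator decides which specialist gets the task,
        # giving it just enough information to work in a clean context.
        return self.agents[domain].run(task)

orchestrator = Orchestrator()
print(orchestrator.delegate("frontend", "add login form"))
print(orchestrator.delegate("tester", "verify login flow"))
```

    The point of the sketch is the isolation: the backend agent's context stays empty while the front-end and tester agents work, mirroring Adam's note that each sub-agent runs unpolluted by the orchestrator's history.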
    37 m
  • When AI Decisions Go Wrong at Scale—And How to Prevent It With Ran Aroussi
    Feb 16 2026
    BONUS: When AI Decisions Go Wrong at Scale—And How to Prevent It We've spent years asking what AI can do. But the next frontier isn't more capability—it's something far less glamorous and far more dangerous if we get it wrong. In this episode, Ran Aroussi shares why observability, transparency, and governance may be the difference between AI that empowers humans and AI that quietly drifts out of alignment. The Gap Between Demos and Deployable Systems "I've noticed that I watched well-designed agents make perfectly reasonable decisions based on their training, but in a context where the decision was catastrophically wrong. And there was really no way of knowing what had happened until the damage was already there." Ran's journey from building algorithmic trading systems to creating MUXI, an open framework for production-ready AI agents, revealed a fundamental truth: the skills needed to build impressive AI demos are completely different from those needed to deploy reliable systems at scale. Coming from the EdTech space where he handled billions of ad impressions daily and over a million concurrent users, Ran brings a perspective shaped by real-world production demands. The moment of realization came when he saw that the non-deterministic nature of AI meant that traditional software engineering approaches simply don't apply. While traditional bugs are reproducible, AI systems can produce different results from identical inputs—and that changes everything about how we need to approach deployment. Why Leaders Misunderstand Production AI "When you chat with ChatGPT, you go there and it pretty much works all the time for you. But when you deploy a system in production, you have users with unimaginable different use cases, different problems, and different ways of phrasing themselves." The biggest misconception leaders have is assuming that because AI works well in their personal testing, it will work equally well at scale. 
When you test AI with your own biases and limited imagination for scenarios, you're essentially seeing a curated experience. Real users bring infinite variation: non-native English speakers constructing sentences differently, unexpected use cases, and edge cases no one anticipated. The input space for AI systems is practically infinite because it's language-based, making comprehensive testing impossible. Multi-Layered Protection for Production AI "You have to put in deterministic filters between the AI and what you get back to the user." Ran outlines a comprehensive approach to protecting AI systems in production: Model version locking: Just as you wouldn't randomly upgrade Python versions without testing, lock your AI model versions to ensure consistent behavior Guardrails in prompts: Set clear boundaries about what the AI should never do or share Deterministic filters: Language firewalls that catch personal information, harmful content, or unexpected outputs before they reach users Comprehensive logging: Detailed traces of every decision, tool call, and data flow for debugging and pattern detection The key insight is that these layers must work together—no single approach provides sufficient protection for production systems. Observability in Agentic Workflows "With agentic AI, you have decision-making, task decomposition, tools that it decided to call, and what data to pass to them. So there's a lot of things that you should at least be able to trace back." Observability for agentic systems is fundamentally different from traditional LLM observability. When a user asks "What do I have to do today?", the system must determine who is asking, which tools are relevant to their role, what their preferences are, and how to format the response. Each user triggers a completely different dynamic workflow. 
Ran emphasizes the need for multi-layered access to observability data: engineers need full debugging access with appropriate security clearances, while managers need topic-level views without personal information. The goal is building a knowledge graph of interactions that allows pattern detection and continuous improvement.

Governance as Human-AI Partnership

"Governance isn't about control—it's about keeping people in the loop so AI amplifies, not replaces, human judgment."

The most powerful reframing in this conversation is viewing governance not as red tape but as a partnership model. Some actions—like answering support tickets—can be fully automated with occasional human review. Others—like approving million-dollar financial transfers—require human confirmation before execution. The key is designing systems where AI can do the preparation work while humans retain decision authority at critical checkpoints. This mirrors how we build trust with human colleagues: through repeated successful interactions over time, gradually expanding autonomy as confidence grows.

Building Trust Through Incremental Autonomy

"Working ...
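The partnership model described above, automating low-risk work while keeping humans at high-risk checkpoints, can be sketched as a simple approval gate. All names here (`GovernanceGate`, `Action`, the "high"/"low" risk labels) are hypothetical illustrations of the idea, not a real MUXI feature.

```python
from dataclasses import dataclass, field

# Hypothetical governance checkpoint: low-risk actions run automatically,
# high-risk ones are queued for human confirmation before execution.
# Names and risk labels are illustrative assumptions.
@dataclass
class Action:
    name: str
    risk: str  # "low" or "high"

@dataclass
class GovernanceGate:
    pending: list = field(default_factory=list)

    def submit(self, action: Action) -> str:
        if action.risk == "high":
            # Hold the action until a human approves it.
            self.pending.append(action)
            return f"{action.name}: awaiting human approval"
        # Low-risk work proceeds automatically (with occasional review).
        return f"{action.name}: auto-executed"

gate = GovernanceGate()
print(gate.submit(Action("answer support ticket", "low")))   # auto-executed
print(gate.submit(Action("transfer $1M", "high")))           # awaiting human approval
```

Expanding autonomy incrementally would then amount to reclassifying action types from "high" to "low" as the system earns trust through repeated successful runs.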
    41 m
  • BONUS: Why Embedding Sales with Engineering in Stealth Mode Changed Everything for Snowflake With Chris Degnan
    Feb 14 2026
BONUS: Why Embedding Sales with Engineering in Stealth Mode Changed Everything for Snowflake

In this episode, we talk about what it really takes to scale go-to-market from zero to billions. We interview Chris Degnan, a builder of one of the most iconic revenue engines in enterprise software at Snowflake. This conversation is grounded in the transformation described in his book Make It Snow—the journey from early-stage chaos to durable, aligned growth.

Embedding Sales with Engineering While Still in Stealth

"I don't expect you to sell anything for 2 years. What I really want you to do is get a ton of feedback and get customers to use the product so that when we come out of stealth mode, we have this world-class product."

Chris joined Snowflake when there were zero customers and the company was still in stealth mode. The counterintuitive move of embedding sales next to engineering so early wasn't about driving immediate revenue; it was about understanding product-market fit. Chris's job was to get customers to try the product, use it for free, and break it. And break it they did. This early feedback led to material changes in the product before general availability. The approach helped shape their ideal customer profile (ICP) and gave the engineering team real-world validation that shaped Snowflake's technical direction. In a world where startups are pressured to show revenue immediately, Snowflake's investors took the opposite approach: focus first on building a product people cannot live without.

Why Sales and Marketing Alignment Is Existential

"If we're not driving revenue, if the revenue is not growing, then how are we going to be successful? Revenue was king."

When Denise Persson joined as CMO, she shifted the conversation from marketing qualified leads (MQLs) to qualified meetings for the sales team. This simple reframe eliminated the typical friction between sales and marketing. Both leaders shared challenges openly and held each other accountable.
When someone in either organization wasn't being respectful to the other team, they addressed it directly. Chris warns founders against creating artificial friction between sales and marketing: "A lot of founders who are engineers think that they want to create this friction between sales and marketing. And that's the opposite instinct you should have." The key insight is treating sales and marketing as a symbiotic system where revenue is the shared north star.

Coaching Leaders Through Hypergrowth

"If there's a problem in one of our organizations, if someone comes with a mentality that is not great for us, we're gonna give direct feedback to those people."

Chris and Denise maintained tight alignment at the top level of their organizations through four CEO transitions. Their partnership created a culture of accountability that cascaded through both teams. When either hired senior people who didn't fit the culture, they investigated and addressed it. The coaching approach wasn't about winning by authority—it was about maintaining partnership and shared accountability for results. This required unlearning traditional management approaches that pit departments against each other, and instead fostering genuine collaboration.

Cultural Behaviors That Scale (And Those That Don't)

"We got dumb and lazy. We forgot about it. And then we decided, hey, we're gonna go get a little bit more fit, and figure out how to go get the new logos again."

Chris describes himself as a "velocity salesperson" with a hyper-focus on new customer acquisition. This focus worked brilliantly during Snowflake's growth phase—land customers, and the high net retention rate would drive expansion. However, as Snowflake prepared to go public, they took their foot off the gas on new logo acquisition, believing not all new logos were equal. This turned out to be a mistake.
In his final year at Snowflake, working with CEO Sridhar Ramaswamy, they redesigned the sales team to reinvigorate the new-logo acquisition machine. The lesson: the cultural behaviors that fuel early success must be consciously maintained, and sometimes redesigned, as you scale.

Keeping the Message Narrow Before Going Platform

"Eventually, I know you want to be a platform. But having a targeted market when you're initially launching the company, that people are spending money on, makes it easier for your sales team."

Snowflake intentionally positioned itself in the enterprise data warehousing market—a $10-12 billion annual market with 5,000-7,000 enterprise customers—rather than trying to sound "bigger" as a platform play. The strategic advantage was accessing existing budgets. When selling to large enterprises that go through annual planning processes, fitting into an existing budget means sales cycles of 3-6 months instead of 9-18 months. Yes, competition eventually tried to corner Snowflake as "just a cute data warehouse," but by then they had captured significant market share and ...
    27 m