Episodes

  • #67 Why the SRE Book Fails Most Orgs — Lessons from a Google Veteran
    Jul 15 2025

    A new or growing SRE team. A copy of the book. A company that says it cares about reliability. What happens next? Usually… not much.

    In this episode, I sit down with Dave O’Connor, a 16-year Google SRE veteran, to talk about what happens when organizations cargo-cult reliability practices without understanding the context they were born in.

    You might know him for his self-deprecating wit and legendary USENIX blurb about being “complicit in the development of the SRE function.”

    This one’s a treat — less “here’s a shiny new tool” and more “here’s what reliability actually looks like when you’ve seen it all.”

    No vendor plugs from Dave at all, just a good old-fashioned chat about what works and what doesn’t.

    Here’s what we dive into:

    * The adoption trap: Why SRE efforts often fail before they begin—especially when new hires care more about reliability than the org ever intended.

    * The SRE book dilemma: Dave’s take on why following the SRE book chapter-by-chapter is a trap for most companies (and what to do instead).

    * The cost of “caring too much”: How engineers burn out trying to force reliability into places it was never funded to live.

    * You build it, you run it (but should you?): Not everyone’s cut out for incident command—and why pretending otherwise sets teams up to fail.

    * Buying vs. building: The real reason even conservative enterprises are turning into software shops — and the reliability nightmare that follows.

    We also discuss the evolving role of reliability in organizations today, from being mistaken for “just ops” to becoming a strategic investment (when done right).

    Dave's seen the waves come and go in SRE — and he's still optimistic. That alone is worth a listen.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit read.srepath.com
    31 m
  • #66 - Unpacking 2025 SRE Report’s Damning Findings
    Jul 1 2025

    I know it’s already six months into 2025, but we recorded this almost three months ago. I’ve been busy with my foray into the world of tech consulting and training. And, well, editing these podcast episodes takes time and care.

    This episode was prompted by the 2025 Catchpoint SRE Report, which dropped some damning but all-too-familiar findings:

    * 53% of orgs still define reliability as uptime only, ignoring degraded experience and hidden toil

    * Manual effort is creeping back in, reversing five years of automation gains

    * 41% of engineers feel pressure to ship fast, even when it undermines long-term stability

    To unpack what this actually means inside organizations, I sat down with Sebastian Vietz, Director of Reliability Engineering at Compass Digital and co-host of the Reliability Enablers podcast.

    Sebastian doesn’t just talk about technical fixes — he focuses on the organizational frictions that stall change, burn out engineers, and leave “reliability” as a slide deck instead of a lived practice.

    We dig into:

    * How SREs get stuck as messengers of inconvenient truths

    * What it really takes to move from advocacy to adoption — without turning your whole org into a cost center

    * Why tech is more like milk than wine (Sebastian explains)

    * And how SREs can strengthen—not compete with—security, risk, and compliance teams

    This one’s for anyone tired of reliability theatrics. No kumbaya around K8s here. Just an exploration of the messy, human work behind making systems and teams more resilient.



    30 m
  • #65 - In Critical Systems, 99.9% Isn’t Reliable — It’s a Liability
    Jun 17 2025

    Most teams talk about reliability with a margin for error. “What’s our SLO? What’s our budget for failure?”

    But in the energy sector? There is no acceptable downtime. Not even a little.
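    A quick back-of-the-envelope sketch (mine, not from the episode) makes the contrast concrete: even the "nines" that sound impressive still permit real outage time every year.

```python
# Back-of-the-envelope: yearly downtime permitted by an availability target.
# Illustrative only -- "three nines" (99.9%) still allows almost nine hours
# of outage per year, fine for many web apps but unthinkable for a power grid.

HOURS_PER_YEAR = 365.25 * 24  # average year, including leap days: 8766 hours

def allowed_downtime_hours(availability: float) -> float:
    """Hours of downtime per year implied by an availability target."""
    return HOURS_PER_YEAR * (1.0 - availability)

for target in (0.99, 0.999, 0.9999, 0.99999):
    print(f"{target:.3%} uptime -> {allowed_downtime_hours(target):7.2f} h/year")
```

    At 99.9% that works out to roughly 8.8 hours a year, which is exactly the margin the energy sector can't accept.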

    In this episode, I talk with Wade Harris, Director of FAST Engineering in Australia, who’s spent 15+ years designing and rolling out monitoring and control systems for critical energy infrastructure like power stations, solar farms, SCADA networks, you name it.

    What makes this episode different is that Wade isn’t a reliability engineer by title, yet reliability is baked into everything his team touches. And that matters more than ever as software creeps deeper into operational technology (OT) and the cloud tries to stake its claim in critical systems.

    We cover:

    * Why 100% uptime is the minimum bar, not a stretch goal

    * How the rise of renewables has increased system complexity — and what that means for monitoring

    * Why bespoke integration and SCADA spaghetti are still normal (and here to stay)

    * The reality of cloud risk in critical infrastructure (“the cloud is just someone else’s computer”)

    * What software engineers need to understand if they want their products used in serious environments

    This isn’t about observability dashboards or DevOps rituals. This is reliability when the lights go out and people risk getting hurt if you get it wrong.

    And it’s a reminder: not every system lives in a feature-driven world. Some systems just have to work. Always. No matter what.



    28 m
  • #64 - Using AI to Reduce Observability Costs
    Jan 28 2025

    Exploring how to manage observability tool sprawl, reduce costs, and leverage AI to make smarter, data-driven decisions.

    It's been a hot minute since the last episode of the Reliability Enablers podcast.

    Sebastian and I have been working on a few things in our realms. On a personal and work front, I’ve been to over 25 cities in the last 3 months and need a breather.

    Meanwhile, listen to this interesting vendor, Ruchir Jha from Cardinal, who’s working on the cutting edge of o11y to keep observability costs from spiraling out of control.

    (To the skeptics, he did not pay me for this episode)

    Here’s an AI-generated summary of what you can expect in our conversation:

    In this conversation, we explore cutting-edge approaches to FinOps, i.e. cost optimization for observability.

    You'll hear about three pressing topics:

    * Managing Tool Sprawl: Insights into the common challenge of juggling 5-15 tools and how to identify which ones deliver real value.

    * Reducing Observability Costs: Techniques to track and trim waste, including how to uncover cost hotspots like overused or redundant metrics.

    * AI for Observability Decisions: Practical ways AI can simplify complex data, empowering non-technical stakeholders to make informed decisions.

    We also touch on the balance between open-source solutions like OpenTelemetry and commercial observability tools.

    Learn how these strategies, informed by Ruchir's experience at Netflix, can help streamline observability operations and cut costs without sacrificing reliability.



    21 m
  • #63 - Does "Big Observability" Neglect Mobile?
    Nov 12 2024

    Andrew Tunall is a product engineering leader focused on pushing the boundaries of reliability with a current focus on mobile observability. Using his experience from AWS and New Relic, he’s vocal about the need for a more user-focused observability, especially in mobile, where traditional practices fall short.

    * Career Journey and Current Role: Andrew Tunall, now at Embrace, a mobile observability startup in Portland, Oregon, started his journey at AWS before moving to New Relic. He shifted to a smaller, Series B company to learn beyond what corporate America offered.

    * Specialization in Mobile Observability: At Embrace, Andrew and his colleagues build tools for consumer mobile apps, helping engineers, SREs, and DevOps teams integrate observability directly into their workflows.

    * Gap in Mobile Observability: Observability for mobile apps is still developing, with early tools like Crashlytics only covering basic crash reporting. Andrew highlights that more nuanced data on app performance, crucial to user experience, is often missed.

    * Motivation for User-Centric Tools: Leaving “big observability” to focus on mobile, Andrew prioritizes tools that directly enhance user experience rather than backend metrics, aiming to be closer to end-users.

    * Mobile's Role as a Brand Touchpoint: He emphasizes that for many brands, the primary consumer interaction happens on mobile. Observability needs to account for this by focusing on user experience in the app, not just backend performance.

    * Challenges in Measuring Mobile Reliability: Traditional observability emphasizes backend uptime, but Andrew sees a gap in capturing issues that affect user experience on mobile, underscoring the need for end-to-end observability.

    * Observability Over-Focused on Backend Systems: Andrew points out that “big observability” has largely catered to backend engineers due to the immense complexity of backend systems with microservices and Kubernetes. Despite mobile being a primary interface for apps like Facebook and Instagram, observability tools for mobile lag behind backend-focused solutions.

    * Lack of Mobile Engineering Leadership in Observability: Reflecting on a former Meta product manager’s observations, Andrew highlights the lack of VPs from mobile backgrounds, which has left a gap in observability practices for mobile-specific challenges. This gap stems partly from frontend engineers often seeing themselves as creators rather than operators, unlike backend teams.

    * OpenTelemetry’s Limitations in Mobile: While OpenTelemetry provides basic instrumentation, it falls short in mobile due to limited SDK support for languages like Kotlin and frameworks like Unity, React Native, and Flutter. Andrew emphasizes the challenges of adapting OpenTelemetry to mobile, where app-specific factors like memory consumption don’t align with traditional time-based observability.

    * SREs as Connective Tissue: Andrew views Site Reliability Engineers (SREs) as essential in bridging backend observability practices with frontend user experience needs. Whether through service level objectives (SLOs) or similar metrics, SREs help ensure that backend metrics translate into positive end-user experiences—a critical factor in retaining app users.

    * Amazon’s Operational Readiness Review: Drawing from his experience at AWS, Andrew values Amazon’s practice of operational readiness reviews before launching new services. These reviews encourage teams to anticipate possible failures or user experience issues, weighing risks carefully to maintain reliability while allowing innovation.

    * Shifting Focus to “Answerability” in Observability: For Andrew, the goal of observability should evolve toward “answerability,” where systems provide engineers with actionable answers rather than mere data. He envisions a future where automation or AI could handle repetitive tasks, allowing engineers to focus on enhancing user experiences instead of troubleshooting.



    29 m
  • #62 - Early YouTube SRE shares Modern Reliability Strategy
    Nov 5 2024

    Andrew Fong’s take on engineering cuts through the usual role labels, urging teams to start with the problem they’re solving instead of locking into rigid job titles. He sees reliability, inclusivity, and efficiency as the real drivers of good engineering. In his view, SRE is all about keeping systems reliable and healthy, while platform engineering is geared toward speed, developer enablement, and keeping costs in check. It’s a values-first, practical approach to the tough challenges engineers face every day.

    Here’s a slightly deeper dive into the concepts we discussed:

    * Career and Evolution in Tech: Andrew shares his journey from early SRE at YouTube to VP of Infrastructure at Dropbox to Director of Engineering at Databricks, with extensive infrastructure experience spanning three distinct eras of the internet. He emphasizes the transition from early infrastructure roles into specialized SRE functions, noting the rise of SRE as a formalized role and the evolution of responsibilities within it.

    * Building Prodvana and the Future of SRE: As CEO of the startup Prodvana, he’s building an “intelligent delivery system” designed to simplify production management for engineers and address cognitive overload. He sees SRE as a field facing new demands due to AI, drawing on conversations with Niall Murphy and Corey Bertram about AI’s potential in the space, distinguishing it from “web three” hype, and affirming that while AI will transform SRE, it will not eliminate it.

    * Challenges of Migration and Integration: Reflecting on his time at YouTube after the Google acquisition, Andrew discusses the challenges of migrating YouTube’s infrastructure onto Google’s proprietary, non-thread-safe systems. This required extensive adaptation and “glue code,” offering insights into the intricacies and sometimes rigid culture of Google’s engineering approach at the time.

    * SRE’s Shift Toward Reliability as a Core Feature: SRE has moved from system-level automation toward application reliability, with growing recognition that reliability is a user-facing feature. Leadership buy-in and cultural support are essential for organizations to evolve beyond reactive incident response to proactive, reliability-focused practices.

    * Organizational Culture and Leadership Influence: Leadership’s role in SRE success is crucial, with examples from Dropbox and Google showing that strong, supportive leadership shapes reliability-centered cultures. Andrew advises engineers to gauge leadership attitudes toward SRE during job interviews to find environments where reliability is valued over mere incident response.

    * Outcome-Focused Work Over Titles: Assemble the right team based on skills, not titles, to solve technical problems effectively. Titles often distract from outcomes, and fostering a problem-solving culture over role-based thinking accelerates teamwork and results.

    * Engineers as Problem Solvers: Natural engineers generally resist job boundaries and focus on solving problems rather than sticking rigidly to job descriptions, echoing how iconic figures like Steve Jobs valued versatility over predefined roles.

    * Culture as Core Values: Organizational culture should be driven by core values like reliability, efficiency, and inclusivity rather than rigid processes or roles. For instance, Dropbox’s infrastructure culture emphasized being a “force multiplier” to sustain product velocity, ensuring those values were integrated into every decision.

    * Balancing SRE and Platform Priorities: The fundamental difference between SRE and platform engineering is focus: SRE prioritizes reliability, while platform engineering is geared toward increasing velocity or reducing costs. Leaders should be cautious about assigning both roles simultaneously, as each requires distinct focus and expertise.

    * Strategic Trade-Offs in Smaller Orgs: In smaller companies with limited resources, leaders often struggle to balance cost, reliability, and other objectives within single roles. It’s better to sequence these priorities than to burden one individual with conflicting objectives; prioritizing platform stability, for example, can improve reliability in the long term.

    * DevOps as a Philosophy: DevOps is viewed here as an operational philosophy rather than a separate role, enhancing both reliability and platform functions by fostering a collaborative, efficient work culture.

    * Focused Investments for Long-Term Gains: Strategic technology investments, even if they temporarily hinder short-term metrics (like reliability), can drive long-term efficiency and reliability improvements. For instance, Dropbox invested in a shared metadata system to enable active-active disaster recovery, viewing this ...
    36 m
  • #61 Scott Moore on SRE, Performance Engineering, and More
    Oct 22 2024



    38 m
  • #60 How to NOT fail in Platform Engineering
    Oct 1 2024

    Here’s what we covered:

    Defining Platform Engineering

    * Platform engineering: Building compelling internal products to help teams reuse capabilities with less coordination.

    * Cloud computing connection: Enterprises can now compose platforms from cloud services, creating mature, internal products for all engineering personas.

    Ankit’s career journey

    * Didn't choose platform engineering; it found him.

    * Early start in programming (since age 11).

    * Transitioned from a product engineer mindset to building internal tools and platforms.

    * Key experience across startups, the public sector, unicorn companies, and private cloud projects.

    Singapore Public Sector Experience

    * Public sector: Highly advanced digital services (e.g., identity services for tax, housing).

    * Exciting environment: Software development in Singapore’s public sector is fast-paced and digitally progressive.

    Platform Engineering Turf Wars

    * Turf wars: Debate among DevOps, SRE, and platform engineering.

    * DevOps: Collaboration between dev and ops to think systemically.

    * SRE: Operations done the software engineering way.

    * Platform engineering: Delivering operational services as internal, self-service products.

    Dysfunctional Team Interactions

    * Issue: Requiring tickets to get work done creates bottlenecks.

    * Ideal state: Teams should be able to work autonomously without raising tickets.

    * Spectrum of dysfunction: From one ticket for one service to multiple tickets across teams leading to delays and misconfigurations.

    Quadrant Model (Autonomy vs. Cognitive Load)

    * Challenge: Balancing user autonomy with managing cognitive load.

    * Goal: Enable product teams with autonomy while managing cognitive load.

    * Solution: Platforms should abstract unnecessary complexity while still giving teams the autonomy to operate independently.

    How it pans out

    * Low autonomy, low cognitive load: Dependent on platform teams but a simple process.

    * Low autonomy, high cognitive load: Requires interacting with multiple teams and understanding technical details (worst case).

    * High autonomy, high cognitive load: Teams have full access (e.g., AWS accounts) but face infrastructure burden and fragmentation.

    * High autonomy, low cognitive load: Ideal situation—teams get what they need quickly without detailed knowledge.

    Shift from Product Thinking to Cognitive Load

    * Cognitive load focus: More important than just product thinking—consider the human experience when using the system.

    * Team Topologies: Mentioned as a key reference on this concept of cognitive load management.

    Platform as a Product Mindset

    * Collaboration: Building the platform in close collaboration with initial users (pilot teams) is crucial for success.

    * Product Management: Essential to have a product manager or team dedicated to communication, user journeys, and internal marketing.

    Self-Service as a Platform Requirement

    * Definition: Users should easily discover, understand, and use platform capabilities without human intervention.

    * User Testing: Watch how users interact with the platform to understand stumbling points and improve the self-service experience.

    Platform Team Cognitive Load

    * Burnout Prevention: Platform engineers need low cognitive load as well. Moving from a reactive (ticket-based) model to a proactive, self-service approach can reduce the strain.

    * Proactive Approach: Self-service models allow platform teams to prioritize development and avoid being overwhelmed by constant requests.



    31 m