Episodes

  • Building a successful infra product between all the AI apps and model providers (chat with Louis from OpenRouter)
    Mar 9 2026

    Tim (Essence VC) and Ian (Keycard) interviewed Louis Vichy, co-founder of OpenRouter, about why he built OpenRouter to de-risk AI app development (end users pay LLM costs), how it scaled to processing ~5–6T tokens/week, and what OpenRouter is today: a reliable inference routing/control layer across ~60 providers with consolidated billing and reduced vendor lock-in. Louis explains why teams adopt OpenRouter (constant new model integrations, pricing/billing, differing API shapes), how routing focuses on practical heuristics (fallbacks, cost, throughput, latency), and how reliability is achieved via provider failover (e.g., alternate endpoints like Vertex/Bedrock). They discuss agent trends (longer-running agents, small models for routing/classification with specialized downstream models), possible memory support, developer conveniences (e.g., PDF parsing), and enterprise features (security/compliance guardrails, presets). The episode ends with links to OpenRouter chat/rankings pages and hiring for high-agency TypeScript-focused engineers.

    00:00 Welcome & Meet Louis (OpenRouter Co-Founder)
    00:27 Origin Story: De-Risking AI App Costs (Hackathon Lessons)
    01:35 First Big Feature: End-User Pays for Tokens (Sign in with OpenRouter)
    02:34 From Routing to Rankings: Scaling to Trillions of Tokens
    03:42 What OpenRouter Is Today: Reliable Inference Across 60+ Providers
    05:55 Why Teams Adopt It: Avoiding Model API Churn, Billing, and Vendor Lock-In
    08:37 Winning Strategy: Don't Build a "Magic Router" - Optimize Cost/Latency/Throughput
    18:58 From Chat to RAG + Memory: Building Persistent Agent Context
    20:37 Developer Bells & Whistles: Auto PDF Parsing and More
    21:11 Enterprise Readiness: Compliance, Security Guardrails & Model Presets
    22:22 Customer Growth at Warp Speed in the AI Era
    23:03 Spicy Future!

    34 m
  • From 30 Seconds to 20ms: Solving Browser Speed for AI Agents (Chat with Catherine from Kernel)
    Feb 23 2026

    In this episode of The Infra Pod, hosts Tim Chen (Essence VC) and Ian Livingstone (Keycard) sat down with Catherine Jue, co-founder and CEO of Kernel, to explore the cutting-edge world of browser infrastructure for AI agents.
    Catherine shares her journey from Cash App to founding Kernel, explaining how she discovered the critical need for scalable browser automation when AI agents need to interact with the web. The conversation dives deep into the technical innovations behind Kernel's use of unikernels and micro VMs, which enable blazingly fast browser startup times (20ms vs 30+ seconds) and unique snapshot/restore capabilities.

    Catherine discusses the evolution from deterministic browser automation to truly agentic behavior, the challenges of optimizing for variable web workloads, and her optimistic vision for an AI-powered future where the pie expands rather than consolidates. This episode is packed with technical insights about infrastructure, agent tooling, and the future of how software interfaces will evolve in an agent-native world.
    0:24 Catherine's startup journey and founding Kernel
    1:30 Cash App's OpenAI experiment sparks the idea
    3:56 Why browser infrastructure for AI agents?
    6:36 Unikernels: 20ms startup vs 30+ seconds
    15:02 Optimizing for variable web workloads
    23:25 Future of agent-native software
    32:05 Hot takes!

    41 m
  • Coding agents need infra to apply code changes! (Chat with Tejas from Morph)
    Feb 9 2026

    Tim (Essence VC) and Ian (Keycard) sat down with Tejas Bhakta (CEO of Morph) to chat about building infrastructure for the fastest file-edit APIs for coding agents. He shares how Morph delivers 10,000 tokens/second through speculative decoding, why Cursor removed fast apply, and his vision for autonomous software that updates without prompts. The conversation covers subagent architecture, code search optimization, and the path to reliable AI coding at scale.

    Timestamps:

    0:00 - Introduction
    0:29 - Why start Morph and pivoting through YC
    1:23 - The fast apply insight from Cursor
    3:42 - How fast apply works and speculative decoding
    6:09 - Use cases: when and where fast apply matters
    8:19 - Why Cursor removed fast apply
    9:22 - Morph's value prop beyond speed
    11:58 - Subagent architecture and SDK approach
    14:45 - Semantic search and code-specific tooling
    19:52 - Building custom coding agents vs platforms
    22:42 - Adoption inhibitors and the future of codegen
    23:26 - Spicy take: Autonomous software and reliability

    30 m
  • Let's chat about vibe coding & Ralph! (Chat with Dexter at Humanlayer)
    Jan 26 2026

    In this episode of The Infra Pod, hosts Tim and Ian sit down with Dexter Horthy, CEO of HumanLayer, to explore the evolution of AI coding agents and the future of software development. Dexter shares his journey from building data tools to discovering the real problem: making AI coding agents actually productive for senior engineers, not just juniors.

    The conversation dives deep into the research-plan-implement workflow that enables engineers to ship 99% of their code with AI assistance, the challenges of getting staff engineers to adopt AI tools, and why most AI coding ecosystems don't actually help you sell to enterprises. Dexter also shares his spicy take on how Ralph-style agents can be even further enhanced.

    Whether you're a skeptical senior engineer or an AI-curious developer, this episode offers practical insights into what actually works in production AI coding today.
    [0:00] Introduction & Dexter's Journey
    Why Dexter finally started a company, the failed data catalog pivot, and building an AI janitor for data warehouses

    [8:00] The Hard Lessons of AI Ecosystem Hype
    Why there's no "SAML for AI agents" and what enterprises actually need versus what the hype machine promises

    [13:00] The Research-Plan-Implement Breakthrough
    How to make senior engineers productive with AI, staying objective during research, and making decisions at the top of the context window

    [26:00] The Vibe Shift & Where We Are Today
    When respected engineers started believing, the role of Ralph and spec-driven development, and what's working in production

    [37:00] Spicy Take: Ralph Goes to the Supreme
    43 m
  • Building a bug-free vibe coding world (Chat with Akshay from Antithesis)
    Jan 12 2026

    In this episode of The Infra Pod, hosts Ian Livingstone (Keycard) and Tim Chen (Essence VC) interviewed Akshay Shah, Field CTO of Antithesis, diving deep into the world of distributed systems, reliability, and the future of software testing. The conversation covers the challenges of building bug-free distributed systems, the story behind Antithesis, lessons from major outages, and the evolving landscape of infrastructure and AI-driven operations.
    Timeline with Timestamps:

    • 00:00 – Introduction & guest background
    • 02:00 – What Antithesis does and why it matters
    • 06:00 – Real-world impact: Testing distributed systems (etcd, Kubernetes)
    • 09:00 – Major outages & lessons learned (AWS, Knight Capital)
    • 12:00 – The origins and philosophy behind Antithesis
    • 16:00 – The future of reliability, testing, and AI in infrastructure
    • 28:00 – Closing thoughts & where to learn more
    Links:

    • Learn more about Antithesis: https://antithesis.com
    • Antithesis on YouTube: @AntithesisHQ
    47 m
  • Infra Pod 2025: Our Favorite Moments, Hottest Takes, and What’s Next
    Dec 29 2025

    Join Tim from Essence VC and Ian Livingstone from Keycard for the year-end 2025 recap of Infra Pod! In this special episode, Tim and Ian reflect on their favorite moments, hottest takes, and biggest lessons from a year of rapid change in infrastructure, AI, and agent technology.

    They revisit standout episodes—like deep dives into browser automation, the evolving role of memory in LLMs, and the disruptive potential of agent sandboxes. The hosts discuss how companies are pivoting in the AI era, the importance of adapting quickly, and the surprising ways hardware choices are shaping the future of compute.

    Looking ahead, Tim and Ian share bold predictions for 2026, debate the next big abstractions in infrastructure, and invite listeners to share their own hot takes and favorite episodes. Whether you’re an engineer, founder, or just passionate about the future of tech, this episode is packed with insights, energy, and a look at what’s next for the Infra Pod community.
    23 m
  • From Spark to Eventual: Reinventing Data for the AI Era (Chat with Sammy from Eventual)
    Dec 15 2025

    In this episode of The Infra Pod, hosts Tim (Essence VC) and Ian Livingstone (Keycard) interviewed Sammy Sidhu, CEO of Eventual, a multimodal data processing platform. Sammy shares his journey from AI research and self-driving cars to founding Eventual, discusses the challenges of processing unstructured and multimodal data, and explores the future of data engineering, scalability, and the role of agents in modern data pipelines.
    Timestamps:

    • 02:47 — Data processing challenges & founding Eventual
    • 09:40 — Real-world use cases & business impact
    • 24:20 — The future of data engineering & tools
    • 40:00 — Closing thoughts & where to learn more
    41 m
  • Render is defining what taste means in backend infra (Chat with Anurag from Render)
    Dec 1 2025

    In this episode of The Infra Pod, hosts Tim and Ian are joined by Anurag, CEO of Render, to discuss the journey of building a modern cloud platform from scratch. The conversation covers Anurag’s background at Stripe, the challenges of cloud infrastructure, the evolution of developer tools, the importance of abstraction and taste in product design, and the future of agent-driven development. The episode is packed with insights on scaling platforms, developer experience, and the shifting landscape of cloud computing.

    42 m