Deployed: The AI Product Podcast

By: Freeplay

Deployed is the podcast for people building AI products.


With all the hype about AI over the past two years, it’s often been hard to discern what’s actually working. We started Deployed to share the real-world stories of the leaders, engineers, product & design teams, and data teams who are building and running great generative AI products for their customers.


In each episode we’ll dig into the journey to create these products, the impact they’re making for customers and the bottom line, and what it takes to make generative AI products successful. Our hope is to add a bit of signal in all the noise, and help you stay ahead of the curve when it comes to strategies and tactics that actually work in production.


We’d love to hear from you; please reach out to us at team@freeplay.ai.


You can also learn more about what we’re building at Freeplay here: freeplay.ai

© 2026 Deployed: The AI Product Podcast by Freeplay
Episodes
  • Building a "Luxury" Software Product in the AI Era: Loïc Houssier, CTO at Superhuman Mail
    Mar 12 2026

    Superhuman charges $30/month for email when Gmail is free. That's always forced them to maintain a different quality bar from most products, and it shapes everything about how they build AI features too.

    Loïc Houssier is CTO at Superhuman Mail, and one of the most fun and energized engineering leaders I've gotten to work with. In this conversation, he walks us through what quality really means when you're building a "luxury" software product - and how that mindset applies to AI.

    We dig into the high-dimensional challenge of building great AI experiences around email, from auto-drafts to semantic search. We talk about how they approach evals when every user's inbox looks completely different, starting from the hardest queries they can find internally. And we get into how Superhuman is adopting coding agents across their engineering team - including their "quality week" practice and why they removed all procurement blockers for AI tools.

    Timestamps:

    0:00 - Intro: What "luxury" means for a software product

    2:12 - Loïc's background and why he joined Superhuman

    6:02 - Game design principles in product development (not gamification)

    10:14 - AI features at Superhuman: triage, search, and auto-drafts

    15:34 - The challenge of building AI features in a high-dimensional space

    20:00 - Building evals from the hardest internal queries (the "wood for my coffee table" example)

    24:20 - Privacy and how they handle eval data

    25:45 - Their mix of models: BERT, fine-tuned, open source, and frontier

    28:19 - What quality means when your competition is free

    30:10 - Quality week: dedicating the first week of every quarter to bugs and AI workflow improvements

    32:44 - How they're adopting coding agents internally

    35:30 - Removing all the blockers for AI tools (e.g. 24-hour security approval, unlimited budgets)

    38:06 - How Loïc ramped up on AI as a leader

    40:15 - Parting advice: choose your vendors wisely, and enjoy this moment as a builder

    Links:

    • Superhuman: https://superhuman.com
    • Find Loïc on LinkedIn: https://www.linkedin.com/in/loichoussier/
    44 m
  • Getting Things Done with Zoom Agents: Lijuan Qin, Head of Product for Zoom AI
    Feb 23 2026

    On this episode of Deployed we talk with Lijuan Qin, Head of Product for Zoom AI, about how her team is moving beyond AI meeting transcriptions and note-taking to a mission of agents helping from "conversation to completion." That's how Lijuan describes her vision of the future for Zoom AI, where AI doesn't just summarize your meetings; it actually follows through on the work that comes after.

    Lijuan has a PhD in AI and spent 20 years at Microsoft working on NLP and video understanding before joining Zoom. She brings a long-arc perspective on what's changed and what hasn't in AI, and shares how her team thinks about building an AI companion that acts more like a team member than a search engine.

    Key insights for builders include:

    * Why high engagement with an AI product can be a negative signal: if users keep going back and forth with your AI, the product might be failing them ("you got it wrong! try again")
    * How Zoom measures AI quality by *output* value and task completion, instead of usage metrics or individual response accuracy
    * Their "AI-first, intent-driven" approach: starting from what the user needs to get done, not which tool to use
    * How they personalize AI features in stages: role-based outputs first, then memory, then live conversation context, rather than trying to build something like a full "digital twin" on day one
    * A concrete example: drafting a different kickoff document for each meeting attendee, personalized for the priorities of their role (PM vs. engineer vs. CTO)
    * Why transparent decision frameworks let big organizations experiment fast without approval loops
    * How the Zoom AI team balances speed and enterprise trust

    Links from our conversation:

    * Zoom AI Companion: https://ai.zoom.us
    * Find Lijuan on LinkedIn: https://www.linkedin.com/in/lijuanqin/
    * Freeplay (that's us): https://freeplay.ai

    38 m
  • What It Takes to Run Agents on Billions of Messages: Kevin Stanton, Sprout Social
    Feb 6 2026

    Kevin Stanton has spent 13 years at Sprout Social, most recently running infrastructure for a platform that processes billions of social posts. When generative AI emerged, their team saw an opportunity to solve one of their hardest problems: helping customers make sense of massive amounts of unstructured social data.

    Now Kevin is building Trellis, Sprout's AI agent for social listening and competitive intelligence. In this conversation, he shares what it's looked like to shift an engineering team toward building agents — and the practical lessons they've learned shipping to thousands of customers.

    We cover details like why MCP felt more natural than RAG for their architecture, how they use chat as a strategy for seeding eval datasets, when to let agents reason versus when to collapse tools and write deterministic code, and why they pulled evals out of CI/CD after learning the hard way how non-deterministic tests can break things.

    Links from our conversation:

    • Sprout Social: https://sproutsocial.com
    • Sprout Social Insights Blog: https://sproutsocial.com/insights
    • Trellis: https://sproutsocial.com/insights/press/sprout-social-unveils-trellis-its-ai-agent-that-turns-social-data-into-instant-enterprise-intelligence/
    • Find Kevin on LinkedIn: https://www.linkedin.com/in/kevinstanton/
    48 m