Episodes

  • From Roadmaps to R&D: How AI Is Changing Product Development - with Richard White, Founder of Fathom AI
    Feb 18 2026

    Fathom was built on the assumption that transcription would become commoditized and generative models would steadily improve. Rather than training proprietary models, Richard focused on building the infrastructure around them and waiting for model capabilities to reach the right threshold.

    In this conversation, he explains why AI has made effort and impact harder to predict, and why that shifts product development from roadmap execution toward experimentation. He describes separating an exploratory AI team from core engineering, structuring that team to prototype and write specs, and expecting a meaningful portion of experiments not to work.
    Richard introduces his Jenga model for AI development: testing different models and use cases to find where resistance is lowest. He also discusses the operational realities of rapid model updates, hallucination rates, and what he calls the LLM treadmill.

    The discussion explores qualitative QA, organizational design, buy versus build decisions, and why leadership taste plays an increasingly important role as AI lowers the barrier to generating outputs.

    Key takeaways:

    • Estimating effort and impact is becoming harder
      As model capabilities improve quickly, features that require months today may take far less time in the near future. This makes traditional planning assumptions less stable.
    • Product development increasingly resembles R&D
      With shifting capabilities and uncertain outcomes, teams must experiment, prototype, and iterate rather than rely solely on long-term roadmaps.
    • Organizational structure must reflect experimentation
      Separating exploratory AI work from core engineering can allow faster iteration while maintaining stability elsewhere.
    • Rapid model updates create operational pressure
      Frequent improvements and changing performance levels can require teams to revisit and adjust features more often than in traditional software cycles.
    • Qualitative judgment plays a larger role
      As AI lowers the cost of generating outputs, evaluating quality and deciding what to ship becomes increasingly important.

    Fathom: fathom.ai
    Fathom LinkedIn: linkedin.com/company/fathom-video/
    Richard's LinkedIn: linkedin.com/in/rrwhite/

    00:00 Intro: Why AI Breaks Roadmaps
    00:19 Meet Richard White (Fathom AI)
    02:16 From Roadmaps to R&D
    04:49 Designing AI Teams for Speed
    07:11 The Jenga Model
    09:56 Failing 50% & AI Team Psychology
    13:40 LLMs as Interns & Anti-Planning
    21:01 QA, Data Pain & Developing Taste
    24:59 Executive Taste & Culture Rules
    27:20 Reacting to AI Waves
    28:50 Fathom’s 4-Step Product Plan
    30:47 What New Models Unlock
    32:13 From Scribe to Second Brain
    40:32 Build vs Buy in AI
    45:32 The Debrief

    📜 Read the transcript for this episode: from-roadmaps-to-rd-how-ai-is-changing-product-development-with-richard-white-founder-of-fathom-ai/transcript

    For more prompts, tips, and AI tools, check out our website: https://www.beyondtheprompt.ai/ or follow Jeremy or Henrik on LinkedIn:

    Henrik: https://www.linkedin.com/in/werdelin
    Jeremy: https://www.linkedin.com/in/jeremyutley

    Show edited by Emma Cecilie Jensen.

    57 m
  • Here’s How to Know If You’re Getting the Most Out of AI – with Bryan McCann, CTO of You.com
    Feb 4 2026

    In this episode, Bryan McCann joins Henrik and Jeremy to explore how search is evolving from simple queries into more conversational and agent-driven systems, and why prompting is likely a temporary skill. Bryan shares how his definition of productivity changed as an AI researcher, moving away from doing the work himself and toward designing plans and experiments that machines could run continuously.

    The conversation expands to leadership and organizational design. Bryan explains why helping others learn how to work with AI became his highest-leverage activity, and offers a simple rule of thumb: try to get AI to do the task first, and treat anything it can’t do as an interesting research problem. Henrik and Jeremy connect this to Bryan’s view that organizations may increasingly resemble neural networks, with information flowing more freely and decisions less tied to rigid hierarchies.

    Key Takeaways:

    • Productivity can be measured by machine output, not human effort
      Bryan explains how “keeping the GPUs full” became his primary measure of productivity.
    • Prompting is useful, but likely temporary
      The episode discusses why future systems may rely less on explicit prompts and more on inferred context.
    • Try AI first, then learn from what it can’t do
      Tasks AI struggles with can reveal meaningful research opportunities.
    • Leadership is about scaling others
      Bryan shares how his focus shifted from scaling himself to helping his team increase impact.
    • Organizations may benefit from neural-network-like design
      Better information flow and fewer bottlenecks can improve decision-making.

    You.com: you.com
    Bryan's website: bryanmccann.org
    LinkedIn: linkedin.com/company/youdotcom/

    00:00 Intro: Keeping the GPUs Full
    00:22 Meet Bryan McCann: CTO & co-founder of You.com
    00:43 Why Search Is Breaking - and Why It Becomes a Skill
    01:41 From Search to Agents
    03:18 The Case for Proactive, Context-Aware AI
    04:30 We Don’t Need New Hardware - We Need Trust
    05:43 The Trust Problem of Always-On Listening
    07:57 Trust as the Real Bottleneck (Not AI Capability)
    09:52 Delivering Immediate Value to Earn Trust
    12:13 Business Models and Escaping the Attention Economy
    17:27 What “Agents” Really Mean - and Why the Term Will Fade
    20:37 Productivity, Parkinson’s Law, and Keeping the Machines Running
    23:52 Scaling Yourself vs. Scaling Your Team
    29:57 Building Culture: Automate, Throw Away, Rebuild
    35:46 Designing Organizations Like Neural Networks
    45:02 Recruiting for Initiative in an AI-Native Organization
    49:18 The debrief

    📜 Read the transcript for this episode: podcast.beyondtheprompt.ai/heres-how-to-know-if-youre-getting-the-most-out-of-ai-with-bryan-mccann-cto-of-youcom/transcript

    For more prompts, tips, and AI tools, check out our website: https://www.beyondtheprompt.ai/ or follow Jeremy or Henrik on LinkedIn:

    Henrik: https://www.linkedin.com/in/werdelin
    Jeremy: https://www.linkedin.com/in/jeremyutley

    Show edited by Emma Cecilie Jensen.

    1 h
  • Building An Enterprise AI Innovation Lab: A Master Class with Humza Teherany, Chief Strategy Officer of Maple Leaf Sports and Entertainment
    Jan 21 2026

    In this episode, Humza Teherany breaks down how he bridges deep technical fluency with strategic leadership at MLSE, home to the Raptors, Maple Leafs, and more. He shares how a vacation turned into an AI reawakening and how that hands-on immersion led to a fundamental shift in how his organization builds and experiments.

    Humza walks through MLSE's "build in a day" practice, their internal AI platform, and why speed to prototype now unlocks more than efficiency: it changes who gets to shape the future. He, Jeremy, and Henrik explore the limits of traditional enterprise AI rollouts and how to build spaces for superusers that enable company-wide transformation. The conversation covers how technical literacy impacts credibility, why idea execution is the new differentiator, and how Humza's five-year-old inspired a bedtime story app powered by AI.

    Whether you're a CTO, a founder, or just figuring out where to start, Humza makes a compelling case. The best leaders don’t delegate this moment. They build.

    Key Takeaways

    • Leaders should not delegate the AI moment
      Humza, Henrik, and Jeremy agree that this is a moment for leaders to be hands-on. The ones who build and explore the tools themselves are the ones unlocking real impact.
    • Technical fluency builds credibility and better decisions
      Humza’s return to his technical roots has changed how he leads. Understanding how AI works helps leaders earn trust and make smarter, faster choices.
    • Speed enables inclusion
      MLSE's "build in a day" model allows more people to contribute ideas and see them turned into real prototypes. Moving fast isn't just efficient; it changes who gets to participate.
    • Empower your superusers first
      Rather than starting with enterprise-wide training, Humza focuses on enabling the small group already eager to build. That early energy helps drive broader culture change.

    MLSE: mlse.com
    LinkedIn: Humza Teherany

    00:00 Intro: Humza Teherany and MLSE
    00:27 The Role of C-Suite Leaders in AI
    01:08 Reconnecting with Technical Skills
    02:08 Diving Deep into AI Tools
    03:03 The Importance of Hands-On Learning
    04:25 Progression from Consumer to Technical AI Tools
    07:28 Building a Business Case for AI
    10:03 Creating a Culture of Innovation
    14:00 Implementing AI in Business Operations
    21:05 Challenges and Strategies in AI Adoption
    26:17 Organizational Structure for AI Success
    32:02 The Importance of Reviewing and Planning Code
    33:01 The Future of Solo Developers and New Technologists
    34:58 Reimagining Company Structures with AI
    38:55 Key Skills for Future Technology Leaders
    41:19 Personal AI Experiments and Innovations
    46:52 Encouraging Creativity in Children with AI
    49:11 The Debrief

    📜 Read the transcript for this episode: building-an-enterprise-ai-innovation-lab-a-master-class-with-humza-teherany-chief-strategy-officer-of-maple-leaf-sports-and-entertainment/transcript

    For more prompts, tips, and AI tools, check out our website: https://www.beyondtheprompt.ai/ or follow Jeremy or Henrik on LinkedIn:

    Henrik: https://www.linkedin.com/in/werdelin
    Jeremy: https://www.linkedin.com/in/jeremyutley

    Show edited by Emma Cecilie Jensen.

    58 m
  • Why AI Gets People Wrong: The Real Source of Insight with Anthropologist Mikkel B. Rasmussen
    Jan 6 2026

    Mikkel B. Rasmussen brings a rare lens to the AI conversation. As an applied anthropologist, he has spent decades helping companies like LEGO uncover what is really going on beneath the surface.

    In this episode, he shares how deep insight often begins with being wrong, why surprise is the clearest sign you have found something meaningful, and how the pain of not knowing is essential to breakthrough thinking. He also explains how AI is transforming his own research, from pattern recognition to video ethnography, and introduces a provocative idea: Anthropology Without Anthropologists.

    Jeremy and Henrik reflect on what it means to teach AI how to surprise us, how synthetic data might reshape experimentation, and why better insights begin with better questions.

    Key Takeaways

    • Insight starts with being wrong
      Mikkel defines insight as the gap between how we think the world works and how it actually is. Anthropology helps uncover these mismatches, and that is where real breakthroughs begin.
    • Pain is part of the process
      Mikkel and Jeremy both reflect on the emotional struggle that precedes insight. The doubt, sleepless nights, and questioning whether the work will ever come together is not failure. It is a necessary stage of discovery.
    • Surprise is a signal
      The moment of surprise, when a new pattern emerges or an assumption is shattered, is at the core of applied anthropology. For Mikkel, it is the clearest sign that you have found something real.
    • AI can accelerate experimentation
      Mikkel shares how AI is already helping his team analyze patterns, run faster experiments, and even conduct interviews that outperform humans in some cases. The goal is not to replace people but to push the limits of what is possible.

    HARL: humanactivitylab.com

    00:00 Intro: Why This Conversation Matters
    00:25 Meet Mikkel: Founder of Human Activity Laboratory
    01:14 Understanding Anthropology and AI
    03:32 Applied Anthropology: Tools and Techniques
    04:56 The Role of Narratives in AI
    07:06 The Importance of Sensory and Social Dimensions
    13:06 Case Study: LEGO and the Anthropology of Play
    21:07 The Role of Surprise in Anthropology
    27:51 AI and Human Synergy
    31:26 Exploring AI's Limitations and Potential
    32:46 Anthropology Without Anthropologists
    34:17 AI's Role in Generating Insights
    37:23 Human Bias in AI-Generated Ideas
    42:05 Synthetic Data and Its Applications
    47:34 The Future of AI in Anthropology
    49:25 The Debrief

    📜 Read the transcript for this episode: why-ai-gets-people-wrong-the-real-source-of-insight-with-anthropologist-mikkel-b-rasmussen/transcript

    For more prompts, tips, and AI tools, check out our website: https://www.beyondtheprompt.ai/ or follow Jeremy or Henrik on LinkedIn:

    Henrik: https://www.linkedin.com/in/werdelin
    Jeremy: https://www.linkedin.com/in/jeremyutley

    Show edited by Emma Cecilie Jensen.

    56 m
  • How the World’s Leading AI-First Fashion House Flips the Cash Flow Equation - with Diarra Bousso
    Dec 24 2025

    Diarra Bousso returns to Beyond the Prompt to share how she's reprogramming the fashion industry using AI, math, and a relentless spirit of experimentation. From selling AI-generated products before they exist to cutting out waste and wait times, she walks us through a radical new approach to design and operations.

    She explains how her team uses scientific rigor to test marketing ideas, create on-demand collections, and rethink the traditional fashion calendar. Diarra also opens up about the origin of her experimental mindset, which began during a year of recovery after a life-changing accident, and how that philosophy now shapes her leadership.

    The episode wraps with reflections on sustainability, mental health, and what it means to build a joyful, human-first company in the age of AI. Diarra shares how she’s using AI not just to scale her business, but to reclaim her time, and why her next venture might bring these tools to creators everywhere.

    Key Takeaways

    • Experimentation is the foundation
      Diarra treats her entire business as a lab. Every idea is a test, and her team is trained to think in hypotheses, measure results, and adapt quickly.
    • AI enhances human creativity
      She sees AI as a creative partner, not a replacement. It helps her move faster, make smarter decisions, and focus on the parts of design that require real taste and vision.
    • Sell before you build
      By testing AI-generated designs with customers before making anything, Diarra unlocks cash flow, cuts waste, and sidesteps the long timelines of traditional fashion.
    • Sustainability starts with the founder
      Diarra applies the same mindset to her own life. She’s using AI to reclaim time, reduce burnout, and build a business that supports health as well as growth.

    Website: diarrabousso.com
    DIARRABLU: diarrablu.com

    00:00 Intro: AI-Driven Fashion
    00:13 Meet Diarra Bousso: Founder of DIARRABLU
    01:43 The Power of Experimentation
    02:00 A Life-Changing Accident and Recovery
    04:40 Embracing a Culture of Experimentation
    06:13 Scientific Approach to Business
    09:48 Empowering the Team
    15:03 AI in Fashion Design
    18:36 Revolutionizing the Fashion Industry
    28:09 Traditional vs. Digital Fashion Models
    32:18 Embracing AI in Fashion Design
    32:49 Collaborating with Retailers Using AI
    35:06 AI's Role in Prototyping and Design
    36:58 The Future of AI in Creative Industries
    39:14 Navigating Resistance to AI
    48:10 Operationalizing AI for Efficiency
    52:18 Balancing Innovation and Personal Well-being
    57:19 Debrief

    📜 Read the transcript for this episode: Transcript of How the World's Leading AI-First Fashion House Flips the Cash Flow Equation with Diarra Bousso

    For more prompts, tips, and AI tools, check out our website: https://www.beyondtheprompt.ai/ or follow Jeremy or Henrik on LinkedIn:

    Henrik: https://www.linkedin.com/in/werdelin
    Jeremy: https://www.linkedin.com/in/jeremyutley

    Show edited by Emma Cecilie Jensen.

    1 h y 9 m
  • The Future of AI with Illia Polosukhin: The Man Who Put the T in GPT
    Dec 9 2025

    In this episode, Illia Polosukhin joins Henrik and Jeremy to trace the origins of transformers and how practical constraints inside Google led to a breakthrough that reshaped modern AI. He explains why recurrent models were hitting limits, how parallel attention opened the door to scale, and why he believed a major jump in capability was imminent long before the rest of the world saw it.

    The conversation then turns to the risks and responsibilities of today’s AI systems. Illia describes how models can be subtly guided to influence user opinions, why open weights are not the same as truly open models, and how hidden behaviors can be embedded during training. He explains why provenance and verifiable data pipelines matter, especially as AI begins mediating more of the information we rely on.

    Later in the episode, Illia outlines how blockchain can support trust, identity, and coordination in a future where AI agents act on our behalf. He shares why information is becoming more valuable than money, how ownership of personal AI models will shape user agency, and why domain expertise becomes significantly more powerful when paired with modern generative tools.

    Key Takeaways:

    • Transformers emerged from practical constraints, not theory
      Illia explains that the shift from recurrent networks to attention was driven by speed and parallelization needs at Google, not a desire to invent a new paradigm.
    • AI's step change was foreseeable to early builders
      Illia expected a ChatGPT-level breakthrough several years before it arrived, based on clear research signals and accelerating model performance.
    • Provenance and trust will define the next phase of AI
      As AI systems can be subtly manipulated, Illia argues that verifiable data pipelines and transparent training processes are essential to prevent large-scale misinformation.
    • Ownership and identity matter in an agent-driven world
      Illia believes individuals will soon rely on AI agents that act autonomously, making it critical that users own their models and that interactions between agents are secured and verified.

    NEAR AI: https://near.ai (NEAR AI Cloud and Private Chat products are now live)
    Illia's X: x.com/ilblackdragon
    Illia's Substack: ilblackdragon.substack.com
    NEAR X: x.com/nearprotocol

    00:00 Intro: AI and Information Control
    00:29 Meet Illia Polosukhin: Co-Author of 'Attention is All You Need'
    01:03 The Evolution and Impact of AI
    13:24 The Birth of Near AI and Blockchain Integration
    15:16 Challenges and Innovations in Blockchain and AI
    22:17 Privacy and Security in AI Applications
    26:58 Exploring Sleeper Agents in AI
    29:19 Practical AI Implementation in Teams
    30:06 AI's Role in Product Development
    31:41 Challenges and Future of AI in Development
    36:35 AI and Economic Alignment
    41:46 The Future of AI Agents
    44:14 Debrief

    📜 Read the transcript for this episode: Transcript of The Future of AI with Illia Polosukhin: The Man Who Put the T in GPT

    For more prompts, tips, and AI tools, check out our website: https://www.beyondtheprompt.ai/ or follow Jeremy or Henrik on LinkedIn:

    Henrik: https://www.linkedin.com/in/werdelin
    Jeremy: https://www.linkedin.com/in/jeremyutley

    Show edited by Emma Cecilie Jensen.

    55 m
  • AI’s Next Frontier: World Models Explained by Christian Keller
    Nov 27 2025

    In this episode, Christian Keller joins Henrik and Jeremy to explain how world models are shaping the next stage of generative AI. He talks through how AI learns using different types of inputs, and why video adds a sense of continuity, change, and cause and effect that text alone does not provide. Christian shares vivid analogies and clear examples to show what multimodal models make possible.

    The conversation moves into how AI is now used throughout the research process, from generating synthetic data to evaluating model outputs. Christian shares how this loop is already in motion and how AI is helping scale and accelerate experimentation. He also reflects on the shift after ChatGPT launched, and how that changed the pace and structure of research work.

    Later in the episode, Christian describes how individual workflows are evolving, and how asking simple questions like “Could AI help with this?” often opens new possibilities. He shares examples from his own work and home life, including how his wife built and graded her own French exercises using generative tools.

    Key Takeaways:

    • Text removes essential information
      Christian explains that text compresses reality and loses detail, context, and temporality. Images and video help restore what text leaves out.
    • World models give AI a sense of change
      Video introduces the before and after and how things move or enter a scene. This helps models learn cause and effect and builds more robust understanding.
    • AI helps build AI
      Models can generate data, evaluate results, and support researchers during development. Christian shows how this creates new ways of scaling experimentation and training.
    • Workflows shift when AI handles early steps
      Christian shows how tasks like debugging and prototyping change with generative tools, which reshapes roles and opens new opportunities for innovation.

    LinkedIn: Christian Keller

    00:00 Intro: Information Compression
    00:37 Meet Christian Keller: AI Expert
    01:13 The Evolution of AI Products
    02:11 Impact of ChatGPT on AI Development
    02:38 Understanding PyTorch and Its Role
    07:41 The Bitter Lesson in AI
    09:12 Challenges and Future of AI Models
    18:57 Using AI to Build AI
    23:25 Innovative Chat Interfaces
    23:41 Building the Autos Platform
    24:35 Epiphanies in AI Integration
    25:18 AI in Entrepreneurial Workflows
    26:32 Challenges in AI Integration
    31:15 Bias in AI Models
    38:06 Debrief

    📜 Read the transcript for this episode: Transcript of AI's Next Frontier: World Models Explained by Christian Keller

    For more prompts, tips, and AI tools, check out our website: https://www.beyondtheprompt.ai/ or follow Jeremy or Henrik on LinkedIn:

    Henrik: https://www.linkedin.com/in/werdelin
    Jeremy: https://www.linkedin.com/in/jeremyutley

    Show edited by Emma Cecilie Jensen.

    45 m
  • How Science Suggests You Change Your Organization - with Prosci’s Tim Creasey and Paul Gonzalez
    Nov 11 2025

    Generative AI is moving fast, but most organizations aren’t. Tim Creasey and Paul Gonzalez have spent their careers studying why. As leaders at Prosci, they’ve worked with thousands of teams navigating complex change, and in this episode they share what their research says about the human side of transformation.

    They discuss why traditional tactics like comms and training break down in the face of rapid AI adoption, and how successful organizations create the conditions for people to actually change. From hands-on leadership and peer-driven learning to the power of experimentation and the ADKAR model, this conversation is packed with practical tools and hard-earned insights.

    Tim and Paul also explore how AI is reshaping organizational structures, what “exposure hours” reveal about executive readiness, and why culture beats mandates every time. Whether you’re leading change or stuck inside it, this episode offers a grounded look at what actually works when everything is in motion.

    Key takeaways:

    • Bold vision is not enough - it must be balanced with near-term clarity
      The most effective AI leaders communicate both where the organization is going and what teams are doing right now to get there. Prosci's research shows that near-term clarity matters just as much as long-term ambition.
    • Leaders need to use the tools themselves
      Tim and Paul introduce the idea of “exposure hours” as a leading indicator of readiness. The more time executives spend actively experimenting with AI, the better positioned they are to lead transformation.
    • Experimentation requires structure and safety
      Organizations can’t just tell people to try new things. They need to carve out time, reduce the stakes, and make experimentation a shared and visible part of how work gets done.
    • Real change still happens one person at a time
      Despite all the new tech, the fundamentals haven’t changed. Individuals need awareness, desire, knowledge, ability, and reinforcement to adopt new behaviors. Prosci’s ADKAR model remains essential for making change stick.

    LinkedIn: Prosci
    Website: prosci.com

    00:00 Introduction to Change Management and AI Adoption
    00:25 Meet the Experts: Tim Creasey and Paul Gonzalez
    01:51 The Challenges of Change Management
    04:07 Generative AI Transformation: Unique Challenges
    07:44 Key Ingredients for Successful AI Adoption
    15:18 Building a Culture of Experimentation
    20:43 The Role of Leadership in AI Transformation
    25:54 Future Organizational Designs with AI
    27:02 Disruptive Organizational Changes
    28:00 Examples of Innovative Enterprises
    28:15 Military Analogies in Business
    29:30 Challenges in Organizational Change
    30:36 Timeless Principles of Change Management
    31:36 The Role of Leadership in Change
    33:13 ADKAR Model for Change
    35:51 Addressing Resistance to Change
    40:05 Effective Communication Strategies
    47:48 Concluding Thoughts and Reflections

    📜 Read the transcript for this episode: Transcript of How Science Suggests You Change Your Organization - with Prosci's Tim Creasey and Paul Gonzalez

    For more prompts, tips, and AI tools, check out our website: https://www.beyondtheprompt.ai/ or follow Jeremy or Henrik on LinkedIn:

    Henrik: https://www.linkedin.com/in/werdelin
    Jeremy: https://www.linkedin.com/in/jeremyutley

    Show edited by Emma Cecilie Jensen.

    54 m