Episodes

  • The Plausibility Problem: What AI Hallucinations Mean for Healthcare
    Mar 21 2026

    Generative AI is now embedded in healthcare documentation, communication, and coding. As adoption accelerates, hallucination risk shifts from novelty to operational exposure. This piece explores the “confidence gap” behind AI hallucinations and reframes them not as isolated glitches, but as signals of integration maturity. Building on prior discussions around pilot purgatory and orchestration sovereignty, we examine how designing for expected imperfection allows organizations to engineer trust at scale.


    20 m
  • Moving Fast Without Breaking Trust: Balancing Speed and Safety in Healthcare x AI
    Jan 26 2026

    Healthcare is being asked to move faster with AI while maintaining trust, safety, and reliability. This piece explores why speed and safety are not opposing forces in healthcare AI, and how thoughtful system design allows organizations to innovate quickly without breaking confidence or care. In previous pieces we've discussed "pilot purgatory" and "orchestration sovereignty," among others. Today we examine the balancing act between speed and safety and introduce the term "deliberate speed."

    15 m
  • Understanding How AI Transforms Healthcare: From Machine Learning to Generative Models
    Jan 26 2026

    Over the past year, I’ve noticed that many of the challenges organizations encounter with AI are not technical in nature, but conceptual. As we enter a new year, that pattern is becoming harder to ignore. We often talk about “AI” as a single capability, when in practice it represents a set of layered technologies with very different strengths, risks, and operational implications.

    15 m
  • Getting Started With AI: Lessons From the Computer Era
    Dec 4 2025

    Healthcare is entering a moment that feels new, but isn’t unfamiliar. We’ve lived through this kind of transition before. When computers first arrived in hospitals, they were confusing, unstructured, and intimidating, yet people learned them one small step at a time. The same pattern is emerging with AI today. Across clinical teams, operations, payers, and life sciences, the professionals who take simple, practical steps with AI are gaining clarity, reducing cognitive load, and building confidence. Those waiting for perfect instructions, perfect governance, or perfect readiness are finding themselves stuck at the starting line.

    12 m
  • AI Education in Healthcare
    Nov 20 2025

    The next competitive advantage in healthcare will not come from acquiring more AI systems. It will come from developing a workforce that understands how to use those systems with clarity, safety, and confidence.

    11 m
  • Building Trust, Teamwork, and Courage in the Age of Healthcare x AI
    Nov 5 2025

    As artificial intelligence transforms healthcare, the real challenge isn’t technology; it’s trust. This article explores how healthcare leaders can move from fear to shared confidence by leading with empathy, teamwork, and curiosity. A recurring theme in listener feedback on this podcast is the uncertainty surrounding AI. The future of AI in healthcare will belong to those willing to learn, listen, and lead together.


    13 m
  • Why Healthcare’s AI Winners Won’t Be the Best Predictors, They’ll Be the Best Orchestrators
    Oct 24 2025

    Artificial intelligence is rapidly shaping the next phase of healthcare transformation. Yet across hospitals and health systems, the results remain uneven. Predictive models routinely perform well in pilots but fail to deliver sustained clinical or operational impact. The difference between promise and performance no longer lies in algorithm design; it lies in how organizations act on what those algorithms predict.

    20 m
  • The Confidence Trap
    Oct 24 2025

    Why healthcare organizations must treat verification as a core operational discipline, not a procedural checkbox. Through real-world case studies, we show how AI creates invisible failure modes, why LLMs invert the traditional learning curve, and what executives must do to ensure that adoption delivers measurable value without exposing the enterprise to hidden liabilities.

    The opportunity is clear: those who combine AI speed with domain rigor will thrive. Those who confuse plausibility for reliability will not.

    15 m