Episodes

  • From AI Strategy to Execution: Trust, Leadership, and the Operational Reality of Healthcare AI | Brian Sutherland
    Feb 25 2026

    AI ambition isn’t the problem in healthcare. Execution is.

    In this episode of The Signal Room, Chris Hutchins sits down with Brian Sutherland, Lead AI Product Manager and advisor specializing in customer-facing AI for high-consequence healthcare environments.

    Brian built Humana’s first member-facing Intelligent Virtual Assistant — generating $7M+ in annual savings while improving patient experience and task completion. In this conversation, we move beyond AI hype and examine what actually breaks between executive strategy and operational reality.

    We explore:

    • Why AI pilots succeed but enterprise adoption stalls
    • Trust as infrastructure — not philosophy
    • The leadership shift required as AI embeds into clinical workflows
    • Where hype is outrunning evidence in healthcare AI
    • What responsible scale actually looks like

    If you are a healthcare executive, board member, digital health leader, or AI product owner, this episode is a grounded discussion on what it takes to move from ambition to accountable execution.

    Connect with Brian Sutherland on LinkedIn:
    https://www.linkedin.com/in/briandsutherland/

    Subscribe for practical conversations at the intersection of leadership, ethics, and healthcare innovation.

    Support the show

    41 m
  • Why AI Verification, Not Speed or Model Accuracy, Is the Real Bottleneck in Pharmaceutical Drug Discovery
    Feb 18 2026

    AI is transforming drug discovery—but faster models alone do not get drugs approved.

    In this episode of The Signal Room, host Chris Hutchins speaks with David Finkelshteyn, CEO of Pivotal AI, about why verification—not speed or model accuracy—is the real bottleneck in pharmaceutical AI.

    David explains why generating AI-designed molecules without rigorous validation creates more risk than value, especially in regulated environments like pharma and healthcare. The conversation breaks down where AI outputs most often fail between discovery and regulatory acceptance, why black-box models struggle under scrutiny, and what it actually means to verify an AI insight in drug development.

    They also explore practical challenges around data integrity, auditability, missing context, hallucinations, and the growing use of consumer AI tools in health decisions. Rather than chasing hype, this episode focuses on how AI can responsibly accelerate drug development by failing faster, tightening verification loops, and building systems that can be defended to regulators, auditors, and clinicians.

    This episode is essential listening for leaders working in pharmaceutical R&D, healthcare AI, data science, AI governance, and regulated technology environments.

    Guest: David Finkelshteyn, CEO, Pivotal AI
    LinkedIn: https://www.linkedin.com/in/david-finkelshteyn-03191a130/

    38 m
  • No Alerts, Still Breached: Understanding Cybersecurity Risks and Ethical Leadership in Healthcare AI
    Feb 11 2026

    This episode explores ethical leadership and AI governance challenges in healthcare cybersecurity, emphasizing the risks of undetected breaches.

    In this episode of The Signal Room, Chris Hutchins speaks with Guman Chauhan, a cybersecurity and risk leader, about one of the most dangerous conditions in modern organizations: being breached and not knowing it. While dashboards stay green and alerts stay quiet, attackers increasingly operate using valid credentials, normal behavior patterns, and long dwell times—remaining invisible for weeks or months.

    Guman explains why “no alerts” is often mistaken for “no breach,” and why silence is one of the most misleading signals in cybersecurity. The conversation unpacks how attackers deliberately avoid detection, why security tools alone do not equal security outcomes, and where organizations create blind spots through untested assumptions, alert fatigue, and fragmented processes.

    They explore why undetected breaches are more damaging than known ones, how time compounds risk once attackers are inside, and what separates organizations that mature after incidents from those that repeat the same failures. Guman emphasizes that proven security is not built on policies, certifications, or dashboards—but on continuous testing, validated detection, and teams that know how to act under pressure.

    This episode is a practical guide for executives, security leaders, healthcare organizations, and regulated enterprises that need to move from assumed security to proven breach readiness.

    Guest: Guman Chauhan
    LinkedIn: https://www.linkedin.com/in/guman-chauhan-m-s-cissp-cism-600824103/

    Topics Covered

    • Why undetected breaches are more dangerous than known breaches
    • How attackers use valid credentials to avoid detection
    • Why “no alerts” does not mean “no breach”
    • Alert fatigue and the signal-to-noise problem
    • Security tools vs security outcomes
    • Visibility gaps, unknown assets, and logging failures
    • External penetration testing and real-world validation
    • Cultural and leadership factors in breach response
    • Assumed security vs proven security

    Key Takeaways

    • Silence is not security; it often means you are not seeing the right signals.
    • Most breaches go undetected because attackers behave like legitimate users.
    • Security tools do not fail—untested assumptions do.
    • Alert fatigue hides real risk by normalizing noise.
    • Proven security requires testing detection and response end to end.
    • Mature organizations treat breaches as learning moments, not events to hide.
    • Confidence without validation creates the most dangerous blind spots.

    Chapters / Timestamps

    00:00 – Why undetected breaches are the real risk
    02:30 – Being breached vs being breached and not knowing
    06:00 – How attackers stay invisible using valid credentials
    08:30 – Why dashboards and alerts create false confidence
    10:00 – Common reasons breaches go undetected for months
    13:30 – Security tools vs security outcomes
    16:00 – Technology, process, and people failures
    19:30 – Alert fatigue and finding real signals
    22:30 – Why external penetration testing still matters
    26:30 – What mature organizations do after a breach
    31:00 – One action to improve breach readiness this year
    32:45 – The uncomfortable question every leader should ask
    34:30 – Assumed security vs proven security
    36:30 – How to connect with Guman & closing

    34 m
  • Scaling Care with AI: Balancing Human Judgment and Clinical Trust in Healthcare
    Feb 4 2026

    What does it truly mean to scale care with AI inside a real hospital environment? In this episode of The Signal Room, host Chris Hutchins talks with Mark Gendreau, emergency physician and Chief Medical Officer, about the intersection of healthcare AI, ethical leadership, and AI strategy. Together, they discuss how AI is transforming clinical workflows by amplifying human judgment rather than replacing it.

    They explore real-world applications in healthcare AI such as radiology co-pilots, ambient clinical documentation, and workflow intelligence designed to relieve clinician burnout. Dr. Gendreau highlights the need for responsible AI and human oversight in high-reliability healthcare settings.

    The conversation also covers critical topics like AI governance, clinical trust, alert fatigue, and leadership accountability. Listeners will gain insights into why successful AI adoption in healthcare depends on culture and ethical leadership, not just technology.

    This episode is essential for healthcare leaders, clinicians, informaticists, and policymakers seeking practical guidance on AI readiness, ethical AI practices, and driving AI strategies that improve patient care while maintaining human judgment at the core.

    Key Takeaways

    • AI delivers the most value when it amplifies clinicians, not when it attempts to replace them
    • Human judgment is essential in high-risk clinical decisions, even with advanced AI support
    • Ambient documentation can dramatically reduce after-hours EHR work (“pajama time”)
    • Alert fatigue is a governance problem, not just a technical one
    • Trust in AI is built through reliability, transparency, and clear ethical intent
    • Successful AI adoption depends more on leadership and culture than IT execution
    • Interoperability and governance are the biggest barriers to scaling AI across health systems
    • Emotional intelligence, empathy, and shared decision-making remain human responsibilities

    Guest Info

    Mark Gendreau, MD, MS, CPE
    Emergency Medicine Physician | Chief Medical Officer

    Dr. Gendreau is an experienced emergency physician and healthcare executive with deep expertise in clinical operations, patient safety, and responsible AI adoption. He focuses on using technology to improve access, quality, and clinician experience while preserving the human core of medicine.

    🔗 LinkedIn: https://www.linkedin.com/in/markgendreaumd/

    Chapters (YouTube & Spotify)

    00:00 – Introduction and framing the AI scaling challenge
    01:18 – Workforce scarcity and why AI must amplify clinicians
    02:10 – AI in radiology: co-pilots, fatigue reduction, and safety
    05:26 – Ambient documentation and eliminating “pajama time”
    07:17 – Using AI to improve clinician communication and empathy
    09:33 – Where AI falls short and why humans must stay in the loop
    12:44 – Guardrails, trust, and human-AI partnership
    13:44 – Trust in AI vs trust in human relationships
    16:07 – Adoption curves and clinician buy-in
    18:05 – Why AI fails when treated as an IT project
    20:41 – Leadership’s role in shaping AI culture
    22:07 – Interoperability, governance, and scaling challenges
    26:04 – Signals that an organization is truly AI-ready
    29:26 – Emotional intelligence and where AI should never lead
    33:59 – Alert fatigue and governance accountability
    37:27 – Measuring success: outcomes, equity, and pajama time
    38:36 – How to connect with Dr. Gendreau
    39:31 – Episode close

    34 m
  • From AI Hype to Real Value: Crafting AI Strategy That Delivers Real Business Impact
    Jan 28 2026

    In this insightful episode of The Signal Room, host Chris Hutchins and guest Parth Gargish dive deep into building effective AI strategies that go beyond the hype to deliver real business value. With extensive experience in SaaS and AI-driven product development, Parth shares practical insights on developing AI-first approaches that prioritize ethical leadership, responsible AI adoption, and workforce readiness.

    Listeners will learn why successful AI in healthcare and other industries depends on strong leadership accountability, transparent communication, and establishing trust throughout the AI transformation process. The discussion highlights how targeted AI use cases can maximize ROI, focusing on solving business problems rather than chasing flashy technology demos.

    Key themes include AI governance, ethical AI practices, upskilling teams, and balancing human decision-making with AI capabilities. This episode is essential for healthcare leaders and AI experts looking to implement AI strategies that are both impactful and ethically sound.

    Join us as we explore how ethical leadership and responsible AI practices drive real value in AI adoption and help organizations navigate the complex landscape of AI in business strategy and healthcare.
    Key Takeaways

    • AI success starts with people and process, not tools
    • Small, targeted AI use cases often deliver the highest ROI
    • AI should enable teams, not replace human decision-making
    • Leadership transparency is critical during AI transitions
    • Real value comes from solving business problems, not showcasing technology

    Discussion Themes

    • AI-first strategy versus AI experimentation
    • Separating hype from real enterprise use cases
    • Workforce trust, upskilling, and change management
    • SaaS, customer support automation, and operational efficiency
    • Leadership accountability in AI adoption

    Guest Contact & Links

    LinkedIn: https://www.linkedin.com/in/parth-gargish-0803b897/

    Community: SaaS NXT (North American SaaS founder community)

    26 m
  • Why Healthcare AI Fails Without Complete Medical Records: Interoperability, Transparency & Patient Access
    Jan 21 2026

    Healthcare AI cannot deliver precision medicine without complete, interoperable medical records, which are essential for responsible AI implementation in healthcare. In this episode, recorded live at the Data First Conference in Las Vegas, Aleida Lanza, founder and CEO of Casedok, shares insights from her 35 years as a medical malpractice paralegal on why fragmented records and inaccessible data continue to undermine care quality, safety, and trust in healthcare AI.

    We dive deep into why interoperability must extend beyond the core clinical record to include the full spectrum of healthcare data—images, itemized bills, claims history, and even records trapped in paper or PDFs. Aleida argues that patient ownership and transparency of their health information, a critical element of healthcare ethics, are key to overcoming these challenges and enabling ethical leadership in healthcare AI.

    This episode also highlights the significant risks posed by missing data bias in healthcare AI, explaining how incomplete records prevent AI systems from accurately detecting patient needs. Aleida outlines how complete medical record transparency and safe AI collaboration can transform healthcare from static averages to truly personalized, informed care, aligning with principles of ethical AI and responsible AI deployment.

    If you're involved in healthcare leadership, AI strategy, data governance, or healthcare ethics, this episode offers valuable perspectives on AI readiness, healthcare AI regulation, and the urgent need to improve interoperability for better patient outcomes.

    Key topics covered

    • Why interoperability must include the entire medical record
    • Patient ownership, transparency, and access to health data
    • The hidden cost of fragmented records and repeated history-taking
    • Why static averages fail patients and clinicians
    • Precision medicine vs static medicine
    • Safe AI deployment without hallucination or data leakage
    • Missing data as the most dangerous bias in healthcare AI
    • Emergency access to complete history as a patient safety issue
    • Medicare, payer integration, and large-scale access challenges

    Chapters

    00:00 Live from Data First Conference
    01:20 Why interoperability is more than clinical data
    03:40 Fragmentation, static medicine, and broken incentives
    05:55 Why AI needs complete patient history
    08:10 Missing data as invisible bias
    10:55 Emergency care and inaccessible records
    12:40 Patient ownership and transparency
    14:30 Precision medicine and AI safety
    16:10 Why patients should own what they paid for
    18:30 How to connect with Aleida Lanza

    Stay tuned. Stay curious. Stay human.

    #HealthcareAI #Interoperability #PatientData

    16 m
  • AI Ethics & Ethical Leadership in Healthcare: Building Trust Without Losing Humanity
    Jan 14 2026

    Recorded live at the Put Data First AI conference in Hollywood, Las Vegas, this episode of The Signal Room features a deep conversation between Chris Hutchins and Asha Mahesh, an expert in AI ethics, ethical leadership, and responsible data use in healthcare. The discussion goes beyond hype to examine what it truly means to humanize AI for care and build trust through ethical leadership and sound AI strategy.

    Asha shares her personal journey into ethics and technology, shaped by lifelong proximity to healthcare and a commitment to ensuring innovation serves patients, clinicians, and communities. Together, they explore how ethical AI in healthcare is not just a policy document, but a way of working embedded into culture, incentives, and daily decision-making.

    Key themes include building trust amid skepticism, addressing fears of job displacement, and reframing AI adoption through a 'what's in it for you' lens. Real-world examples from COVID vaccine development show how AI, guided by purpose and urgency, can accelerate clinical trials without sacrificing responsibility.

    The conversation also discusses human-in-the-loop systems, the irreplaceable roles of empathy and judgment, and the importance of transparency and humility in healthcare leadership. This episode is essential listening for healthcare leaders, life sciences professionals, and AI practitioners navigating the ethical crossroads of trust and innovation.
    Chapters

    00:00 – Live from Put Data First: Why AI Ethics Matters in Healthcare
    Chris Hutchins opens the conversation live from the Put Data First AI conference in Las Vegas, framing why ethics, privacy, and trust are amplified challenges in healthcare and life sciences.

    01:05 – Asha’s Path into AI Ethics, Privacy, and Life Sciences
    Asha shares her personal journey into healthcare technology, data, and AI ethics, shaped by early exposure to hospitals, science, and real-world impact.

    03:00 – Human Impact as the North Star for Healthcare AI
    Why improving patient outcomes, not technology novelty, must guide AI strategy, data science, and innovation decisions in healthcare.

    04:30 – Humanizing AI for Care: Purpose Before Technology
    A discussion on what “human-centered AI” really means and how intention and intended use define whether AI helps or harms.

    06:20 – Embedding Ethics into Culture, Not Policy Documents
    Why ethical AI is not a checklist or white paper, but a set of behaviors, incentives, and ways of working embedded into organizational culture.

    07:55 – COVID Vaccine Development: AI Done Right
    A real-world example of how data, machine learning, and predictive models accelerated clinical trials during the pandemic while maintaining responsibility.

    10:15 – Mission Over Technology: Lessons from the Pandemic
    How urgency, shared purpose, and collaboration unlocked innovation faster than tools alone, and why that mindset should not require a crisis.

    12:20 – The Erosion of Trust in Institutions and Technology
    Chris reflects on declining trust in government, healthcare, and technology, and why AI leaders must now operate from a trust deficit.

    14:10 – Fear and AI: Addressing Job Loss Concerns
    A practical conversation on why fear of AI replacing jobs persists and how leaders can reframe AI as support, not replacement.

    16:30 – “What’s In It for You?” A Human-Centered Adoption Framework
    How focusing on individual value, workflow relief, and personal benefit increases trust and adoption of AI tools in healthcare and life sciences.

    18:00 – How Human Should AI Be?

    22 m
  • Why Healthcare Isn’t Ready for AI Yet | Emotional Readiness, Just Culture & Leadership Trust
    Jan 7 2026

    Healthcare can’t be technologically ready for AI until it’s emotionally ready first.

    In this episode of The Signal Room, host Chris Hutchins sits down with Susie Brannigan — a trauma-informed nurse executive, Just Culture leader, and AI ethics advocate — to explore the human readiness gap in healthcare transformation.

    Susie explains why trust must be rebuilt before new systems (Epic, AI, automation) can succeed, and how leaders can shift culture from blame to learning, from burnout to belonging. Drawing from real unit experience and frontline realities, she breaks down what emotionally safe leadership looks like during implementation, why “pilot” language often erodes credibility, and how Just Culture + trauma-informed leadership create the psychological safety required for change.

    We also discuss where AI can genuinely help clinicians (and where it can go too far), including guardrails for empathy, presence, and patient-facing AI interactions. If you’re leading digital transformation, managing workforce fatigue, or trying to implement AI without losing your people, this conversation is a practical guide.

    Key topics covered

    • The human readiness gap: emotional readiness before technological readiness
    • Trust erosion in healthcare leadership and why it blocks adoption
    • Epic implementation lessons: skill gaps, overtime, and unit-level support
    • What Just Culture is and how it reduces fear and turnover
    • Trauma-informed leadership and psychological safety on high-acuity units
    • Emotional intelligence alongside data literacy as a core leadership skill
    • Designing AI with empathy, guardrails, and clinical accountability
    • Practical advice for leaders: rounding with purpose, supporting staff, choosing sub-leaders

    Chapters

    00:00 Emotional readiness and the human readiness gap
    01:10 Why implementations fail without trust
    07:20 Epic vs AI: why this shift feels different
    09:10 What Just Culture is and why it works
    11:20 Trauma-informed leadership and secondary trauma
    19:40 Emotional intelligence in tech-driven environments
    22:10 AI, empathy, and guardrails for patient-facing tools
    29:30 Coaching and simulation: preparing nurses for crisis care
    34:40 Leadership advice for AI-era change
    38:20 How to connect with Susie Brannigan
    42:10 Closing

    Connect with Susie Brannigan

    • LinkedIn: Susie Brannigan
    • Business page: Susie Brannigan Consulting
      (Susie shares culture assessments, Just Culture training, trauma-informed training, and leadership support across healthcare and other industries.)

    If this episode resonated, share it with a leader who’s trying to implement change without losing trust. The future of healthcare transformation depends on psychological safety.

    Stay curious. Stay human.

    #JustCulture #HealthcareLeadership #AIinHealthcare

    38 m