Authentic Intelligence: Designing Responsible AI for Healthcare
This episode of The Signal Room dives into AI strategy and readiness, focusing on the design principles that truly matter in healthcare AI. Chris sits down with Keshavan Shashadri, Senior Machine Learning Engineer, for a grounded conversation on authentic intelligence: AI systems designed to understand context, respect human judgment, and recognize their limits.
Together, they explore why context is crucial in healthcare AI and where it often breaks down: patient history, clinical workflows, institutional policy, regulation, and human availability. Keshavan outlines four critical layers of context necessary for building AI systems that are trusted, safe, and effective.
The discussion covers why large language models (LLMs) are not replacements for doctors, the importance of AI supporting rather than supplanting clinical judgment, and the need for human-in-the-loop checkpoints wherever risk is significant. It also distinguishes transparency from true explainability in regulated environments and highlights that AI bias often arises from what a system doesn't know rather than what it does.
This episode is a practical, ethical, and strategy-driven discussion on deploying responsible AI in healthcare leadership. If you're invested in healthcare ethics, AI regulation, and designing AI systems that earn trust, this conversation is essential listening.