The Four Pillars of Trustworthy AI—and Who Owns Them
Trust in AI isn’t a vibe—it’s something you can intentionally design for (or accidentally break). In this episode, Galen sits down with Cal Al-Dhubaib to unpack “trust engineering”: a shared toolkit that helps cross-functional teams (engineering, UX, governance, risk, and business) talk about the same trust risks in the same language. They get into why “boring AI is safe AI,” how guardrails and human handoffs preserve trust in practice, and why the biggest failures often aren’t the model—they’re the systems (and incentives) wrapped around it.
You’ll also hear real-world examples of trust going sideways—from biased outcomes, to hallucinated “gaslighting,” to accuracy problems in AI-assisted deliverables—and what project leaders can do to prevent finger-pointing when it happens.
Resources from this episode:
- Join the Digital Project Manager Community
- Subscribe to the newsletter to get our latest articles and podcasts
- Connect with Cal on LinkedIn
- Check out Further
- AI Incident Database