Earley AI Podcast - Episode 85: AI Security, Shadow IT, and the Governance Reset with Rob Lee


Why Security Teams Are Being Asked to Do Three New Jobs - and What to Do About It

Guest: Rob Lee, Chief AI Officer and Chief of Research at SANS Institute

Host: Seth Earley, CEO at Earley Information Science

Published on: March 27, 2026

In this episode, Seth Earley speaks with Rob Lee, Chief AI Officer and Chief of Research at SANS Institute, about why AI governance is broken in most organizations - and what it actually takes to fix it. They explore why security teams are being asked to simultaneously govern, adopt, and defend AI, why the default "framework of no" drives shadow IT rather than preventing risk, and what a practical reset of AI governance looks like. Rob also shares why agents should be treated like workers rather than software, and why executives cannot afford to outsource their understanding of AI to anyone else.

Key Takeaways:

  • Security teams are now being asked to do three new jobs at once - evaluate AI tools for the organization, drive their own AI transformation, and manage governance and regulatory compliance.
  • The default "framework of no" does not prevent AI use - it drives it underground, creating shadow IT that is far harder to monitor and control than sanctioned tools.
  • Governance needs a stoplight model - green means experiment freely, yellow means involve security as a lifeguard, red means stop - with the default answer being yes unless there is a clear reason to say no.
  • AI governance documents written before generative AI arrived are already outdated - most say nothing about agentic workflows, human-in-the-loop requirements, or connector permissions.
  • Agents should be treated like workers, not software - they reason, improvise, and operate 24/7, which means they require the same zero-trust principles, oversight structures, and ethical guardrails as human employees.
  • Executives cannot outsource their understanding of AI to security teams - AI literacy at the C-suite level is a competitive requirement, not an optional capability.
  • Good governance is not about documenting every possible bad outcome - it is about establishing overarching goals and building a culture of trust with enough guardrails to prevent the truly stupid risks.

Insightful Quotes:

"The framework security teams are using is a framework of no. And that framework of no is causing people to use AI secretly, regardless of what the security team says." - Rob Lee

"An agent in the future - and some organizations are already treating it this way - is a worker. Everything you ask about governing agents, replace that with a human who just got hired. The same rules apply." - Rob Lee

"You can't automate what you don't understand - and with agents, the stakes are even higher. An agentic mistake isn't a wrong paragraph, it's a blocked critical system." - Seth Earley

Tune in to discover how security and executive leaders can move from a governance posture of restriction to one that enables innovation, manages real risk, and keeps organizations competitive in the age of agentic AI.

Links:

LinkedIn: https://www.linkedin.com/in/leerob/

Website: https://www.sans.org

Sponsor: VKTR - https://www.vktr.com/


Thanks to our sponsors:

  • VKTR
  • Earley Information Science
  • AI Powered Enterprise Book