Hacking AI Podcast Por  arte de portada

Hacking AI

Hacking AI

Escúchala gratis

Ver detalles del espectáculo

Obtén 3 meses por US$0.99 al mes + $20 crédito Audible

AI has brought incredible new capabilities into everyday technology, but it's also creating security challenges that most people haven't fully wrapped their heads around yet. As these systems become more capable and more deeply connected to the tools and data we rely on, the risks become harder to predict and much more complicated to manage.

My guest today is Rich Smith, who leads offensive research at MindGard and has spent more than twenty years working on the front lines of cybersecurity. Rich has held leadership roles at organizations like Crash Override, Gemini, Duo Security, Cisco, and Etsy, and he's spent most of his career trying to understand how real attackers think and where systems break under pressure.

We talk about how AI is changing the way attacks happen, why the old methods of testing security don't translate well anymore, and what happens when models behave in ways no one expected. Rich also explains why psychology now plays a surprising role in hacking AI systems, where companies are accidentally creating new openings for exploitation, and what everyday users should keep in mind when trusting AI with personal information. It's a fascinating look behind the curtain at what's really going on in AI security right now.

Show Notes:
  • [01:00] Rich describes getting into hacking as a kid and bypassing his brother's disk password.
  • [03:38] He talks about discovering Linux and teaching himself through early online systems.
  • [05:07] Rich explains how offensive security became his career and passion.
  • [08:00] Discussion of curiosity, challenge, and the appeal of breaking systems others built.
  • [09:45] Rich shares surprising real-world vulnerabilities found in large organizations.
  • [11:20] Story about discovering a major security flaw in a banking platform.
  • [12:50] Example of a bot attack against an online game that used his own open-source tool.
  • [16:26] Common security gaps caused by debugging code and staging environments.
  • [17:43] Rich explains how AI has fundamentally changed offensive cybersecurity.
  • [19:30] Why binary vulnerability testing no longer applies to generative AI.
  • [21:00] The role of statistics and repeated prompts in evaluating AI risk and failure.
  • [23:45] Base64 encoding used to bypass filters and trick models.
  • [27:07] Differentiating between model safety and full system security.
  • [30:41] Risks created when AI models are connected to external tools and infrastructure.
  • [32:55] The difficulty of securing Python execution environments used by AI systems.
  • [35:56] How social engineering and psychology are becoming new attack surfaces.
  • [38:00] Building psychological profiles of models to manipulate behavior.
  • [42:14] Ethical considerations and moral questions around AI exploitation.
  • [44:05] Rich discusses consumer fears and hype around AI's future.
  • [45:54] Advice on privacy and cautious adoption of emerging technology.

Thanks for joining us on Easy Prey. Be sure to subscribe to our podcast on iTunes and leave a nice review.

Links and Resources:
  • Podcast Web Page
  • Facebook Page
  • whatismyipaddress.com
  • Easy Prey on Instagram
  • Easy Prey on Twitter
  • Easy Prey on LinkedIn
  • Easy Prey on YouTube
  • Easy Prey on Pinterest
  • Mindgard
  • Rich.Smith@Mindgard.ai
Todavía no hay opiniones