Ghosts in the Training Data: When Old Breaches Poison New AI



In this narrated edition of Ghosts in the Training Data: When Old Breaches Poison New AI, we explore how years of incidents, leaks, and scraped datasets quietly shape the behavior of your most important models. You will hear how stolen code, rushed hotfixes, crooked incident logs, and brokered context move from "someone else's breach" into the background radiation of modern AI platforms. This Wednesday "Headline" feature from Bare Metal Cyber Magazine focuses on leaders' concerns: trust, accountability, and how much control you really have over the histories your models learn from.

The episode walks through the full arc of the article: how breaches refuse to stay in the past, how contaminated corpora become ground truth, and how defensive AI built on crooked histories can miss what matters. It then shifts to business AI running on stolen or opaque context, before closing with a practical framing for governing training data like a supply chain. Along the way, you will get language to talk with boards, vendors, and internal teams about data provenance, model risk, and the leadership moves that turn invisible ghosts into visible dependencies you can actually manage.
