When AI Cannibalizes Its Data

Asked ChatGPT anything lately? Talked with a customer service chatbot? Read the results of Google's "AI Overviews" summary feature? If you've used the Internet lately, chances are, you've consumed content created by a large language model. These models, like DeepSeek-R1 or OpenAI's ChatGPT, are kind of like the predictive text feature in your phone on steroids. In order for them to "learn" how to write, the models are trained on millions of examples of human-written text. Thanks in part to these same large language models, a lot of content on the Internet today is written by generative AI. That means that AI models trained nowadays may be consuming their own synthetic content ... and suffering the consequences.
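The failure mode the episode describes, models degrading as they train on their own output, is often called "model collapse" in the research literature. A minimal sketch (not from the episode, and a deliberately toy stand-in for a real language model): repeatedly fit a simple Gaussian "model" to data, then replace the data with samples from that model. Over generations, the estimated spread drifts toward zero and the model forgets the tails of the original distribution. The function name and parameters here are illustrative choices, not anything NPR or the cited models use.

```python
import random
import statistics

def model_collapse_demo(n_samples=20, generations=500, seed=0):
    """Toy illustration of recursive training on synthetic data.

    Each 'generation' fits a Gaussian (mean, std) to its training
    data, then the next generation trains only on samples drawn
    from that fitted model. With finite samples, the estimated
    spread tends to shrink generation over generation.
    """
    rng = random.Random(seed)
    # Generation 0 trains on "human-written" data: a standard normal.
    data = [rng.gauss(0.0, 1.0) for _ in range(n_samples)]
    spreads = [statistics.stdev(data)]
    for _ in range(generations):
        mu = statistics.mean(data)
        sigma = statistics.stdev(data)
        # Next generation's training set is the previous model's output.
        data = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        spreads.append(statistics.stdev(data))
    return spreads

spreads = model_collapse_demo()
print(f"spread at generation 0: {spreads[0]:.3f}")
print(f"spread at generation {len(spreads) - 1}: {spreads[-1]:.6f}")
```

Real language models are vastly more complex, but the mechanism sketched here, estimation noise compounding across generations of self-training, is the core of the problem the episode discusses.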

View the AI-generated images mentioned in this episode.

Have another topic in artificial intelligence you want us to cover? Let us know by emailing shortwave@npr.org!

Listen to every episode of Short Wave sponsor-free and support our work at NPR by signing up for Short Wave+ at plus.npr.org/shortwave.

To manage podcast ad preferences, review the links below:

See pcm.adswizz.com for information about our collection and use of personal data for sponsorship and to manage your podcast sponsorship preferences.

Learn more about sponsor message choices: podcastchoices.com/adchoices

NPR Privacy Policy
This podcast delves into the interesting topic of how large language models cannot use their own output as a reference, due to AI's inability to interpret the data it collects.

Very fascinating podcast highlighting AI
