The week X's Grok AI went Nazi
In the rapidly growing world of generative AI chatbots, Grok stands out. Created by Elon Musk's xAI and touted as a "politically incorrect," "anti-woke" alternative to models like ChatGPT, Grok has become a pervasive presence on Musk's social media platform X. So a lot of people took notice earlier this month when Grok started spouting anti-Semitic stereotypes, making violent sexually charged threats, and dubbing itself "MechaHitler."
xAI says it has fixed the issue, which was introduced in a recent update, but the incident has raised concerns about the apparent lack of guardrails on the technology — particularly when, a week later, the company launched personal AI "companion" characters, including a female anime character with an X-rated mode, and won a US$200 million contract with the U.S. Department of Defense.
Kate Conger — a technology reporter at The New York Times and co-author of the book Character Limit: How Elon Musk Destroyed Twitter — explains what led to Grok's most recent online meltdown and the broader safety concerns about the largely untested technology behind it.
For transcripts of Front Burner, please visit: https://www.cbc.ca/radio/frontburner/transcripts