Will AGI Replace Us? The Terrifying Math of the AI Control Problem
Spoiler alert: It's not just sci-fi paranoia anymore; it's practically math. In this episode, we are reacting to and unpacking a mind-bending interview from the This is the World YouTube channel featuring renowned physicist Anthony Aguirre. And trust us, his take on Artificial General Intelligence (AGI) will make you look at your smart speaker a whole lot differently.
Aguirre drops a massive reality check: the current race to build AGI isn't about empowering humanity, it's about replacing it. We dive deep into his fascinating (and slightly terrifying) argument separating the helpful AI tools of today from the autonomous, goal-driven superintelligence of tomorrow. Aguirre doesn't just rely on tech jargon; he reaches for the literal laws of physics to explain our impending doom. Applying the second law of thermodynamics to the "AI control problem," he argues that a superintelligent system has a million ways to wreck our world and very few ways to save it.
In this episode, we break down the internet’s biggest questions about the AGI threat:
- The Physics of Doom: How does the second law of thermodynamics prove that uncontrolled AI will naturally lead to catastrophe?
- Empowerment vs. Replacement: Why the tech billionaires' race for Artificial General Intelligence is fundamentally at odds with human survival.
- The Consciousness Trap: Do these systems have true awareness, or are we just projecting humanity onto code? (And why our lack of a scientific theory of consciousness makes this so dangerous).
- The Control Problem Explained: What happens when an AI develops superior strategic depth and its own goals?
👇 Let’s shape the future together!
If this episode made you think, question, or slightly panic (in a good way!), hit that Subscribe button so you never miss our deep dives! Share this episode with a friend who loves geeking out over tech and philosophy, and drop a comment or a 5-star review to let us know: Do you think AGI is conscious? Let's get the debate started below!
Become a supporter of this podcast: https://www.spreaker.com/podcast/thrilling-threads-conspiracy-theories-strange-phenomena-unsolved-mysteries-etc--5995429/support.
You may also like my other FREE web apps:
SkyNearMe.com – Your all-in-one "Sky Super-App." Track real-time weather, sunset times and air quality, stargazing conditions, 5G signal mapping, drone flight zones, solar potential, satellites, rocket launches, and UFO sightings in your local airspace, and even get your Sky Horoscope and more!
MyDisasterPrepKit.com – Gamified survival training. Generate custom survival plans and simulate scenarios ranging from hurricanes to zombie outbreaks.
🤖 Nudgrr.com (🗣 "nudger") – Your AI Sidekick for Getting Sh*t Done
Nudgrr breaks down your biggest goals into tiny, doable steps — then nudges you to actually do them.