
If Anyone Builds It, Everyone Dies
Why Superhuman AI Would Kill Us All

Buy now for $22.49
Narrated by: Rafe Beckley
"May prove to be the most important book of our time.”—Tim Urban, Wait But Why
The scramble to create superhuman AI has put us on the path to extinction—but it’s not too late to change course, as two of the field’s earliest researchers explain in this clarion call for humanity.
In 2023, hundreds of AI luminaries signed an open letter warning that artificial intelligence poses a serious risk of human extinction. Since then, the AI race has only intensified. Companies and countries are rushing to build machines that will be smarter than any person. And the world is devastatingly unprepared for what would come next.
For decades, two signatories of that letter—Eliezer Yudkowsky and Nate Soares—have studied how smarter-than-human intelligences will think, behave, and pursue their objectives. Their research says that sufficiently smart AIs will develop goals of their own that put them in conflict with us—and that if it comes to conflict, an artificial superintelligence would crush us. The contest wouldn’t even be close.
How could a machine superintelligence wipe out our entire species? Why would it want to? Would it want anything at all? In this urgent book, Yudkowsky and Soares walk through the theory and the evidence, present one possible extinction scenario, and explain what it would take for humanity to survive.
The world is racing to build something truly new under the sun. And if anyone builds it, everyone dies.
Critical Reviews
“The most important book of the decade. This captivating page-turner, from two of today’s clearest thinkers, reveals that the competition to build smarter-than-human machines isn’t an arms race but a suicide race, fueled by wishful thinking.”—Max Tegmark, author of Life 3.0: Being Human in the Age of AI
“If Anyone Builds It, Everyone Dies may prove to be the most important book of our time. Yudkowsky and Soares believe we are nowhere near ready to make the transition to superintelligence safely, leaving us on the fast track to extinction. Through the use of parables and crystal-clear explainers, they convey their reasoning, in an urgent plea for us to save ourselves while we still can.”—Tim Urban, cofounder, Wait But Why
“The best no-nonsense, simple explanation of the AI risk problem I've ever read.”—Yishan Wong, former CEO of Reddit
This book came out today. I started reading this morning and finished this afternoon. I was excited to read it, to say the least, and Yudkowsky and Soares didn't disappoint.
If you don't think AI is the biggest threat facing humanity, read this book. If you do, buy copies for your friends and family. Let's hope the authors are wrong and fight as if they're right.
Excellent explainer for the most important problem
Everyone should be able to agree, given the points made in this book, that the current state of AI development is reckless. This is the Silent Spring of the 21st century, and we have far less time to react. Even if you are familiar with the topic, reading the book and the accompanying online resources will probably bring to the forefront just how dangerous and critical an issue this is.
Everyone needs to read this book.
This book, if read, understood, and taken with the seriousness it deserves, could turn out to be more important than the Bible, the Quran, the Veda, Principia, Wealth of Nations, and Das Kapital combined.
If people get this book, it's the most important book in the history of the known universe.
It's hard to overstate how important it is for this message to reach the right decision-makers and thought-leaders.
These individuals need to hear and understand the central message of this book: Please stop. Don't just slow down, or lean on some techno-optimist crutch, or blindly accept ultimatums like "Embrace AI or get out". Stop!
And learn WHY - by reading and understanding the argument in this book.
This book is an alarm, and everyone needs to understand why it is wailing
Most important book of all time
We are a long way off from solving what's known as the alignment problem.
If anyone released artificial superintelligence into the world using anything like our current process, it would almost certainly be misaligned with the interests of humankind.
Artificial superintelligence would have the capability to radically alter, and even extinguish, human (and all) life on Earth.
Once it had been released, we would not be able to stop it, contain it, or fix it should it decide to hurt us (or do anything we don't like).
We are moving toward superintelligence with reckless abandon rather than an appropriate sense of caution.
Since we don't know how close we are to achieving superintelligence, the only reasonable way to reduce the existential threat is to globally pause research toward superintelligence.
I sincerely hope this book fuels the global discussions on AI safety.
Might turn out to be the most important book of our time
The vision
A Must Read
Thought-provoking and important
The authors argue (in a readable way, and with deep intellectual integrity and attention to detail) that this tech is more like 100% likely to kill us, and that we therefore shouldn't build it.
I don't say this sort of thing often -- I usually try not to be involved in political advocacy or conversation, lest it make all of us scared or crazy. But for this one issue: please read it, if you care what happens for human life on Earth. And please bring in normal common sense and talk about it with your friends. Don't let companies hypnotized by science and power decide the whole future for all of us forever, at least not without noticing first.
(I also liked the narrator; he made it easy for me to understand the content, and to be not too jostled by it emotionally. I had an easier time understanding the audiobook than the written book, but I personally often prefer audio content.)
Good narrator; Vital book