Superintelligence asks the questions: What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? Nick Bostrom lays the foundation for understanding the future of humanity and intelligent life. The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. If machine brains surpassed human brains in general intelligence, then this new superintelligence could become extremely powerful - possibly beyond our control. As the fate of the gorillas now depends more on humans than on the species itself, so would the fate of humankind depend on the actions of the machine superintelligence.
But we have one advantage: We get to make the first move. Will it be possible to construct a seed Artificial Intelligence, to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?
This profoundly ambitious and original book breaks down a vast tract of difficult intellectual terrain. After an utterly engrossing journey that takes us to the frontiers of thinking about the human condition and the future of intelligent life, we find in Nick Bostrom's work nothing less than a reconceptualization of the essential task of our time.
©2014 Nick Bostrom (P)2014 Audible Inc.
"Hell is other people"
This book is more frightening than any book you'll ever read. The author makes a great case for what the future holds for us humans. I believe the concepts in "The Singularity is Near" by Ray Kurzweil are mostly spot on, but the one area Kurzweil dismisses prematurely is how the SI (superintelligent advanced artificial intelligence) entity will react to its circumstances.
The book doesn't really dwell much on how the SI will be created. The author mostly assumes a computer algorithm of some kind, perhaps with human brain enhancements. If you reject such an SI entity prima facie, this book is not for you, since the book largely assumes that such a recursive, self-aware, and self-improving entity lies in humanity's future.
The author makes some incredibly good points. He mostly hypothesizes that the SI entity will be a singleton that will not allow others of its kind to be created independently, and that it will emerge on a much faster timeline once certain milestones are reached.
The book points out how hard it is to build safeguards into a procedure to guard against unintended consequences. For example, making "the greatest good for the greatest number" the final goal can lead to unintended consequences such as allowing a Nazi-ruled world (he doesn't give that example directly in the book; I borrow it from Karl Popper, who offered it as a refutation of John Stuart Mill's utilitarian philosophy). If the goal is to make us all smile, the SI entity might implant brain probes that force us to smile. There is no easily specifiable end goal without unintended consequences.
This kind of thinking within the book is another reason I can recommend it. As I was listening, I realized that all the ways we try to motivate or control an SI entity to be moral can also be applied to us humans in order to make us moral too. Morality is hard both for us humans and for future SI entities.
There's a movie from the early 70s called "Colossus: The Forbin Project" that really is a template for this book, and I would recommend watching the movie before reading the book.
I just recently listened to the book "Our Final Invention" by James Barrat, which covers the same material presented here. This book is much better even though they overlap greatly. The reason is that this author, Nick Bostrom, is a philosopher who knows how to lay out his premises in such a way that the story he is telling is consistent and coherent, with a narrative to tie the pieces together (even if that narrative will scare the daylights out of the listener).
This author has really thought about the problems inherent in an SI entity, and this book will be a template for almost all future books on this subject.
There is not much math in this book, and not many pictures or tables. Usually that is a good indicator that I'll be able to follow along in an audio version. That was not true of this book. I listen to audiobooks while doing menial tasks involving infrequent and brief moments of concentration; with most books I am able to do this easily, but this book requires some pondering and digestion. Any distraction seemed to be enough to miss something important. Perhaps some of this was due to the narrator's smooth baritone, which, for reasons I don't know, I didn't like. I plan on getting the hard copy and reading this one in silence. This book is definitely a must-read, but it also seems it must be read slowly. Put it down, think about it, talk about it with your friends; then and only then move on to the next chapter.
Every chapter is more or less the author proposing an idea or prediction, and then exhaustively defining and constraining the solution space for that idea, e.g., AI could be done via method X, which would enable A, B, C, D, but would exclude J, K, L, M, N, etc.
Except that it's done over an hour.
So, every detail is treated very well, and it's an interesting process, but near the end I just couldn't take it any more and had to skip parts. :)
Nick Bostrom's Superintelligence takes you on a journey through a sea of terminology and educated predictions to provide a stark and clear picture of the problems we face as a species as we approach the singularity. The book is easy enough to work through, and much more theoretical and practical than technical. Absolutely worth a read/listen for anyone worried or curious about how, when, or why machine intelligence will change humanity.
I wish it was, but it only takes a couple of minutes before my mind starts wandering and the narrator is just idle background noise.
Read the book instead of listen to it.
The narrator speaks clearly and eloquently but the tone and meter were just impossible for me to enjoy. He didn't appear to be at all interested or passionate about the subject matter and instead just sounded like he was reading a script full of Star Trek technobabble and was just completely bored.
The book is worth the listen because it is a very good and thorough exposition of one of the major technological problems and risks approaching us in the very near future. Anything that can bring popular awareness to this and similar issues is of great value.
On the downside, the author is so committed to maintaining a scholarly, non-committal tone that he fails to make definite statements about any topic, even when he could do so.
At times there are logical fallacies in the arguments, and assumptions about the nature of Artificial Intelligences that appear to be groundless and are not supported by explanation.
There is also a tendency to quote and rely on a variety of "celebrity" experts, whose track records in technology have more recently led them down alleys of almost clownish obsolescence in one case, and into over-confidence leading to fallacies and mistakes in their work in the other.
I would not take this book as gospel on superintelligence. Rather, it is a worthwhile entry into the current fieldwork on the subject, such as it is.
Great content, which only occasionally failed to keep my attention. Probably a very important book nonetheless. Recommended to all who want to learn and not necessarily be entertained.
My mind was definitely blown by a bunch of ideas I encountered here, so I recommend this book on that basis. That said, it moves slowly because the author invests so much energy in being thorough and always making the technically correct statement (lots of "if," "likely to," "maybe," "under the condition," etc.). It's rigorous, but in my view unnecessarily so, because I'm a forgiving reader and don't need every statement to be qualified.
I am about 1/4 of the way through this book and am not sure I will be able to remember anything about it because the narration is completely mismatched and is extremely distracting. The narrator has a great voice for fiction, but the delivery is annoying for this genre. There's just no way I can stand to finish this book.
"Deeply Insightful and very thorough. Bad narration"
For anyone interested in AI it is a must-read as it covers many possible scenarios that the reader would have never been able to imagine without consulting this book.
However it's not ideal for beginners. Bostrom introduces the concept of superintelligence assuming that the reader is familiar with artificial intelligence, and quickly moves onto scenarios and existential risk.
The narrator makes no effort to put emotion into what he says. Every sentence and every statement sounds the same, no matter what the topic is. Listening to it is more akin to a text-to-speech narration than to a storyteller. Admittedly, the Swedish syntax of short sentences does not help. Nevertheless, the narration could be greatly improved.
This is a factual book, but the reader narrates it like it's The Lord of the Rings! It's really distracting, and I have no idea how this was signed off.
I must concur with previous reviewers: how did this ridiculous, over-dramatized narration get past quality control before release? A pity for such a substantial text. I have tried to tolerate it for a chapter or two, but must unfortunately give up. A re-issue with suitable narration is to be hoped for.
"The most/last important book you'll ever read"
This is an intelligent, passionate, and thoughtful book for a general, educated audience. It's hard at times, but saving humanity usually is hard.
I've followed Bostrom's academic writing for some time on matters relating to existential risk. He's a cogent antidote to conspiracy theories, taking seriously our own and nature's capacity to bring about human extinction.
This book outlines the likelihoods and timescales of different technologies creating an intelligence orders of magnitude beyond our own; the possible outcomes, good and bad, for humanity; and ways we can manage and mitigate the effects. In essence, its message is that sooner or later we will likely create an intelligence vastly beyond our own, and without careful planning (say, by failing to encode this intelligence to optimize what we humans care about: freedom of choice, minimizing pain, beauty, etc.) we could very likely be superseded, if not destroyed.
It's all speculative, of course, as is any book about the future. But it's foolish not to plan for rainy days. This is one of those books that humbles you; it makes your daily battle against confectionery, or anxiety over relationships, or vanity about your position in society seem petty.
"Buy the physical book"
Someone should have directed the narrator to give a less hammy and over-dramatic performance of what is a non-fiction book.
Had I known the book makes many references to figures in the print version, I wouldn't have downloaded it.
"Sci-fi without the fiction"
Realistic and scientific predictions of our exciting, sci-fi-like future. Very interesting, with none of the silly endings you get in movies.
It could be much shorter; some parts are repeated several times, though that is probably good for context.
"Great book with thoughtful considerations"
I loved the first few chapters, which gave a great outline of where we are with the relevant technologies and what the obstacles to progress are. I would recommend buying the book for this alone, even if it consisted of only the first 4-5 chapters.
My only criticism would be that the author fixates a little on the idea of a superintelligence with a very simple goal system, e.g., making as many paperclips as possible. My own view is that in the process of recursive self-improvement, the AI's goal system would develop in line with the rest of its intellect, and it would end up with more sophisticated, not more simplistic, goals than humans. This could of course bring its own risks and is inherently unpredictable, but it doesn't necessarily equate to the default of existential catastrophe asserted in the book.
It is of course expected that there will be different views on this and that my own may be wrong.