• Superintelligence

  • Paths, Dangers, Strategies
  • By: Nick Bostrom
  • Narrated by: Napoleon Ryan
  • Length: 14 hrs and 17 mins
  • 4.1 out of 5 stars (4,263 ratings)

Publisher's summary

Superintelligence asks the questions: What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? Nick Bostrom lays the foundation for understanding the future of humanity and intelligent life. The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. If machine brains surpassed human brains in general intelligence, then this new superintelligence could become extremely powerful - possibly beyond our control. Just as the fate of the gorillas now depends more on humans than on the gorillas themselves, so would the fate of humankind depend on the actions of the machine superintelligence.

But we have one advantage: We get to make the first move. Will it be possible to construct a seed Artificial Intelligence, to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?

This profoundly ambitious and original book breaks down a vast tract of difficult intellectual terrain. After an utterly engrossing journey that takes us to the frontiers of thinking about the human condition and the future of intelligent life, we find in Nick Bostrom's work nothing less than a reconceptualization of the essential task of our time.

©2014 Nick Bostrom (P)2014 Audible Inc.

What listeners say about Superintelligence

Average customer ratings
  • Overall: 4 out of 5 stars (5 stars: 2,042; 4 stars: 1,217; 3 stars: 668; 2 stars: 211; 1 star: 125)
  • Performance: 4 out of 5 stars (5 stars: 1,715; 4 stars: 1,038; 3 stars: 566; 2 stars: 227; 1 star: 139)
  • Story: 4 out of 5 stars (5 stars: 1,691; 4 stars: 1,026; 3 stars: 579; 2 stars: 226; 1 star: 124)

Reviews

  • Overall
    3 out of 5 stars
  • Performance
    2 out of 5 stars
  • Story
    4 out of 5 stars

Narration is horrible

Would you consider the audio edition of Superintelligence to be better than the print version?

No. This is not a novel, and the narrator acts the book out when I just want him to read it. He adds his own exclamation marks on words he thinks are important to highlight. The strong English accent (and I am British) makes the book feel highbrow in a bad way. Very annoying.

What was one of the most memorable moments of Superintelligence?

The owl fable at the beginning.

Would you be willing to try another one of Napoleon Ryan’s performances?

No

If you were to make a film of this book, what would the tag line be?

Terminator (oh wait that one has been made already)

4 people found this helpful

  • Overall
    5 out of 5 stars
  • Performance
    5 out of 5 stars
  • Story
    5 out of 5 stars

Mindblowing and easy to read

Worth a read/listen if the prospect of AI both excites and scares you. The book gives a thorough look at all the different ways we might go about developing AI, and what might happen if we do succeed.

1 person found this helpful

  • Overall
    3 out of 5 stars
  • Performance
    4 out of 5 stars
  • Story
    2 out of 5 stars

interesting topic, disappointing execution

I feel Bostrom uses too many words to make fairly obvious points. I also found a fair bit of redundancy in the material presented, at least in the first part (before I quit).

1 person found this helpful

  • Overall
    5 out of 5 stars
  • Performance
    4 out of 5 stars
  • Story
    5 out of 5 stars

A book everyone should read multiple times

This topic becomes more relevant with each passing day. Technology will keep marching on, to an end that can either free humans from daily toil or destroy everything that makes us human today. There are three states of being for us humans: those who make things happen, those who watch things happen, and those who wonder what happened. This book will elevate you to watching and make you more capable of taking action. I will be purchasing it in hard copy.

  • Overall
    5 out of 5 stars
  • Performance
    5 out of 5 stars
  • Story
    5 out of 5 stars

amazingly inspirational

Definitely a must-read on AI. But sometimes I think I would've been better off actually reading it instead of listening, since it's kind of a complicated subject matter. The narrator also has a strange accent (couldn't tell where from), but it kinda makes the book cooler somehow. Maybe you will like it, maybe you won't.

  • Overall
    5 out of 5 stars
  • Performance
    5 out of 5 stars
  • Story
    5 out of 5 stars

Crazy

Poses some shocking and critical philosophical questions, which makes it a very interesting book to read.

  • Overall
    4 out of 5 stars
  • Performance
    3 out of 5 stars
  • Story
    4 out of 5 stars

Good synthesis of analytic philosophy and computing

It's slightly challenging to summarize this book in a comprehensive manner. Part of the book is a fairly detailed analytic "philosophical" treatment of how one would endow superintelligent artificial agents with values that are congruent with "society", and of the adjacent challenge of controlling said agents. Other parts of the book focus on policy recommendations to governments and society on how we should guide AI research, specifically elaborating on scenarios of inter-company and inter-government competition with respect to developing artificial general intelligence (AGI). Included is a small historical prerequisite summary of the development of AI in the mid-20th century: first the attempts at symbolic reasoning, then "connectionism", or the focus on artificial neural networks, then the more recent paradigm of evolutionary/genetic algorithms, and of course now back to neural networks.

The book is dense with terminology, much of it new to me, although I get the feeling many of the ideas herein were novel only a few years ago. Bostrom's central thesis is that humanity is either a few years or a few decades away from developing an AGI that will be "AI-Complete", i.e. an agent that can solve problems solvable only by achieving human-level intelligence. From that moment, there will be a countdown to the inevitable development of what Bostrom calls "superintelligence": an AGI that is many orders of magnitude greater than a human mind in either speed or collective intelligence.

Bostrom spends many hours diving into the details of how he believes a superintelligent (SI) agent will be achieved, mostly centering on the "seed AI" hypothesis.

The notion of the seed AI is that as AGIs are added to the development process and research for AGI, they will decrease the development time per iterative cycle, accelerating the movement along the "progression curve" towards superintelligence. Once this initial "seed AI" is achieved, there will be an "intelligence explosion", making the production and development of super-intelligent agents relatively trivial.
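To make that feedback loop concrete, here is a minimal toy simulation of my own (it is not a model from the book, and every constant in it is a made-up illustration): each development cycle raises the system's capability, and the higher capability shortens the next cycle, so progress along the curve accelerates toward an arbitrary threshold.

```python
# Toy sketch of the recursive self-improvement loop described above.
# All constants (growth factor, speed-up exponent, threshold) are
# hypothetical illustrations, not figures from Bostrom's book.

def intelligence_explosion(capability=1.0, cycle_months=12.0, threshold=1000.0):
    """Count cycles and calendar months until capability crosses the threshold."""
    months, cycles = 0.0, 0
    while capability < threshold:
        months += cycle_months
        cycles += 1
        capability *= 1.5                   # each cycle improves the system
        cycle_months /= capability ** 0.25  # a more capable system shortens the next cycle
    return cycles, months

if __name__ == "__main__":
    cycles, months = intelligence_explosion()
    print(f"toy threshold crossed after {cycles} cycles and {months:.1f} months")
```

The only point of the sketch is the shape of the trajectory: almost all of the calendar time is spent in the early, slow cycles, while the final cycles fly by, which is the intuition behind the "explosion" framing.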

This seems like a similar idea to the "singularity" thesis of Kurzweil, but Bostrom's analysis is much more detailed. In fact, there is even a detailed treatment of multiple scenarios, analyzing the ramifications of multi-polar superintelligent agents, e.g. multiple nations developing more or less equivalent agents simultaneously, versus the "singleton" scenario, where superintelligence is developed first in one locality. There is even discussion of how a multi-party coordinated enterprise to develop SI would probably unfold, e.g. through the UN.

Bostrom does a bunch of back-of-the-envelope reckoning to suggest that the singleton scenario is the most likely and runs with this case for most of the book. All of this discussion he characterizes as the "kinetics of take-off".

The most interesting, rigorous, and speculative part of the book deals with the "control problem": ensuring the SI does not go beyond the control of humanity and accidentally (or intentionally) destroy humanity. This section utilizes formal tools from various subject matters, including principal-agent analysis, general utility analysis, and some machine learning, and suggests the issue could be a rich field for future mathematical/computational exploration.

According to Bostrom, developing SI requires the developer to contend with three issues simultaneously: 1. Perverse Instantiation, 2. the Instrumental Convergence Thesis, and 3. the Value Loading Problem.

Perverse Instantiation is the "genie problem" of human folklore and mythology, as in the story of Aladdin. If you ask the SI to accomplish something, say making a person happy, a perverse way for the SI to accomplish this task would be to connect invasive electrodes to the person's brain and stimulate segments of it to induce "happiness." The Instrumental Convergence Thesis is the hypothesis that there exists a kernel set of behaviours all intelligent agents will execute to survive (collect resources, set and achieve goals, etc.). The Value Loading Problem is the question of how the developer could ensure the goals of the SI are congruent with the values/goals of humanity, and, further, how one could even define an aggregate set of values of humanity to start with. The hypothesis that such an aggregate set of human values exists and could be extrapolated is the coherent extrapolated volition (CEV) hypothesis. If we assume that CEV exists, then we can move forward with searching some function space to characterize a solution to the Value Loading Problem, as a direct hash would be precluded by the curse of dimensionality of the action space.

This is all dealt with in fairly good detail, especially for a non-monograph, and I'll have to listen to this again and/or read the direct source research before I'm convinced I have a real understanding of the material. Basically, though, Bostrom is arguing that we should look to see if we can develop an SI agent that abides by humanity's goals/values and can reach said goals/final desired states in a manner that is congruent with those values, without direct programming/encoding by a developer. He calls this the "indirect normativity construction." It's basically Asimov's three laws of robotics.

I think the book is worthwhile for at least one read. Recently a group of prominent AI researchers, including Andrew Ng, have suggested that worrying about these issues is like "worrying about overpopulation on Mars." Basically, it will be an issue at some point, but not for quite a while, and there are significant engineering issues to deal with in the meantime. I don't exactly agree, as dealing with some of these issues could also help propel development in the bread-and-butter fields of machine learning, so I don't view the two activities as mutually exclusive.

That being said, there is a similarity between thinking about these issues and the analysis of nuclear war, only more abstract, as the SI does not yet exist. Several times I felt this material would make good fodder for sci-fi writing or fluff for DMing some futuristic RPG, and little else. However, the longer I've sat with the ideas, the more coherent they seem to become. I don't think Bostrom is B.S.-ing; there is a coherent rigour to his analysis.

To quote Bostrom, quoting his friend: "[...] A Fields Medal is a sign that the winner was capable of doing something important, but he (sic, they) didn't." The value of the discovery is not equal to the value of the information discovered, but rather the value of having that information earlier than otherwise. In this case, there is something here, but is it valuable now? Is this like abstract algebra, basically just expanding the dictionary of theorems and making the subject more onerous to study, yet providing little practical, immediately deployable apparatus? Or is this more similar to Heisenberg inventing matrix algebra ad hoc to characterize quantum mechanics, a just-in-time construction which yielded tremendous value?

I'm still up in the air on that question, but I do recommend this book. The reader is somewhat dry and the material is dense; you'll have to listen to it twice and take notes, but it's a potentially important subject matter.

  • Overall
    5 out of 5 stars
  • Performance
    4 out of 5 stars
  • Story
    5 out of 5 stars

Amazing book, but too complicated for audio.

I love this book. I first read it by listening on Audible, and it made a big impression on me. But I felt that I missed a lot of it, and that a lot of it didn't stick or sink in, because it's a little harder to stop the audio than it is to stop reading. So I bought it on Kindle and read it again, and that time got much more out of it. The book is wonderful, but it is much too rich, dense, and full of great ideas you'll want to go over slowly in your mind to work well as audio.

(Minor quibble: the narrator insists on pronouncing all acronyms letter by letter, which is not right; GOFAI should be pronounced "Goe-fai" and not "Gee Oh Eff Ey Ai.")

  • Overall
    5 out of 5 stars
  • Performance
    4 out of 5 stars
  • Story
    5 out of 5 stars

Incredibly important

This book outlines the arguments for deliberately considering our role in creating potentially harmful future existential risks. It's carefully and wonderfully written. Nick Bostrom goes to great lengths to put forth his arguments on the subject of potentially harmful future technology.

The narration is at times slow but honestly it ends up being a good thing since all of the ideas in this book deserve special consideration.

I bought the hardcover after completing the audio version because I think that this book is one of the most important works of philosophy of my generation.

  • Overall
    5 out of 5 stars
  • Performance
    5 out of 5 stars
  • Story
    5 out of 5 stars

Extremely important

Bostrom is the foremost authority on the existential threats of this, the most powerful technology ever imagined.
