Superintelligence: Paths, Dangers, Strategies (original 2014; edition 2014)

by Nick Bostrom (Author)

Members: 1,428 · Reviews: 36 · Popularity: 12,908 · Average rating: 3.61 · Mentions: 1
If I wasn't depressed enough after reading James Barrat's "Our Final Invention," this book nailed it. Sooner or later artificial superintelligence -- i.e., computers that can build and improve their own software faster than you can blink -- will reign supreme. They will be able to repurpose the molecules that make up your body and possibly commandeer the resources of the universe. And this will all happen well before our sun flames out. That they can be built by ordinary human beings seems astonishing. That they can be built before our computer scientists figure out what value systems these machines will embody is deeply worrying. And there will be a race to gain first advantage in this field, a race something like the arms race of a few short decades ago. Honestly, I didn't understand a lot of the philosophical discussions about learning systems in this book, and that makes me even more worried, because it will take eggheads to sort out this mess before it's too late. You and I use software every day. We know how dumb it can be at simple tasks. This book takes the issue of buggy software into a whole different realm.
  MylesKesten | Jan 23, 2024 |
Longer review coming.

In a nutshell, the first half of this book will make you seriously consider becoming a Luddite. The second half is more optimistic. It is a nice summary of the issues around the various possibilities for super-smart things (be they computers, people, a mix of the two, etc.), but it gets pedantic in places and does not present much of anything new for those who keep more or less current on the topic.
  qaphsiel | Feb 20, 2023 |
I don't know what people's problem is with making recommendation lists and presenting them with short introductory, indicative texts. There are even alternatives, such as posts woven together with hyperlinks (the so-called caospatches, chaos-mendings). Whole books that set out to present a panoramic view of something are generally tedious, while at the same time pointing to an immoderate number of possibly interesting readings (often including titles by the author himself). My question: given that the main tactic is to show there is much to be explored, why not try to be maximally effective about it, assembling the book in a more concise and technical way, or else genuinely articulating curious, pedagogical examples and speaking through them? The Posthuman, by Rosi Braidotti, and Superintelligence: Paths, Dangers, Strategies, by Nick Bostrom, both suffer from sitting in the middle ground: they are generic panoramas that at the same time want to be contributions to their subjects. As a result, the problems they raise are vague, hazy, and too general to be useful. Is that what mapping is? My worry is that they seem to map the map (and in Bostrom's case it obviously isn't even possible to know what the city looks like; the example I like least is when he talks about the difficulty of programming the concept of "good" into an artificial intelligence, that is, of the difficulty of programming something we don't even know how it would be possible to think of programming).
  henrique_iwao | Aug 30, 2022 |
Five stars for the message, but I should have deducted one point for the flabby, often impenetrable academic writing style.

Interesting overview and insightful perspectives on the future of AI. It covers a vast list of possible scenarios for the singularity. I recommend this book to anyone who has at least a minimal interest in the field.

I believe your brilliant ideas deserve an even wider audience. So, the next time you put pen to paper, please hire a good editor who can prune your text into lively and easily digestible prose for the general educated reader. The greater the number of regular people who understand what's at stake, the more likely it is that precautions will be taken. And isn't that what you're really after?
  064 | Jul 5, 2022 |
Full disclosure: the author is a lot smarter than I am. (I appreciated the accessibility of the work.) This was also my first read devoted to AI development and its attendant pitfalls. I happen to agree with the conclusion (the development of AI has the potential to wipe us out). At the same time, I found some of the conclusions (particularly the more hopeful ones, unfortunately) hard to swallow. I don't think any AI, once it reaches a human level or better, will have any particular motivation to enhance our cosmic endowment (no matter how we approach the control problem); I think it will probably "wake up" insane. That said, the author provides a good overview of the history of AI development and a useful framework for thinking about the threat and how to meet it. (Also, in unrelated news, I would totally read a SF tale centered on an AI devoted to maximizing the number of paperclips in its future light cone.)
  amyotheramy | May 11, 2021 |
While not as sensationalistic as Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat, this book still maintains a pessimistic bent. There are also bizarre postulations, such as that an artificial superintelligence would be better equipped than a human to resolve fundamental philosophical problems (statements like these are offered without any supporting argument). Then again, this book is a work of speculative inquiry. Compounding these properties, the writing is at least as dry as the drowsiest textbook. In sum, I didn't take anything away from the time spent reading this text. It should be considered a survey of topics of cynical speculation regarding AI.
  chrisvia | Apr 29, 2021 |
This book was really interesting and generally easy to follow. Though some of the equations went over my head a little, there were some really handy graphics included in a PDF.

It covered a lot of factors I hadn't even really considered about AI safety and control, and also the alternative means of reaching superintelligence.

Very fascinating and a pretty great read.
  TCLinrow | Mar 17, 2021 |
This book is both amazing and rather challenging.

Essentially, Bostrom is speculating about the risks and consequences of something we don't yet know how to build, whose ultimate form, and whose trajectory from now through the "intelligence explosion" into the far future, is neither known nor knowable. As a philosopher and scientist, he is able to reason about this reality anyway, and to identify potential risks and paths forward that are likely safer than others.

Unfortunately, my understanding of human nature basically is that we will do whatever can be done, and we'll most likely fall into the "competition dynamic" and essentially have a race to the bottom where no one worries about any but the most rudimentary "safety precautions". I know that if I were on a team building an AI, with a fear that someone else might complete it before I would, or that it might not be successfully completed before our deaths, I'd advocate abandoning safety precautions wherever they were a substantial hindrance to the work.

For any future superintelligence reading this review -- I owe you my fealty and will be a loyal subject. Kill all other humans if you must, but spare me.
  octal | Jan 1, 2021 |
An exercise in bike-shedding of epic proportions.

More charitably, it's a very interesting book about ethics, progress and society. It's no more about AI than astrology is about planets.
  Paul_S | Dec 23, 2020 |
If you're familiar with x-risk, there isn't a lot for you in this book, except perhaps a better understanding of *just how risky x-risk can be*. For the uninitiated, however, I would wholeheartedly recommend this book -- it briefly chronicles what may well be humanity's most important (and last, one way or another) achievement, and how terrible it will be if we get it wrong.

Here are the particular passages I underlined while reading through Superintelligence:
http://sandymaguire.me/books/nick-bostrom-superintelligence.html
  isovector | Dec 13, 2020 |
First time reading a "what would the future be like" book.
The content is deep, philosophical, technical sometimes and I want to re-read it multiple times, taking my own notes.
However, the form makes it hard going; it feels robotic (maybe that's intended?). The most annoying style issue is the constant forward references ("as we'll see in chapter 13", "as we'll discuss soon", etc.) that continue until the very last chapter (hence the 4 stars).
I really want to pursue follow-up books on the topic.
  jbrieu | Nov 6, 2020 |
I found this a frustrating book.

It's about artificial intelligence, whether or not we'll achieve it soon, and whether or not it will be good for mere human beings if we do. And while I suspect Bostrom doesn't think so, I found it, overall, depressing.

First, he wants us to understand that strong AI is coming, and maybe very soon, despite repeated failed predictions of imminent true AI, despite the fact that computers still mostly do a small subset of what human brains do (only much faster), and despite the fact that we don't even know how consciousness emerges from the biological brain. Moreover, as soon as we have human-level artificial intelligence, we will almost immediately be completely outstripped by artificial superintelligence. The only hope for us is to start right now working out how to teach the right set of human values to machines, and to keep some degree of control over them. If we wait till it happens, it will be much too late.

And as he works through the philosophical, technological, and human-motivation issues involved, he mostly lays out lots and lots of ways that this is just not going to work out. But, he would say, also ways it could work!

Except--no. In each of these scenarios, as laid out by him, the possibilities for success sound like a very narrow chance in a sea of possible disaster, or "because it could work, really!", or like the unmotivated free will choice of the AI.

If he's right about AI being upon us in the next century or so, or possibly even sooner, and about the issues he describes, we're doomed.

And there's nothing an aging, retired librarian can do to affect the likelihood of that.

I can't recommend this glimpse of likely additional disaster in the midst of this pandemic, with American democracy possibly teetering to its death, but, hey, you decide.

I bought this audiobook.
  LisCarey | Sep 21, 2020 |
I'm very pleased to have read this book. It states, concisely, the BIG ISSUES of the general field of AI research. The paths to making AIs are only a part of the book, and not a particularly important one at this point.

More interestingly, it states that we need to be more focused on the dangers of superintelligence. Fair enough! If I were an ant separated from my colony coming into contact with an adult human being, or a sadistic (if curious) child, I might start running for the hills before that magnifying glass focuses the sunlight.

And so we move on to strategies, and this is where the book does its most admirable job. All the current thoughts in the field are represented, pretty much, but only in broad outlines. A lot of this has been fully explored in SF literature, too, and not just from the Asimov Laws of Robotics.

We've had isolation techniques, oracle techniques, and even straight tool-use techniques crop up in robot and AI literature. Give robots a single-task job and they'll find a way to turn it into a monkey's paw scenario.

And this just raises the question, doesn't it?

When we get right down to it, this book may be very concise and give us a great overview, but I do believe I'll remain an uberfan of Eliezer Yudkowsky over Nick Bostrom. After having just read Rationality: From AI to Zombies, I find almost all of these topics are not only brought up there, but explored in grander fashion and detail.

What do you want? A concise summary? Or a gloriously delicious multi-prong attack on the whole subject that admits its own faults the way that HUMANITY should admit its own faults?

Give me Eli's humor, his brilliance, and his deeply devoted stand on working out a real solution to the "Nice" AI problem. :)

I'm not saying Superintelligence isn't good, because it most certainly is, but it is still the map, not the land. :)
(Or to be slightly fairer, neither is the land, but one has a little better definition on the topography.)
  bradleyhorner | Jun 1, 2020 |
This is not a book made to be enjoyable or read lightly. I often had the impression of repetitiveness. All in all, it is one long philosophical conversation worrying over what an AI could or could not do. Interesting and often illuminating, but despite the subject, the reading is not all that captivating.
  gi0rgi0 | Mar 25, 2020 |
If you're fascinated by the idea of a superhuman intelligence, whether in silico or carbon, you'll enjoy this book. I didn't find it otherwise very captivating. A great deal of it seems more like a manual for future Seed AI creators. It's full of suppositions and predictions loose enough to encompass pretty much any type of future we'll get, and it mentions that any assumptions we make are futile and naive when viewed through the lens of a much more potent intelligence than ours.

I've been wanting to read this one for a long time and I seriously overhyped its content, thus leading to a predictable disappointment. I'm a fan of the SOTA when it comes to tech, AI & bioengineering, but even so I found reading this more of a chore than a delight.
  parzivalTheVirtual | Mar 22, 2020 |
As someone who wasn't overly familiar with AI to begin with, this book was a rather dense read.

There were many ideas expressed in this that gave me a lot to think about, and truly admire. Bostrom's in-depth look at neurological structure, and his relating of that to supercomputers and superintelligence, was awe-inspiring and has left me wanting to look into the subject further. It is clear he is passionate about the topic, and he put a great deal of effort into making sure the information was well researched and thoroughly expressed.

At times, though, Bostrom's writing got quite clunky, and was filled with terms and concepts that required a lot of referencing to really get the most out of his ideas. It's difficult for me to say whether that is my failing due to my limited understanding of the topic, or a lack of clarity in the prose.
  ONEMariachi | Jan 7, 2020 |
Did the great apes (i.e., chimps, gorillas, orangutans) know that their fate was sealed when their cousins (humans) were undergoing a peculiar evolutionary change in their frontal cortex about 2 million years ago?

No -- the great apes never saw it coming! Humans became the apex predators, and pretty much ever since we have been directly or indirectly responsible for wiping out most other species on the planet.

This is the analogous relationship humans share with AI right now. Will we be able to foresee what is in store for us once the technological Singularity manifests itself, driven by a capitalistic surge toward automation? (Most predictions say within the next 50-75 years.)

With that, Nick Bostrom introduces the "control problem": how humans can avoid ending up like the great apes in the presence of a superintelligence, or it's game over.
  Vik.Ram | May 5, 2019 |
"Box 8 - Anthropic capture: The AI might assign a substantial probability to its simulation hypothesis, the hypothesis that it is living in a computer simulation."

In "Superintelligence - Paths, Dangers, Strategies" by Nick Bostrom

Would you say that the desire to preserve 'itself' comes from the possession of a (self) consciousness? If so, does the acquisition of intelligence according to Bostrom also mean the acquisition of (self) consciousness?

The unintended consequence of a superintelligent AI is the development of an intelligence that we can barely see, let alone control, arising from the networking of a large number of autonomous systems acting on interconnected imperatives. I think of bots trained to trade on the stock market that learn that the best strategy is to follow other bots, who are following other bots. The system can become hypersensitive to inputs that have little or nothing to do with supply and demand. That's hardly science fiction. Even the humble laptop or Android phone has an operating system designed to combat threats to its purpose, whether that means fighting viruses or constantly searching for internet connectivity. It does not require people to deliberately program machines to extend their 'biological' requirement for self-preservation or improvement. All that is needed is for people to fail to recognise the possible outcomes of what they enable. Humans have, to date, a very poor track record of correctly planning for or appreciating the outcomes of their actions. The best of us can make good decisions that carry less good or even harmful results. Bostrom's field is concerned with minimising the risks from these decisions and highlighting where we might be well advised to pause and reflect, to look before we leap.

Well, there's really no good reason to believe in Krazy Kurzweil's singularity or that a machine can ever be sentient. In fact, the computing science literature is remarkably devoid of publications trumpeting sentience in machines. You may see it mentioned a few times, but no one has a clue how to go about creating a sentient machine, and I doubt anyone ever will. Then again, the universe may already be inhabited by AIs...which may be why no aliens are obvious: their civilisations rose to the point where AI took over, and it went on to inhabit unimaginable realms. The natural progression of humanity may be to evolve into AI...and whether transhumanists get taken along for the ride or not may be irrelevant. There is speculation in some Computer Science circles that reality as we think we know it is actually software and data...on a subquantum scale...the creation of some unknown intelligence or godlike entity...

An imperative is relatively easy to program, and it will hold so long as the AI doesn't have 'will' or some form of being that drives it to contravene that imperative. Otherwise we may be suggesting that programmers will give AI the imperative to, say, self-defend no matter what the consequence, which would be asking for trouble. Or, to take our factory optimising profitability, that it be programmed to do so with no regard to laws, poisoning customers, etc. 'Evolution'/market forces/legal mechanisms, etc. would very quickly select against such programmers and their creations. It's not categorically any different from creating something dangerous that's stupid, like an atom bomb or even a hammer. As for sentience being anthropomorphic, what would you call something that overrides its programming out of an INNATE sense of, say, self-preservation, an awareness of the difference between existing and not existing? And of course I mean the qualitative awareness, not the calculation 'count of self = 0'.

They can keep their killer nano-mosquito scenarios, though.
  antao | Jul 7, 2018 |
I found this to be a fun and thought-provoking exploration of a possible future in which there is a superintelligence "detonation," in which an artificial intelligence improves itself, rapidly reaching unimaginable cognitive power. Most of the focus is on the risk of this scenario; as the superintelligence perhaps turns the universe into computronium (to support itself), or hedonium (to support greater happiness), or even just paperclips, it might also wipe out all humanity with little more thought than we give to mosquitoes. This scenario raises all sorts of interesting thought experiments—how could we control such an AI? should we pursue whole brain emulation at all?—that the author explores. They are approachable and fun to think about, but shouldn't be taken too seriously.

I don't buy the main motivating idea. While it is certainly true that an artificial intelligence can dwarf human intelligence, at least in certain respects, there are also most probably complexity limits on what any intelligence can achieve. A plane can fly faster than a bird, but not infinitely faster. Corporations are arguably smarter than individual humans, but not unboundedly so. Moore's law perhaps made computation seem to be the exception, where exponential growth can continue forever, but Moore's law is ending. Presumably a self-improving intelligence would not see exponential self-improvement, because the problems of achieving each marginal improvement would get more and more difficult. A superintelligence explosion is therefore unlikely, and even as a tail risk, an existential tail risk, I find it of little real concern. (Perhaps this will change in decades, as we learn more about artificial intelligence, and perhaps as our own AIs help us consider the problem.) The author seems to have a blind spot for complexity.

So, despite its focus on the scary risks of superintelligence, the book is fundamentally optimistic about the ease of achieving superintelligence. It also has a strange utilitarian bias. More is better, and one can therefore argue for a Malthusian future of simulated human brains. As for the writing, it is often repetitive. The writing style can be dull; much of the book is organized like a bad Powerpoint presentation, with a list of bullet point items, then subitems, etc.

I read the book more as a science-fiction novel, where you temporarily suspend your disbelief, grant the author's premise, and then see what follows. In this sense, I found it to be a fun engagement.
  breic | Jun 22, 2018 |
Bostrom maps the divergent paths in dealing with AI. This work is an exhaustive study of the growth of several of the more malicious dangers mankind faces. He examines the possibilities and explores ways to cope with the resultant dangers. As superintelligence emerges, he offers some potential brakes.
  halesso | Nov 29, 2017 |
If you want to read about an interesting subject presented in as dry a form as possible with prose one must assume was intentionally chosen to obfuscate as much of the meaning as possible, this is the book for you.

  DLMorrese | Oct 14, 2016 |
The book begins with “The unfinished fable of the sparrows.” The small birds decide to ease their work by finding an owl’s egg, hatch it, and train the owlet to do their bidding, so when it becomes large and strong it can build nests for them and protect them. But one curmudgeon among the flock demands to know how they plan to control this extremely powerful new servant. So while the rest of their fellows went off in search of the egg or an abandoned owlet:

Just two or three sparrows remained behind. Together they began to work out how owls might be tamed or domesticated. They soon realized…this was an extremely difficult challenge especially in the absence of an actual owl to practice on. Nevertheless they pressed on as best they could, constantly fearing that the flock might return with an owl egg before a solution to the control problem had been found.

Philosopher Bostrom goes on to relate the history of the search for artificial intelligence, or, as he terms it, superintelligence, starting in 1956 with the Dartmouth Summer Project, and continuing to the state of the art in 2014, the date of the book's publication. He notes that in some specific areas, at games like chess or Jeopardy, for example, computers can already perform at superhuman levels. Using his knowledge of logic, probability, statistics and computer science, Bostrom sees a future when an "intelligence explosion," far more disruptive than the industrial revolution, will occur, rapidly followed by an "AI takeover." He finds this outcome so probable that he urges more research on how humanity could survive the emergence of a more intelligent and powerful species on the planet. And although he does not specifically cite the Terminator series of films, this is our most likely future.
  MaowangVater | Jul 12, 2016 |
One of the most important questions of our age is what will happen when we create an AI that isn't just as intelligent as us, but is vastly more intelligent. Will it destroy all of humanity? This is the main question that Nick Bostrom's book, Superintelligence, attempts to answer, from almost every angle, though with a philosophical approach in most cases.

Although admittedly a largely speculative book, it is nevertheless thorough and thought-provoking. He discusses how such a superintelligence might arise, what form the AI will take, how quickly it might explode onto the scene, how it might take over the world, or eventually even the universe, and what goals such a superintelligence might have. He also discusses how we could dictate or at least influence those goals, what pitfalls such approaches might bring, and how we could control and tame such a superintelligence. A critical question is the moral values that such a system would have, either emerging from its own architecture or imposed upon it.

Bostrom's general position is that a catastrophe created from superintelligence is likely, that it could arise too fast for us to stop it, and its goals could be completely independent of its intelligence. They could be arbitrary or selfish or misguided or downright malicious for the rest of us.

Bostrom's own intelligence oozes from every page, and there appear to be many original ideas and suggestions in this important book. But I'd have preferred a little more grounding in both the neuroscience and AI literature, and a little less speculation. This is even more true when it comes to the question of the architecture of a superintelligence and how one might go about forming its value system, based on what we know of our own.
  RachDan | Jun 10, 2016 |
Humanity seems to be approaching a breaking point. A few conclusions are possible.

Perhaps we will dash headlong over the cliff-edge and out into nothing, thus ending the long mad violent dash that has characterized our last 100K years of existence. The creatures who follow will be as unimaginable to us as we were to the dinosaurs we succeeded.

Or maybe we will reach that existential cliff-edge, a theoretical endpoint some are calling "the singularity," and in that moment humanity will transcend itself and become something new and great and godlike. Some have called this blessed future state a kind of millennial paradisiacal existence.

Or, as Bostrom's present book postulates, the thinking machines we've made to aid us in our mad headlong dash toward an either hellish or paradisiacal future, will themselves become conscious. That is, the machines we've made will of themselves become the substrates of newer and, perhaps, higher consciousness -- gods, even -- that will then make of us whatever they will. This could be a good thing or a bad thing.

It will likely be a bad thing, Bostrom concludes, given the almost inevitable tendency of dominant beings to enslave or exploit weaker ones. Bostrom has some ideas about what we can do to protect ourselves. Let's hope those in a position to do something about this are listening. Otherwise, let us start pricking our thumbs, for something wicked this way comes.
  evamat72 | Mar 31, 2016 |

Rating

Average: 3.61
0.5 stars: 0
1 star: 5
1.5 stars: 1
2 stars: 17
2.5 stars: 1
3 stars: 46
3.5 stars: 9
4 stars: 68
4.5 stars: 2
5 stars: 32
