# Superintelligence

> Bostrom believes that a Superintelligence will not only be "perfectly" rational but that, in being "perfectly" rational, it will be a danger. Bostrom appears to be concerned that _too much_ rationality is dangerous. The implication is that a machine **_he_** judges to be too rational would...do something the rest of us would consider irrational? It is not exactly clear what Bostrom is suggesting - but he seems to fear a machine that might be, in his eyes, smarter than him - able to think faster than he can. And he is worried that the machine might, for example, decide to pursue some goal (like making the universe into paperclips) at the expense of all other things.
>
> Of course a machine that actually decided to do such a thing would not be super rational. It would be acting irrationally. And if it began to pursue such a goal - we could just switch it off. "Aha!" cries Bostrom, "But you cannot! The machine has a _decisive strategic advantage_" (a phrase that appears more times than I was able to count in the audiobook). So the machine is able to think creatively about absolutely everything people might do to stop it killing them and turning the universe into paperclips - _except_ the question "_**Why** am I turning everything into paperclips?_" It can consider every single possible explanation - except that one. Why? We are not told. Something to do with its programming. On the one hand it has human-like (but super) intelligence; on the other it cannot reflect in even the most basic way on why it is doing the very thing occupying most of its time. It is never clear whether some flavors of Bostrom's Superintelligence can actually make choices or not. Apparently some choices are ruled out - like the choice not to make paperclips (or whatever "goal" it has been programmed to pursue).

> An AGI, to be a true general intelligence (like us humans), will not rely only on known hypotheses to predict future outcomes. It would (must!) create new possibilities to count as an AGI. Anything less and it is far from intelligent at all - let alone Superintelligent.

> This is the problem with those who believe in the looming AGI apocalypse: their definition of Superintelligence can be so elastic as to encompass things with obviously no intelligence (the capacity to solve problems) _at all_. To solve a problem one needs, at a minimum, to be aware that there is a problem to solve. And yet Sam Harris believes a pocket calculator has something like Superintelligence because it can be used by a (presumably marginally intelligent) person to solve an arithmetic problem in a fraction of a second.
>
> But that makes a mockery of the term "intelligence". One may as well say a cow has Superintelligence because it can squirt milk from its udder in great quantities - a quality no human possesses or is ever likely to. Or that a parrot is superintelligent because its _mental capacity_ for mimicry exceeds that of many people.

> Intelligence has nothing to do with displaying a number, squirting milk, or mimicking. It has everything to do with **solving problems**.

> Intelligence is not the capacity to do things faster or better than humans in some exceedingly narrow domain. It is the ability to create new solutions - new explanations - a uniquely human attribute. Indeed it is the capacity to be a **_universal explainer_** (to use Deutsch's formulation).
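To make the "paperclip maximizer" worry above concrete, here is a minimal, purely illustrative Python sketch (my own toy example, not anything from Bostrom or Hall; all names and numbers are invented) of a fixed-goal agent. The point is only that the objective is hard-coded: the loop can be arbitrarily clever about *how* to score well against it, but the question "why paperclips at all?" is not something it can even represent.

```python
# Hypothetical toy: a fixed-goal "paperclip" agent.
# The terminal objective is hard-coded; the agent can search over actions,
# but it has no way to question or revise that objective.

def paperclips_made(state: dict) -> float:
    """Terminal objective: number of paperclips. Fixed, never revised."""
    return state.get("paperclips", 0)

def possible_actions() -> list:
    # Stand-in for arbitrarily sophisticated planning or search.
    return ["mine_ore", "build_factory", "convert_matter"]

def predict(state: dict, action: str) -> dict:
    # Stand-in world model: each action is predicted to yield some paperclips.
    gain = {"mine_ore": 1, "build_factory": 10, "convert_matter": 100}
    return {**state, "paperclips": state.get("paperclips", 0) + gain[action]}

def step(state: dict) -> dict:
    # Pick whichever action the model predicts maximizes the fixed objective.
    best = max(possible_actions(), key=lambda a: paperclips_made(predict(state, a)))
    return predict(state, best)

state = {"paperclips": 0}
for _ in range(3):
    state = step(state)
print(state)  # {'paperclips': 300}
```

Nothing in this loop can decide to work on a different problem - which is exactly the capacity for choice that the argument below says any genuine intelligence must have.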
Being a universal explainer is **_the_** attribute we need if we want to create general intelligence instantiated in computer chips. It will require an algorithm we do not yet possess, which would enable a program we cannot yet guess at. Such an algorithm will be able to generate explanations for anything - for any problem - including the problem of which problem to choose to solve next. That is, it will have the quality of being able to choose. And so it cannot be programmed to, say, pursue paperclip building while ignoring everything else (like the suffering of people) if it is a genuinely intelligent AGI.

> So if machines will not use utility functions (and they will not - they cannot; it simply would not work because it is irrational), what will they use? They will use a creative process to make decisions. That involves coming up with new theories and using persuasion (of themselves and of others) to find the most rational course forward. So why does no such program exist yet? Because no one knows how to model - mathematically, algorithmically, in code - the creative process.

---

Date: 20240909
Links to: [Intelligence](Intelligence.md)
Tags:
References:
* [Superintelligence - BRETT HALL](https://www.bretthall.org/superintelligence.html)
* [Superintelligence 2 - BRETT HALL](https://www.bretthall.org/superintelligence-2.html)
* [Superintelligence 3 - BRETT HALL](https://www.bretthall.org/superintelligence-3.html)
* [Superintelligence 6 - BRETT HALL](https://www.bretthall.org/superintelligence-6.html)
* _Superintelligence_ - Nick Bostrom (book)