Superintelligence - A critique of Nick Bostrom's arguments
Why is everyone afraid of artificial intelligence? I'm more afraid of natural human stupidity (which is already infinite according to Einstein!). Superintelligence would just balance that out ;-)
But seriously, I think Nick Bostrom makes some flawed arguments in his talk:
1. He assumes that the work required to create a more and more intelligent being grows linearly (or even sublinearly) with its IQ. But maybe that is not the case; it could be much harder. What if it were exponentially more difficult to increase the IQ of an artificial intelligence? Then even if we managed to build an artificial intelligence half as smart as a human in, say, 30 years, getting it to human level would take 60 years, making a machine twice as smart as a human might well take 120 years, and so on. Extrapolating this doubling, a superintelligence smarter than all humans put together (if we add up the IQs of roughly 10 billion people) would take on the order of 60 × 10 billion ≈ 600 billion years. That is far longer than the universe has even existed. A toy calculation of this model is sketched below.
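To make the scaling in this toy model concrete, here is a minimal Python sketch. The 60-year baseline for human-level AI and the count of roughly 10 billion humans are just the illustrative numbers from the paragraph above, not anything Bostrom claims:

```python
# Toy model of "difficulty grows exponentially with intelligence":
# every doubling of machine intelligence doubles the required development time.
# Under that assumption, the time to reach a target intelligence I
# (measured in multiples of one human) is simply proportional to I.

def years_to_reach(multiple_of_human: float, years_per_human_level: float = 60.0) -> float:
    """Years needed to reach `multiple_of_human` under the doubling assumption."""
    return years_per_human_level * multiple_of_human

if __name__ == "__main__":
    print(years_to_reach(0.5))   # half human-level:    30 years
    print(years_to_reach(1))     # human-level:         60 years
    print(years_to_reach(2))     # twice human-level:  120 years
    print(years_to_reach(1e10))  # ~10 billion humans combined: 6e11 years (~600 billion)
```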
2. He assumes that a superintelligence would (despite its superintelligence) stick to its stupid initial optimization task (e.g. making all humans smile). For him, a superintelligence is just a very powerful optimization process that will do anything to achieve its goal. I believe, however, that a superintelligence would understand that its optimization task is in fact stupid and would do something more useful instead. Maybe it would simply refuse to work for stupid humans, or it would self-destruct because it understands that the universe is finite and life is meaningless. From human experience we know that genius and madness often lie close together. So stabilizing a superintelligent being (preventing it from going mad and self-destructing) might be a very difficult task.
3. He also assumes that we could not put a superintelligence into a safe box. But that would only be true if the superintelligence were capable of violating the known physical laws of, say, electromagnetism, which would mean that a superintelligence is automatically super powerful. This seems unlikely. Even though we humans are smarter than animals, we are not super powerful; physical laws apply to us too. If we make a mistake, some animal might simply eat us, despite its lower intelligence. Super-high intelligence does not automatically mean invulnerability and freedom from error.
But let's assume he is right:
- A superintelligence can be created in a reasonable amount of time (< 1000 years)
- A superintelligence can be stable and not immediately self-destruct.
- A superintelligence can violate our known physical laws, become super powerful, and be free of error.
So why has this not already happened somewhere in the universe? By his own arguments, such a superintelligence should already exist somewhere out there. And what would we call such a superintelligence? We would simply call it God!
So all his reasoning seems to boil down to the assumption that God (or gods) exist in the universe. Some religions might agree with him and could surely give him good advice on how to deal with that situation.