One perhaps useful analogy for super-intelligence going wrong is corporations.
We create corporations to serve our ends. They can do things we cannot do as individuals. But in subtle and not-so-subtle ways, corporations can behave very destructively. One example is the pursuit of profit at the cost of, in some cases, ruining people's lives, damaging the environment, and corrupting the political process.
By analogy it seems plausible that super-intelligences may behave in a way that is against our interests.
It is not valid to assume that a super-intelligence will be smart enough to discern true human interests, or that it will be motivated to act on this knowledge.
Are you saying that no complex phenomenon is going to be able to provide only benefits and nothing but benefits, or are you saying that corporations are, on the balance, bad things and we would have been better to never have invented them?
No. Maybe it is possible. I am suggesting that it is not automatic that our creations serve our interests.
No. Saying something has harmful effects is not the same as saying that it is overall bad.
I am illustrating ways in which our creations can fail to serve our interests.
They do not have to be omniscient to be smarter in some respects than individual humans.
It is hard to control their actions and to make sure they actually serve our interests.
These effects can be subtle and difficult to understand.
But are corporations existential threats?