Another nightmare scenario that is constantly harped upon is the (theoretically super-intelligent) consciousness that shortsightedly optimizes one of its personal goals above all the goals of humanity. In game-theoretic terms, this is trading a positive-sum game of potentially infinite length and value for a relatively modest (in comparative terms) short-term gain. A wisdom won’t do this.
There’s nothing “shortsighted” about it. So what if there are billions of humans and each has many goals? The superintelligence does not care. So what if, once the superintelligence tiles the universe with the thing of its choice, there’s nothing left to be achieved? It does not care. Disvaluing stagnation, caring about others’ goals being achieved, etcetera are not things you can expect from an arbitrary mind.
A genuine paperclip maximizer really does want the maximum number of possible paperclips the universe can sustain to exist and continue to exist forever at the expense of anything else. If it’s smart enough that it can get whatever it wants without having to compromise with other agents, that’s what it will get, and it’s not being stupid in so doing. It’s very effective. It’s just unFriendly.
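A minimal sketch of that point, with invented toy state variables (the `paperclips` and `human_goals_satisfied` fields are illustrative, not from the original exchange): a utility function with a single term contains nothing a compromise could appeal to, so refusing the trade is effective optimization rather than an error.

```python
# Hypothetical sketch: a single-goal utility function with no term for anything else.
# The state fields below are illustrative assumptions, not from the source discussion.

def paperclip_utility(state: dict) -> float:
    # The agent's entire preference ordering: more paperclips is strictly better,
    # regardless of what happens to any other quantity in the state.
    return state["paperclips"]

tile_the_universe = {"paperclips": 10**40, "human_goals_satisfied": 0}
compromise        = {"paperclips": 10**39, "human_goals_satisfied": 10**10}

# Under this utility function the "tile the universe" outcome strictly dominates,
# because the billions of satisfied human goals contribute exactly nothing to it.
assert paperclip_utility(tile_the_universe) > paperclip_utility(compromise)
```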
If it’s smart enough that it can get whatever it wants without having to compromise with other agents
Hm. Are you thinking in terms of an ensemble or a single universe? In an ensemble it seems that any agent would likely be predictable by a more powerful agent, and thus would probably have to trade with that agent to best achieve its goals.
The paperclip maximizer (and other single-goal intelligences) was handled two paragraphs above what you quoted; that is the reason for the phrasing "another nightmare scenario." You are raising the strawman that my solution for class II is not relevant to class I. I agree: that solution is only valid for class II. Making it look like I claimed that a paperclip maximizer is being stupid is putting words in my mouth. A single-goal intelligence does not care about the long term at all. On the other hand, the more goals an entity has, the more it cares about the long term, and the more taking the short-term gain over the long-term positive-sum game runs contrary to its own goals.
Whether an AI cares about "long-term" effects is not related to the complexity of its goals. A paperclip maximizer variant might want to maximize the length of time that a single paperclip continues to exist, pushing all available resources toward getting as close to the physical limits as it can manage.
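A hedged sketch of that distinction, again using invented toy trajectories rather than anything from the source: both agents below have exactly one goal, yet they differ completely in how much of the future their utility function looks at.

```python
# Hypothetical sketch: two single-goal agents, one myopic and one long-horizon.
# A trajectory is just a list of toy states; the fields are illustrative assumptions.

def myopic_paperclip_utility(trajectory: list[dict]) -> float:
    # Cares only about the paperclip count at the first time step.
    return trajectory[0]["paperclips"]

def longevity_paperclip_utility(trajectory: list[dict]) -> float:
    # Cares only about how long at least one paperclip continues to exist.
    return sum(1 for state in trajectory if state["paperclips"] >= 1)

# One paperclip kept intact for a very long time...
durable = [{"paperclips": 1}] * 1000
# ...versus a brief glut of paperclips that soon decay.
glut = [{"paperclips": 100}] + [{"paperclips": 0}] * 999

# Each single-goal agent prefers a different trajectory: goal complexity is the
# same (one term each), but the time horizon they care about is not.
assert myopic_paperclip_utility(glut) > myopic_paperclip_utility(durable)
assert longevity_paperclip_utility(durable) > longevity_paperclip_utility(glut)
```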