The paperclip maximizer (and other single-goal intelligences) was handled two paragraphs above what you quoted; that is the reason for the phrasing “another nightmare scenario.” You are raising a strawman by saying that my solution for class II is not relevant to class I. I agree: that solution is only valid for class II. Making it look like I claimed a paperclip maximizer is being stupid is putting words in my mouth. A single-goal intelligence does not care about the long term at all. On the other hand, the more goals an entity has, the more it cares about the long term, and the more contrary to its own goals it becomes to take a short-term gain over the long-term positive-sum game.
Whether an AI cares about “long-term” effects is unrelated to the complexity of its goals. A paperclip maximizer variant might want to maximize the maximum time that a single paperclip continues to exist, pushing all available resources toward getting as close to the physical limits as it can manage.