I’m not making a general argument. SIAI makes a specific argument, that humans of present-day intelligence will inevitably construct an AI, and this AI will almost inevitably cause infinite negative utility by our values. If you believe that argument, then increasing intelligence decreases expected utility, QED.
Not QED—you just tripped over Simpson’s paradox. Higher intelligence could yield a higher chance of a positive AI outcome rather than a negative AI outcome.
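The point being made here can be sketched numerically. The following is an illustrative toy model (all probabilities and utilities are made-up numbers, not from the thread): if raising intelligence raises both the chance an AI gets built *and* the chance that, conditional on being built, it is aligned with our values, then expected utility can go up even though the unconditional risk of building an AI goes up.

```python
# Toy expected-utility decomposition (illustrative numbers only):
# E[U] = P(build) * (P(good | build) * U_good + P(bad | build) * U_bad)

def expected_utility(p_build, p_good_given_build, u_good=100, u_bad=-100):
    """Expected utility of the AI lottery under the decomposition above."""
    return p_build * (p_good_given_build * u_good
                      + (1 - p_good_given_build) * u_bad)

# Lower intelligence: AI unlikely to be built, and likely botched if it is.
low = expected_utility(p_build=0.2, p_good_given_build=0.1)    # -> -16.0

# Higher intelligence: AI more likely to be built, but far more likely
# to go well conditional on being built.
high = expected_utility(p_build=0.9, p_good_given_build=0.7)   # -> 36.0

print(low, high)  # higher intelligence yields higher expected utility here
```

With these (arbitrary) numbers, the smarter population is both more likely to build an AI and better off in expectation, which is exactly the possibility the reply is raising against the "increasing intelligence decreases expected utility" conclusion.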
This is an interesting point. But I think that a small lowering of human intelligence, say shifting the entire curve down by 20 points, would prevent us from ever developing AI. So at a point epsilon below where human intelligence is now, an increase in intelligence increases the risk from AI.
Hum. Well, it depends on our starting point, right? We seem unlikely to be below the threshold where we couldn't build any sort of AI at all, so we had better be on top of our game.
“Intelligence of what?” is an important question that you are eliding. Increasing AI intelligence when the AI doesn’t share our values (i.e. uFAI) decreases utility among those who share our values. That doesn’t say anything about increasing intelligence of entities that do share our values.