Bringing life to the stars seems a worthy goal, but if we could achieve it by building an AI that wipes out humanity as step 0 (they’re too resource intensive), shouldn’t we do that? Say the AI awakes, figures out that the probability of intelligence given life is very high, but that the probability of life staying around given the destructive tendencies of human intelligence is not so good. Call it an ecofascist AI if you want. Wouldn’t that be desirable iff the probabilities are as stated?
As a human, I find solutions that destroy all humans to be less than ideal. I’d prefer a solution that curbs our “destructive tendencies”, instead.
But is there a rational argument for that? Because on a gut level, I just don’t like humans all that much.
I think you’re wrong about your own preferences. In particular, can you think of any specific humans that you like? Surely the value of humanity is at least the value of those people.
Then there may, indeed, be no rational argument (or any argument) that will convince you; a fundamental disagreement on values is not a question of rationality. If the disagreement is sufficiently large—the canonical example around here being the paperclip maximiser—then it may be impossible to settle it outside of force. Now, since you are not claiming to be a clippy (what happened to Clippy, anyway?), you are presumably human, at least genetically, so you’ll forgive me if I suspect a certain amount of signalling in your misanthropic statements. So your real disagreement with LW thought may not be so large as to require force. How about if we just set aside a planet for you, and the rest of us spread out into the universe, promising not to bother you in the future?