Decline to answer, both because of the conjunction in the options (I have to agree with the reasoning, not just the prediction), and because it’s an impossible situation: I can’t figure out what ELSE is different about the world that would change my behavior.
Don’t lean too heavily on intuitions that are far-out-of-domain. It’s impossible to imagine such a decision, because there are TONS of details that are ignored in far-mode which would probably matter (such as: how do we know it’s high-IQ, and what additional evidence do we have of its alignment/humanity?).
Note also that this is VERY different from the question “do you oppose research and genetic meddling that is likely to lead to a super-intelligent baby?” Most of our intuitions give different answers for killing an existing being than for preventing it from existing in the first place.
I think it’s possible to imagine and reason about this case, and the conclusion—if we follow the AI Safety playbook—would be to kill the baby.
To me, that seems like a strong claim that many people in the community would agree with, including Eliezer. And it has implications for how we think about AI Safety.
The result, however, is somewhat expected and disappointing: downvotes, refusal to think about it, and banning.