I think it’s possible to imagine and reason about this case, and the conclusion—if we follow the AI Safety playbook—would be to kill the baby.
To me, that seems like a strong claim that many people in the community, including Eliezer, would agree with. And it has implications for how we think about AI Safety.
The result, however, was predictable and disappointing: downvotes, a refusal to engage with the argument, and a ban.