Refusing to decide is itself a decision, and usually a bad one.
This is a good heuristic for a world without Friendly superintelligences, but how far can we extrapolate it?