This is useful RE the leverage, except it skips the why. “Lots of reasons” isn’t intuitive for me; can you give some more? Simulating people is a lot of trouble and quite unethical if the suffering is real. So there needs to be a pretty strong and possibly amoral reason. I guess your answer is acausal trade? I’ve never found that argument convincing but maybe I’m missing something.
For an unaligned AI, the motivation is either simulating alternative histories (which is the focus of this post) or creating material for blackmail.
For an aligned AI:
a) It may follow a moral theory different from our version of utilitarianism, one in which existence is generally considered good despite moments of suffering.
b) It might aim to resurrect the dead by simulating the entirety of human history exactly, ensuring that any brief human suffering is compensated by future eternal pleasure.
c) It could attempt to cure past suffering by creating numerous simulations in which any intense suffering ends quickly; since these simulations would vastly outnumber the original history, by indexical uncertainty any suffering person would most likely find themselves in one of them.