For an unaligned AI, the purpose is either simulating alternative histories (the focus of this post) or creating material for blackmail.
For an aligned AI:
a) It may follow a moral theory different from our version of utilitarianism, one in which existence is generally considered good despite moments of suffering.
b) It might aim to resurrect the dead by simulating the entirety of human history exactly, ensuring that any brief suffering during life is compensated by eternal pleasure afterward.
c) It could attempt to cure past suffering by creating numerous simulations in which any intense suffering ends quickly, so that, by indexical uncertainty, any suffering person would most likely find themselves in such a simulation (a rough calculation is sketched below).
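As a minimal sketch of the indexical argument in (c), assume the AI runs $N$ exact copies of a painful observer-moment in which the suffering is quickly relieved, alongside the single original history. Since all $N+1$ observer-moments are subjectively indistinguishable, a person in that moment should assign credence

$$
P(\text{being in a relieved copy}) = \frac{N}{N+1} \longrightarrow 1 \quad \text{as } N \to \infty,
$$

so for a sufficiently large $N$, almost all of that person's expected experience lies in a copy where the suffering ends quickly. (The number $N$ and the uniform self-sampling assumption are illustrative, not claims about how such an AI would actually allocate simulations.)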