Here’s a slightly more general way of phrasing it:
We find ourselves in an extremely leveraged position, making decisions which may influence the trajectory of the entire universe (more precisely, our lightcone contains a gigantic amount of resources). There are lots of reasons to care about what happens to universes like ours, either because you live in one or because you can acausally trade with one that you think probably exists. “Paperclip maximizers” are a very small subset of the parties with a reason to be interested in figuring out what happens to universes like ours.

I’d wager there are far more simulations of minds in highly leveraged positions than there are minds which actually do have a lot of leverage. Being one of the people working on AI/AI safety adds several orders of magnitude of coincidence on top of being a human in this time period at all, but even just being a very early mind in this universe is hugely leveraged. Since highly leveraged minds are much more likely to have been created in simulations than to actually have that leverage, you are probably in a simulation.

That said, for most utility functions it shouldn’t really matter. If you’re simulated, it’s because your decisions are correlated in some important way with decisions that actually do influence huge amounts of resources; otherwise no one would bother running the simulation. You might as well act the way you would want yourself to act conditional on actually having that influence. If your utility function is heavily discounted and you mostly care about your own short-term experiences, then you can just enjoy yourself, simulated or not (although this may reduce your measure a bit, since no one will bother simulating you if you aren’t going to make influential decisions).
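To spell out the anthropic step, here is a rough sketch (the counts $N_{\text{sim}}$ and $N_{\text{real}}$ are placeholders I’m introducing, and it assumes you self-sample uniformly among subjectively indistinguishable “leveraged” minds):

$$P(\text{simulated} \mid \text{leveraged experience}) = \frac{N_{\text{sim}}}{N_{\text{sim}} + N_{\text{real}}}$$

If simulated leveraged minds outnumber the genuinely leveraged ones by even a few orders of magnitude, this probability is already very close to 1.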
This is useful re: the leverage, but it skips the why. “Lots of reasons” isn’t intuitive to me; can you give some more? Simulating people is a lot of trouble, and quite unethical if the suffering is real, so there needs to be a pretty strong and possibly amoral reason. I guess your answer is acausal trade? I’ve never found that argument convincing, but maybe I’m missing something.
For an unaligned AI, the motivation would be either simulating alternative histories (the focus of this post) or creating material for blackmail.
For an aligned AI:
a) It may follow a moral theory different from our version of utilitarianism, one in which existence is generally considered good despite moments of suffering.
b) It might aim to resurrect the dead by simulating the entirety of human history exactly, ensuring that any brief human suffering is compensated by future eternal pleasure.
c) It could attempt to cure past suffering by creating numerous simulations in which any intense suffering ends quickly, so that, by indexical uncertainty, anyone undergoing intense suffering would most likely find themselves in one of these simulations.