I’ve had exactly the same thought before but never got around to writing it up. Thanks for doing it so I don’t have to :-)
> There are only so many possible human shaped computations that are valuable to me
I would surmise that value-space is not so much “finite in size” as fading off into the distance in such a way that it has a finite sum over the infinite space. This is because other minds are valuable to me insofar as they can do superrationality/FDT/etc. with me. In fact, this is the same fading-out function as in the “perturb the simulation” scenario; i.e.:
$V_A(AB)$ := The value that A places on a world where A and B both exist
$V_A(A\bar{B})$ := The value that A places on a world where A exists but B doesn’t
$V_A(\bar{A}B)$ := The value that A places on a world where A doesn’t exist but B does

Claim: $V_A(AB) - V_A(A\bar{B}) = V_A(A\bar{B}) - V_A(\bar{A}B)$
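Rearranging, the claim is equivalent to saying that the B-less world sits exactly midway between the two worlds that contain B:

$$V_A(A\bar{B}) = \tfrac{1}{2}\left(V_A(AB) + V_A(\bar{A}B)\right)$$

And as a purely illustrative toy model of the “finite sum over the infinite space” idea: if a mind at distance $d$ from me in mind-space gets weight $w(d) = r^d$ for some $0 < r < 1$, then $\sum_{d=0}^{\infty} r^d = \frac{1}{1-r}$ is finite even though the space is infinite.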
However, the main problem with this perspective is what to do with quantum many-worlds. Does this imply that “quantum suicide” is rational, e.g. that you should buy a lottery ticket and set up a machine that kills you if you don’t win? This is a bullet I don’t want to bite (so to speak...)
One of the things I wanted to get into in much more detail in this post is anthropics. I think something like average utilitarianism is something like correct, and I don’t feel comfortable with being uncaring about death, even though I think quantum immortality is basically true, in the sense that all the worlds you experience are ones in which you still exist, and, as far as I can tell, in these physics there are always worlds in which you could keep existing, albeit sometimes only through weird edge-case quantum fluctuations.
I don’t have strong takes on your formula, but I’m not confident it’s true for me.
Either I’m misunderstanding what you wrote, or you didn’t mean to write what you did.
Suppose A is a human and B is a shrimp. The value of adding the shrimp to a world where A exists, $V_A(AB) - V_A(A\bar{B})$, is small. The value of replacing the shrimp with A, $V_A(A\bar{B}) - V_A(\bar{A}B)$, is large. So the two sides of the claimed equality come apart.
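To put illustrative (made-up) numbers on that: say $V_A(AB) = 101$, $V_A(A\bar{B}) = 100$, and $V_A(\bar{A}B) = 1$. Then the left-hand side of the claim is $101 - 100 = 1$, while the right-hand side is $100 - 1 = 99$, so the claimed equality fails by a wide margin.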