Yeah, that first question is the one I’m stuck on. From my point of view, it just looks like the utility function that assigns value to hypothetical people has to have an error in it somewhere… but then again, that might be a byproduct of some problem in me rather than in them. Possible, though it sure doesn’t seem likely from in here. Still, I do wonder what psychological fact I’d have to acquire that’d make sense of that perspective.
Anyway, I find that my tendency to assume that moral value is conserved through multiple iterations of a hypothetical reversible operation is a strong one. That is, if killing someone removes utility, then bringing them back to life should add roughly the same utility… otherwise I could in principle kill and resurrect the same person a million times in an instant, ending up with the world in the same state it started in but with massive utility gains or losses coming from no net state change, which seems… bizarre.
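A toy sketch of the arithmetic, in case it helps (the utility numbers are pure assumptions for illustration; the point is just the bookkeeping when the two operations aren’t exact utility-inverses):

```python
# A toy model of the conservation intuition. Killing and resurrecting are
# assumed to be exact state-inverses, but their utilities are assumed to
# differ slightly. All numbers are made up for illustration.
U_KILL = -100.0      # utility removed by killing someone (assumed)
U_RESURRECT = 99.0   # utility added by bringing them back (assumed unequal)

cycles = 1_000_000   # kill and resurrect the same person a million times

# The world ends in exactly the state it started in...
net_state_change = 0
# ...but the running utility total has drifted by a million small gaps.
net_utility_change = cycles * (U_KILL + U_RESURRECT)

print(net_state_change)    # 0
print(net_utility_change)  # -1000000.0: a huge swing from no net state change
```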
Well, trivially, and possibly in violation of the spirit of the presented scenario, actually carrying out those changes (such as switching a simulation of a person on and off at one million Hz) would itself consume energy, and pouring perfectly usable energy into a status-quo outcome is likely to be undesirable.
The other question is “Is it justified to assume that making hypothetical people actual increases the net value of the world?” I don’t really know how to even approach this question, except maybe by asking what the expected results of making that assumption are, which mostly doesn’t seem to be what people who ask this question mean.
The Repugnant Conclusion (RC) can be justified (at least up to some point) by appealing to probable real-world consequences, such as the added capacity for producing Fun (for an individual) that arises from a civilization of sufficient size: specialization, gains from trade, added Social Fun opportunities from having lots of people around, etc. Things such as mounting interstellar rescue operations also seem easier if the same people who put the ship together don’t also need to design it. But all of this seems beside the point: these are assumptions added on top of the original thought experiment, which, for whatever reason, treats the sum total of existing happiness as an intrinsic good, as if the universe cared how many utilon-bearing humans it contains.
The only avenue of approach to the original thought experiment I could think of (one that doesn’t smuggle in added assumptions) is to place the first round of the burden of proof on the side claiming A) that hypothetical people don’t have value, and/or B) that the universe doesn’t care. Even if that side accepts the burden, it seems like these things should be possible to prove, however hard those proofs may be to formalize.
But suffice it to say, I think this burden lies on the other party first, and that such a proof, should it ever be formulated, would be unlikely to make much sense, especially if actually applied. If we can be convinced to privilege hypothetical entities at the expense of currently existing ones, reality-played-straight ends up looking like a crack fic in which resources go to whoever can devise the most powerful mathematical notations for expressing the very large numbers of hypothetical people they’ve got stashed in their astral Pokéballs.
If we can be convinced to privilege hypothetical entities at the expense of currently existing ones, reality-played-straight ends up looking like a crack fic
Sure. OTOH, if we give hypothetical entities no weight at all, it seems to follow naturally that any project that won’t see benefits within a century or so is not worth doing, since no actual people will benefit from it, merely hypothetical people who haven’t yet been born.
Personally, I conclude that when planning for the future, I should plan based on the expected value of that future, which includes the value of entities I expect to exist in that future. Whether those entities exist right now or not—that is, whether they are actual or hypothetical—doesn’t really matter.
I’m realizing I made some overly sweeping generalizations about “hypothetical people” there. Whoops.
Personally, I conclude that when planning for the future, I should plan based on the expected value of that future, which includes the value of entities I expect to exist in that future.
This, I don’t disagree with. Optimizing for the people we expect to exist seems fine to me; it’s the normative leap from that to “we should produce more people” that throws me off.
The distinction between those two things gets a little tricky for me to hold on to, because one of the things that significantly contributes to my expectation that someone will exist is precisely how much I value their existing… or, more precisely, how much I expect my future self to value them if and when the opportunity to create them presents itself. E.g., if I really don’t want a child, my expectation of a child of mine existing in the future should be lower than if I really want one.
Conversely, if I expect an entity X to exist a year from now if things remain as they are now, and I judge that X would, if actual, make the world worse, it seems to follow that I should take steps to prevent X from becoming actual.
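A minimal sketch of what I mean by planning on expected value, treating each entity’s contribution as its value weighted by its probability of existing (the names, probabilities, and values below are all made-up assumptions, not anything from the discussion):

```python
# Expected value of a future, summing over entities (actual or hypothetical)
# weighted by their probability of existing. All figures are illustrative.
entities = {
    "existing_person":      (1.0, 10.0),   # (p_exists, value_if_actual)
    "likely_future_person": (0.9, 10.0),
    "X":                    (0.9, -5.0),   # would make the world worse
}

def expected_value(ents):
    return sum(p * v for p, v in ents.values())

baseline = expected_value(entities)  # 1.0*10 + 0.9*10 + 0.9*(-5) = 14.5

# Taking steps to prevent X lowers its probability of becoming actual;
# since X's value is negative, the expected value of the future rises.
entities["X"] = (0.1, -5.0)
after_prevention = expected_value(entities)  # 10 + 9 - 0.5 = 18.5

assert after_prevention > baseline
```

Note that nothing in this bookkeeping cares whether an entity exists right now; actual and hypothetical entities enter the sum the same way, differing only in their probabilities.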
It seems moderately clear to me that, while I value more people rather than fewer all else being equal, that’s not a particularly important value; there are lots of things that I’ll trade it for.