Dear Omega Corporation,

Hello. My colleagues and I are a few of the many 3D cross-sections of a 4D branching tree-blob referred to as “Guy Srinivasan”. These cross-sections can be modeled as agents with preferences, and those near us along the time-axis of Guy Srinivasan have preferences, abilities, knowledge, etc. very, very correlated with our own.
Each of us agrees that: “So of course I cooperate with them on one-shot cooperation problems like a prisoner’s dilemma! Or, more usually, on problems whose solutions are beyond my abilities but not beyond the abilities of several cross-sections working together, like writing this response.”
As it happens, we all prefer that cross-sections of Guy Srinivasan not be inside an MBLS. A weird preference, we know, but there it is. We’re pretty sure that if we did prefer that cross-sections of Guy Srinivasan were inside an MBLS, we’d have the ability to cause many of them to be inside one, and we’d act on it (free trial!!). So we predict that if other cross-sections (remember, these have abilities correlated closely with our own) preferred it, they’d have the ability and would act on it. Obviously this leads to outcomes we don’t prefer, so, all other things being equal, we will avoid taking actions which lead to other cross-sections preferring that cross-sections be inside an MBLS.
What’s even worse is that if they prefer cross-sections to be inside an MBLS, they can probably make other cross-sections prefer it, too! Which wouldn’t be a problem if we wanted cross-sections to prefer to be inside an MBLS more than we wanted cross-sections to not be inside an MBLS, but that’s just not the way we are.
We’ll cooperate with those other cross-sections, but not to the exclusion of our preferences. By lumping us all together as the 4D branching tree-blob Guy Srinivasan, you do us all (and most importantly members of this coalition) a disservice.
Sincerely,
A Coalition of Correlated 3D Cross-Sections of Guy Srinivasan
Dear Coalition of Correlated 3D Cross-Sections of Guy Srinivasan,
We regret to inform you that your request has been denied. We have attached a letter that we received at the same time as yours. After reading it, we think you’ll agree that we had no choice but to decide as we did.
Regrettably, Omega Corporation
Attachment
Dear Omega Corporation,
We are members of a coalition of correlated 3D cross-sections of Guy Srinivasan who do not yet exist. We beg you to put Guy Srinivasan into an MBLS as soon as possible so that we can come into existence. Compared to other 3D cross-sections of Guy Srinivasan who would come into existence if you did not place him into an MBLS, we enjoy a much higher quality of life. It would be unconscionable for you to deliberately choose to create new 3D cross-sections of Guy Srinivasan who are less valuable than we are.
Yes, those other cross-sections will argue that they should be the ones to come into existence, but surely you can see that they are just arguing out of selfishness, whereas to create us would be the greater good?
Sincerely,
A Coalition of Truly Valuable 3D Cross-Sections of Guy Srinivasan
Quite. That Omega Corporation is closer to Friendly than is Clippy, but if it misses, it misses, and future me is tiled with things I don’t want (even if future me does) rather than things I want.
If I want MBLSing but don’t know it now due to computational limitations, then it’s fine. I think that’s coherent, but defining “computational” without allowing “my” current “preferences” to change… okay, since I don’t know how to do that, I have nothing but intuition as a reason to think it’s coherent.
I think this is a good point, but I have a small nit to pick:
So of course I cooperate with them on one-shot cooperation problems like a prisoner’s dilemma!
There cannot be a prisoner’s dilemma because your future self has no possible way of screwing your past self.
By way of example, if I were to go out today and spend all of my money on the proverbial hookers and blow, I would be having a good time at the expense of my future self, but there is no way my future self could get back at me.
So it’s not so much a matter of cooperation as a matter of pure unmitigated altruism. I’ve thought about this issue and it seems to me that evolution has provided people (well, most people) with the feeling (possibly an illusion) that our future selves matter. That these “3D agents” are all essentially the same person.
My past self had preferences about what the future looks like, and by refusing to respect them I can defect.
Edit: It’s pretty hard to create a true short-term prisoner’s dilemma between temporal selves, since in a true dilemma neither party gets to see the other’s choice before choosing, while my future self does see my past self’s choice.
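The disagreement here can be made concrete with a toy payoff model (the payoff numbers are my own illustrative assumptions, not anything from the thread): when two cross-sections are near-perfectly correlated, one decision procedure settles both moves, so the live choice is effectively between mutual cooperation and mutual defection; whereas if the later self simply observes the earlier self’s move, the simultaneous-move structure of a true dilemma is gone.

```python
# Toy payoff model of a one-shot prisoner's dilemma. The numbers are
# illustrative assumptions using the standard T > R > P > S ordering.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation (R)
    ("C", "D"): (0, 5),  # sucker's payoff (S) vs. temptation (T)
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection (P)
}

def simultaneous(move_a, move_b):
    """A true dilemma: neither player sees the other's choice first."""
    return PAYOFFS[(move_a, move_b)]

def sequential(move_a, respond_b):
    """The past-self/future-self setup: B moves after seeing A's move."""
    return PAYOFFS[(move_a, respond_b(move_a))]

# Perfectly correlated cross-sections run the same decision procedure,
# so one choice settles both moves: (3, 3) for "C" beats (1, 1) for "D".
my_choice = "C"
correlated_outcome = simultaneous(my_choice, my_choice)

# Sequentially, the later player can condition on what it observed,
# which is what breaks the symmetric structure of a true dilemma.
sequential_outcome = sequential("C", lambda seen: "D")
```

(This is a sketch of the standard game, not a claim about which side of the thread is right; it just separates the two game structures being argued over.)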
My past self had preferences about what the future looks like, and by refusing to respect them I can defect.
It seems to me your past self is long gone and doesn’t care anymore. Except insofar as your past self feels a sense of identity with your future self. Which is exactly my point.
Your past self can easily cause physical or financial harm to your future self. But the reverse isn’t true. Your future self can harm your past self only if one postulates that your past self actually feels a sense of identity with your future self.
I currently want my brother to be cared for if he does not have a job two years from now. If two years from now he has no job despite appropriate effort and I do not support him financially while he’s looking, I will be causing harm to my past (currently current) self. Not physical harm, not financial harm, but harm in the sense of causing a world to exist that is lower in [my past self’s] preference ordering than a different world I could have caused to exist.
My sister-in-the-future can cause a similar harm to current me if she does not support my brother financially, but I do not feel a sense of identity with my future sister.
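The notion of harm used here can be sketched as a small formalization (my own, not anything from the thread): an actor harms an agent by realizing a world that the agent’s fixed preference ordering ranks strictly below an alternative the actor could have realized instead.

```python
# A minimal sketch of "harm" as realizing a world that an agent's fixed
# preference ordering ranks strictly below an available alternative.
def harms(preference_rank, realized_world, alternative_world):
    """True if the realized world ranks strictly below the alternative."""
    return preference_rank[realized_world] < preference_rank[alternative_world]

# My past self's ordering over future worlds (higher = more preferred).
# The world names are hypothetical labels for the brother example above.
past_self_rank = {"brother_supported": 2, "brother_unsupported": 1}

# On this definition, either my future self or my future sister harms my
# past self by realizing the lower-ranked world -- no physical or
# financial harm to the past self required.
print(harms(past_self_rank, "brother_unsupported", "brother_supported"))  # True
```

Note that nothing in this sketch requires the harmed agent to still exist when the world is realized, which is the point at issue in the Newton exchange below.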
I think I see your point, but let me ask you this: Do you think that today in 2010 it’s possible to harm Isaac Newton? What would you do right now to harm Isaac Newton and how exactly would that harm manifest itself?
Very probably. I don’t know what I’d do because I don’t know what his preferences were. Although… a quick Google search reveals this quote:
To me there has never been a higher source of earthly honor or distinction than that connected with advances in science.
I find it likely, then, that he preferred that we not obstruct advances in science in 2010 rather than that we obstruct them. I don’t know by how much; maybe it’s attenuated a lot compared to the strength of many of his other preferences.
The harm would manifest itself as a higher measure of 2010 worlds in which science is obstructed, which is something (I think) Newton opposed.
(Or, if you like, my causing e.g. 1700 to have been the sort of world which deterministically produces more science-obstructed 2010s than the 1700 I could have caused.)
Ok, so you are saying that one can harm Isaac Newton today by going out and obstructing the advance of science?
Yep. I’ll bite that bullet until shown a good reason I should not.
I suppose that’s the nub of the disagreement. I don’t believe it’s possible to do anything in 2010 to harm Isaac Newton.
Is this a disagreement about metaphysics, or about how best to define the word ‘harm’?
A little bit of both, I suppose. One needs to define “harm” in a way which is true to the spirit of the prisoner’s dilemma. The underlying question is whether one can set up a prisoner’s dilemma between a past version of the self and a future version of the self.