What is the relevant difference between the two situations?
That in one case there actually are two agents, and in the other there aren’t.
I’m not sure how much difference this really makes to whether it’s helpful to call the Newcomb scenario an anti-coordination game. That’s partly because I’m not sure whether calling it that does anything much to help or hinder decision-making in any case :-).
There are certainly other situations in which a parallel difference seems really important.
Suppose you want to figure out whether I will enjoy having my legs eaten off by piranhas. (Spoiler: No.) You can do this in various ways. One way is to build a perfectly faithful model of my brain, body and environment, simulate the process really accurately, and observe the screams and writhings and so forth. Another is to think “hmm, that would involve having the flesh ripped from his bones, and that sort of thing is usually excruciatingly painful, and most people mostly don’t like excruciating pain”. I would feel very differently about these two decision processes that you might employ.
Ah yes, if copies suffer during the decision process, that is a relevant distinction. I will avoid dunking your copies into piranhas from this point on! ^_^
My main point, though, is that the decisions of sensible decision theories will be similar on the two problems—we expect defectors to two-box.
This is interesting because, by Rice’s theorem, it’s impossible to have a general procedure for semantic inspection: every non-trivial semantic property of programs is undecidable. (Structural inspection, by contrast, e.g. counting how many states a machine has, is trivial.)
This implies that “pain” is a structural property of the human brain rather than a semantic one. I wonder: is there a property of my mind that is inaccessible to inspection by a super-agent except by emulation? Or are all my thoughts accessible because each is reflected in structural changes to my brain’s chemical architecture?
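The structure-versus-semantics distinction above can be made concrete with a small Python sketch (the function names here are illustrative, not from any source). Two functions can compute the same thing while looking entirely different under structural inspection; counting their bytecode instructions is trivially decidable, whereas Rice’s theorem rules out any general procedure for deciding that they agree semantically, so we can only spot-check:

```python
import dis

# Two functions with the same semantics on non-negative integers
# (both compute n squared) but very different structure.
def square_a(n):
    return n * n

def square_b(n):
    total = 0
    for _ in range(n):
        total += n
    return total

def instruction_count(fn):
    """Structural inspection: trivially decidable by walking the bytecode."""
    return sum(1 for _ in dis.get_instructions(fn))

# Structural inspection easily tells them apart...
print(instruction_count(square_a), instruction_count(square_b))

# ...but their semantic agreement is not decidable in general by Rice's
# theorem; the best a general procedure can do here is sample inputs.
print(all(square_a(n) == square_b(n) for n in range(50)))
```

The point of the sketch is only that the decidable question (“how is the program built?”) and the undecidable one (“what does the program mean?”) come apart, which is the gap the comment is gesturing at.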