Belatedly, is this a fair summary of your critique?
When someone thinks about another person (e.g. to predict whether they’ll submit to blackmail), the act of thinking about the other person creates a sort of ‘mental simulation’ that has detailed conscious experiences in its own right. So you never really know whether you’re a flesh-and-blood person or a ‘mental simulation’ based on a flesh-and-blood person.
Now, suppose you seem to find yourself in a situation where you’ve been blackmailed. In this context, it’s reasonable to wonder whether you’re actually a flesh-and-blood person who’s been blackmailed, or merely a ‘mental simulation’ that exists in the mind of a potential blackmailer. If you’re a mental simulation, and you care about the flesh-and-blood person you’re based on, then you have reason to resist blackmail. The reason is that the decision you take as a simulation will determine the blackmailer’s prediction about how the flesh-and-blood person will behave. If you resist, then the blackmailer will predict that the flesh-and-blood person will also resist, and will therefore decide not to blackmail them.
If this is roughly in the right ballpark, then I would have a couple responses:
First, I disagree that the act of thinking about a person will tend to create a mental simulation that has detailed conscious experiences in its own right. This seems like a surprising position that goes against the grain of conventional neuroscience and mainstream views in the philosophy of consciousness. As a simple illustrative case, suppose that Omega makes a prediction about Person A purely on the basis of their body language. Surely thinking “This guy looks really nervous, he’s probably worried he’ll be seen as the sort of guy who’ll submit to blackmail, because he is” doesn’t require bringing a whole new consciousness into existence.
Second, suppose that when a blackmailer predicts someone’s behavior, they really do create a conscious mental simulation. Suppose you don’t know whether you’re this kind of simulation or the associated flesh-and-blood person, but you care about what happens to the flesh-and-blood person in either case. Then, depending on certain parameter values, CDT does actually say you should resist blackmail, because there is some chance that your resisting will cause the flesh-and-blood person to avoid being blackmailed. So CDT gives the response you want in this case.
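To make the “certain parameter values” point concrete, here’s a minimal sketch of the calculation. The credence p_sim and the payoff numbers are my own illustrative stand-ins, not anything fixed by the original discussion:

```python
# A minimal sketch (illustrative assumptions throughout) of the CDT
# calculation under uncertainty about whether you are the blackmailer's
# conscious simulation or the flesh-and-blood person.

p_sim = 0.95         # credence that you are the simulation
cost_submit = -10    # what the flesh-and-blood person loses by submitting
cost_resist = -100   # what they lose if blackmailed and they resist anyway

# If you are the simulation, your choice causally fixes the blackmailer's
# prediction: resisting means the flesh-and-blood person is never
# blackmailed (payoff 0); submitting means they are blackmailed and submit.
ev_resist = p_sim * 0 + (1 - p_sim) * cost_resist
ev_submit = p_sim * cost_submit + (1 - p_sim) * cost_submit

print(f"EV(resist) = {ev_resist:.1f}, EV(submit) = {ev_submit:.1f}")
# With p_sim = 0.95: EV(resist) = -5.0 > EV(submit) = -10.0, so CDT says
# resist; drop p_sim to 0.5 and the comparison flips back to submitting.
```

Whether CDT recommends resisting thus turns on how likely you think you are to be the simulation and how costly the two outcomes are, which is all the “parameter values” caveat amounts to.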
Overall, I don’t think this line of argument really damages CDT. It seems to be based on a claim about consciousness that I think is probably wrong. But even if the claim is right, all it implies is that CDT recommends a different action than one would otherwise have thought.
(If my summary is roughly in the right ballpark, then I also think it’s totally reasonable for academic decision theorists who read the FDT paper to fail to realize that a non-mainstream neuroscience/philosophy-of-consciousness view is being assumed, and that it provides the main justification for FDT. The paper really doesn’t directly say anything about this. It therefore seems wrong to me to suggest that Schwarz only disagrees because he lacks the ability to see his own assumptions.)
[[EDIT: Oops, rereading your comment, seems like the summary is probably not fair. I didn’t process this bit:
Yes, yes, if Omega used some method other than a simulation to make his prediction, the hypothetical you wouldn’t have existed and wouldn’t have had a perspective—but hey, that doesn’t stop me from writing from their perspective, right? After all, real people write from the perspectives of unreal people all the time; that’s just called writing fiction.
But now, reading the rest of the comment in light of this point, I don’t think it reduces my qualms. The suggestion seems to be that, when you seem to find yourself in the box room, you should in some cases be uncertain about whether or not you exist at all. And in these cases you should one-box, because, if it turns out that you don’t exist, then your decision to one-box will (in some sense) cause a corresponding person who does exist to get more money. You also don’t personally get less money by one-boxing, because you don’t get any money either way, because you don’t exist.
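To spell out the arithmetic this suggestion seems to rely on, here’s a minimal sketch under the same kind of illustrative assumptions (the credence q is a stand-in; the box payoffs are just the standard Newcomb numbers):

```python
# A minimal sketch (illustrative assumptions throughout) of the causal
# expected-value comparison when you are unsure whether you exist at all,
# i.e. whether you are merely the hypothetical Omega reasons about.

q = 0.1              # credence that you are only the hypothetical
big = 1_000_000      # opaque-box prize if Omega predicts one-boxing
small = 1_000        # transparent-box contents

# If you are the hypothetical, one-boxing causes Omega to fill the opaque
# box, so the real person (whose payoff you care about) gains big - small
# relative to two-boxing. If you are real, the contents are already fixed
# and two-boxing causally gains you small.
causal_gain_of_one_boxing = q * (big - small) - (1 - q) * small

print(f"causal gain of one-boxing: {causal_gain_of_one_boxing:,.0f}")
# Positive whenever q exceeds small/big (0.001 here), so even a tiny
# credence that you don't exist makes one-boxing come out ahead.
```

On this reading, even a very small credence that you’re merely the hypothetical suffices to make one-boxing the causally better choice, which is presumably why the argument is supposed to have bite.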
Naively, this line of thought seems sketchy. You can have uncertainty about the substrate your mind is being run on or about the features of the external world—e.g. you can be unsure whether or not you’re a simulation—but there doesn’t seem to be room for uncertainty about whether or not you exist. “Cogito, ergo sum” and all that.
There is presumably some set of metaphysical/epistemological positions under which this line of reasoning makes sense, but, again, the paper really doesn’t make any of these positions explicit or argue for them directly. Mainly, I think it’s premature to explain the paper’s failure to persuade philosophers in terms of their rigidity or inability to question assumptions.]]