In any case, 99% accuracy shouldn't require imagining the actions of a copy of you that is reflectively indistinguishable from you.
You expect the copy to be able to tell it’s a copy? Why? Why would the psychologist simulate it discovering that it is the copy? When you simulate someone’s reaction to possible courses of action, do you simulate them as being aware of being a simulation?
None of my internal simulations have ever been aware of being simulations.
There are four possibilities:
The copy never wonders whether it’s a copy.
The copy wonders about being a copy and concludes that it is.
The copy concludes that it cannot be a copy.
The copy is, from its point of view, reflectively indistinguishable from you.
Only in case 4 will you seriously have to wonder whether you are a copy. In case 1 you will know that you are not as soon as you consider the possibility, and case 2 is irrelevant unless you also assume that the real you will conclude that it's a copy, which is logically inconsistent.
Nevertheless, case 1 should be sufficient for predicting, to reasonable accuracy, the actions you take once you conclude that you are not a copy.
Case 1 is sufficient to predict my actions IFF I would never wonder about whether I was a copy.
Given that I would in fact wonder whether I was a copy, and that that thought-process is significant to the scenario, Case 1 seems likely to be woefully inadequate for simulating me.
Case 4 is therefore much more plausible for a genius psychologist (with 99% accuracy) from my PoV.
The psychologist tells you that she simply isn't capable of case 4: there are all sorts of at least somewhat verifiable facts that you would expect yourself to know and that she doesn't (e.g. details about your job that have to make sense and be consistent with a whole web of other details, which she couldn't plausibly have spied out or invented a convincing equivalent of herself). Given that you just wondered, you can't be a simulation. What do you do?
Case 4 just requires that the simulation not recognise that it is a simulation when it considers whether or not it's a simulation, i.e. that whatever question it asks itself, it finds an answer.
It can't actually check for consistency; remember, it's a simulation. If it would find an inconsistency: "change detail [removing inconsistency], run" or "insert thought 'yep, that's all consistent'; run".
If she’s capable of case 1, she’s capable of case 4, even if she has to insert the memory on it being requested, rather than prior to request.
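The intervention loop described above can be sketched in code. This is a purely illustrative toy, not a model of anything real: all names (`SimulatedCopy`, `check`, `run`) are hypothetical, and the "memories" are just a dictionary. The point it demonstrates is that the copy never sees an inconsistency, because the simulator repairs each one the moment it surfaces.

```python
# Toy sketch of "change detail [removing inconsistency], run" vs.
# "insert thought 'yep, that's all consistent'; run".
# Everything here is illustrative; no real mind is being modelled.

class SimulatedCopy:
    def __init__(self, details):
        self.details = dict(details)   # the copy's (incomplete) memories
        self.thoughts = []             # what the copy concludes

    def check(self, key):
        """The copy probes one of its own memories for consistency.
        Returns the key if an inconsistency surfaced, else None."""
        if self.details.get(key) is None:
            return key
        self.thoughts.append(f"{key}: seems consistent")
        return None

def run(sim, probes):
    """The simulator's outer loop: whenever a check would fail,
    patch the detail (or insert the thought directly) and rerun."""
    for key in probes:
        issue = sim.check(key)
        if issue is not None:
            # Option 1: silently fill in a plausible detail, then rerun.
            sim.details[issue] = "plausible filler"
            sim.check(issue)
            # Option 2 (alternative): skip the patch and just do
            # sim.thoughts.append("yep, that's all consistent")
    return sim.thoughts

copy = SimulatedCopy({"job": "engineer", "boss_name": None})
thoughts = run(copy, ["job", "boss_name"])
# From the copy's own point of view, every check came back consistent.
```

Either way, the copy's introspective record contains only "consistent" verdicts, which is all case 4 requires.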
I know she’s lying.