Case 1 is sufficient to predict my actions IFF I would never wonder whether I was a copy.
Given that I would in fact wonder whether I was a copy, and that that thought-process is significant to the scenario, Case 1 seems likely to be woefully inadequate for simulating me.
Case 4 is therefore much more plausible for a genius psychologist (with 99% accuracy) from my PoV.
The psychologist tells you that she simply isn't capable of case 4: there are all sorts of at least somewhat verifiable facts that you would expect yourself to know and that she doesn't (e.g. details about your job that have to make sense and be consistent with a whole web of other details, which she couldn't plausibly have spied out or invented a convincing equivalent of herself). Given that you just wondered, you can't be a simulation. What do you do?
Case 4 just requires that the simulation not recognise that it is a simulation when it considers whether or not it's a simulation, i.e. that whatever question it asks itself, it finds an answer. It can't actually check for consistency; remember, it's a simulation. If it would find an inconsistency, the simulator can "change detail [removing inconsistency], run" or "insert thought 'yep, that's all consistent'; run".
If she's capable of case 1, she's capable of case 4, even if she has to insert the memory when it is requested rather than prior to the request.
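The intercept-and-patch loop described above can be sketched as a toy program. This is purely illustrative; the state representation, the "tell-tale inconsistency" detail, and every function name here are invented for the sketch, not part of the original scenario:

```python
def run_simulation(state, steps=5):
    """Advance a toy simulated mind, patching inconsistencies on demand.

    `state` is a dict with a set of 'details' and a log of 'thoughts'.
    Whenever the sim would stumble on the tell-tale detail, the simulator
    intervenes before the step runs, using one of the two moves from the
    comment above: change the detail, or insert a reassuring thought.
    """
    for _ in range(steps):
        if "tell-tale inconsistency" in state["details"]:
            # "change detail [removing inconsistency], run"
            state["details"].discard("tell-tale inconsistency")
            # alternative move: "insert thought 'yep, that's all consistent'; run"
            state["thoughts"].append("yep, that's all consistent")
        # the sim's own introspection never finds anything amiss
        state["thoughts"].append("am I a simulation? no reason to think so")
    return state

sim = run_simulation({"details": {"job history", "tell-tale inconsistency"},
                      "thoughts": []})
# The sim never retains the detail that would have exposed it.
```

The point of the sketch: the consistency check runs in the simulator, not in the sim, so from the inside every question the sim asks itself gets a satisfying answer.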
I know she’s lying.