This seems similar to one-boxing in response to an empty box in a variant of Transparent Newcomb, where you are demonstrating how you behave in a situation that won’t get any reality fluid. As you are demonstrating this, you already know that the current situation isn’t actual; the effect of your behavior is purely indirect.
So similarly with an oracle: you figure out how it behaves in the counterfactual where you ask your question, but in actuality you never ask it this question, and so within the counterfactual it knows that the situation where it gets asked the question isn’t actual. Whatever you need to do to figure out its counterfactual behavior might of course affect its algorithmic prior, but it doesn’t need to take the form of actually asking it the question.