If the note predicts the decision you make after seeing the note, then you are free to diagonalize the note (do the opposite of what it predicts). That would contradict the premise that the predictor is good at making predictions (if the prediction must be this detailed and the note must remain available even when diagonalized), because the prediction is then going to be mostly wrong by construction, whatever it is. Transparent Newcomb, for example, is designed to avoid this issue while gesturing at a similar phenomenon.
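To make the diagonalization concrete, here is a minimal sketch (the binary decision and the `note_prediction` argument are hypothetical, purely for illustration): whatever prediction the note commits to, the agent's decision falsifies it.

```python
def diagonalizing_agent(note_prediction: int) -> int:
    # Read the note, then do the opposite of what it predicts.
    return 1 - note_prediction

# Whatever the note says, it ends up wrong by construction:
for prediction in (0, 1):
    decision = diagonalizing_agent(prediction)
    assert decision != prediction  # the note's prediction always fails
```

No prediction the note could contain survives contact with this agent, which is exactly what puts the premise of a highly accurate, always-available note under strain.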
This kind of frame-breaking thought experiment is not useful for illustrating the framings it breaks (in this case, FDT). It can be useful for illustrating or motivating some different (maybe novel) framing that does manage to make sense of the new thought experiment, but that's only productive when it actually happens, and it's easy to break framings (as opposed to finding a genuine within-framing error in an existing theory) without motivating any additional insight. So this is worth recognizing, to avoid too much unproductive confusion when the framings that usually make sense get broken.
In my scenario there is a probability pT that the note is truthful (and the value of pT is assumed known to the agent making the decision). It is possible that pT = 1, but only if pN < 10^-24, so as to preserve the maximum 10^-24 probability of the predictor being incorrect.
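To spell out the arithmetic on one reading of the setup (taking pN as the probability that the note is shown at all, and "truthful" as honestly reporting the predictor's prediction; both readings are my assumptions, not stated above), suppose the agent diagonalizes whenever it sees the note. Then every case where a truthful note is shown is a case where the reported prediction gets falsified, so

$$
P(\text{predictor wrong}) \;\ge\; p_N \cdot p_T .
$$

With $p_T = 1$ this gives $P(\text{predictor wrong}) \ge p_N$, so keeping the predictor's error probability at most $10^{-24}$ forces $p_N < 10^{-24}$.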