I don’t know the literature around Newcomb’s problem very well, so excuse me if this is stupid. BUT: why not just reason as follows:
If the superintelligence can predict your action, one of the following two things must be the case:
a) whether you take one box or both is already absolutely determined (i.e. we live in a fatalistic universe, at least with respect to your box-picking)
b) your box-picking is not determined, but it has backwards causal force, i.e. something moves backwards through time.
If a), then practical reason is meaningless anyway: you’ll do what you’ll do, so stop stressing about it.
If b), then you should be a one-boxer for perfectly ordinary rational reasons, namely that one-boxing brings it about that you get the million bucks with probability 1 (see the quick calculation below).
So there’s no problem!
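For what it's worth, here's the arithmetic behind (b) spelled out. The payoff amounts are the usual ones assumed in discussions of the problem ($1,000 in the transparent box, $1,000,000 in the opaque one), and the accuracy parameter p is my own addition, just to show the point doesn't even require p = 1:

```python
# Minimal sketch of the standard Newcomb payoffs (assumed: $1,000 in the
# transparent box, $1,000,000 in the opaque one), with a predictor that is
# right with probability p. Under case (b) as stated above, p = 1.

def expected_value(one_box: bool, p: float) -> float:
    """Expected payoff given the predictor's accuracy p."""
    if one_box:
        # The predictor foresaw one-boxing with probability p, so the opaque
        # box contains the million with probability p.
        return p * 1_000_000
    else:
        # The predictor foresaw two-boxing with probability p, so the opaque
        # box is empty with probability p; you always keep the $1,000.
        return 1_000 + (1 - p) * 1_000_000

for p in (1.0, 0.99, 0.9, 0.51):
    print(f"p={p}: one-box={expected_value(True, p):>12,.0f}  "
          f"two-box={expected_value(False, p):>12,.0f}")
```

At p = 1 (case (b) proper) one-boxing gets you the million and two-boxing gets you $1,000; even at p barely above 0.5 the expected value still favors one-boxing.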