[Question] What happens if we reverse Newcomb’s Paradox by replacing the payoffs with two negative sums? Doesn’t it kinda maybe affirm Roko’s Basilisk?

So Newcomb’s Paradox becomes:

Box A is clear, and always contains a visible -$1,000.
Box B is opaque, and its content has already been set by the predictor:
If the predictor has predicted the player will take both boxes A and B, then box B contains nothing.
If the predictor has predicted that the player will take only box B, then box B contains -$1,000,000.
The player must take either only box B, or both boxes A and B.
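
To make the payoffs concrete, here’s a minimal Python sketch (the names are mine, and it assumes the predictor is perfectly accurate, so the prediction always matches the actual choice):

```python
# Minimal sketch of the reversed Newcomb payoffs, assuming a perfectly
# accurate predictor (so the prediction always matches the actual choice).
BOX_A = -1_000  # always visible in the clear box

def box_b_contents(predicted_choice: str) -> int:
    """Contents of opaque box B, fixed in advance by the predictor."""
    if predicted_choice == "both":
        return 0             # predictor expected two-boxing -> B is empty
    return -1_000_000        # predictor expected one-boxing -> B holds -$1,000,000

def payoff(choice: str) -> int:
    """Total the player walks away with, given a perfect predictor."""
    prediction = choice      # perfect predictor: prediction == actual choice
    return box_b_contents(prediction) + (BOX_A if choice == "both" else 0)

for choice in ("both", "only B"):
    print(f"take {choice}: {payoff(choice):,}")
# take both: -1,000
# take only B: -1,000,000
```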

In this scenario, if the player believes all of the information laid out, the only logical choice is to take both boxes A and B. If we take all of this and recast it as:

Choice A is clear, and entails always working to further the AI.
Consequence B isn’t clear, and what it entails has already been set by the predictor:
If the predictor has predicted the player will take both choice A and consequence B, then consequence B entails nothing (or possibly eternal bliss).
If the predictor has predicted that the player will take only consequence B, then the predictor has the choice to make consequence B entail “eternal damnation”.
The player must take consequence B, but may also take choice A.

Then Roko’s Basilisk holds, with two caveats: you must believe that the “eternal damnation” will actually happen, and you must care about it.
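
To see why those two caveats do the work, here’s a minimal Python sketch with made-up utility numbers (the values, the names, and the perfect-predictor assumption are my own placeholders, not part of the thought experiment):

```python
# Sketch of the Basilisk version with made-up utility numbers (all values
# and names here are placeholders, not part of the thought experiment).
COST_OF_FURTHERING_AI = -1_000   # choice A: a real, certain cost to you now
DAMNATION = -1_000_000           # consequence B if the predictor punishes you
NOTHING_OR_BLISS = 0             # consequence B if you took choice A

def value(further_ai: bool, believe: bool, care: bool) -> int:
    """Rough value of the decision, treating the punishment as certain for
    anyone who skips choice A (a simplification of "has the choice to")."""
    total = COST_OF_FURTHERING_AI if further_ai else 0
    if not further_ai and believe and care:
        total += DAMNATION       # the threat only bites if you believe it AND care
    return total + (NOTHING_OR_BLISS if further_ai else 0)

for believe in (True, False):
    for care in (True, False):
        better = value(True, believe, care) > value(False, believe, care)
        best = "further the AI" if better else "ignore the Basilisk"
        print(f"believe={believe!s:<5} care={care!s:<5} -> {best}")
```

With those placeholder numbers, furthering the AI only comes out ahead when you both believe the threat and care about it, which is exactly what the two caveats say.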

That’s fair. But, uh, the Basilisk knows that we don’t believe in the damnation, so it’s perfectly logical for the AI to eternally damn the simulation (as long as this doesn’t harm humanity, which we have no reason to believe it would) because it knows that we know that it’ll do that.

In essence, we’re acting as a predictor of the predictor: right now we’re predicting that the predictor will go through with its punishment, because it will have predicted us predicting that it won’t. (This is the weakest link in my idea, and I want you guys to rip it to shreds.)

OK, that may be the case, but I still don’t care about the AI torturing some simulation. The issue is, if the AI must simulate you for this, it must simulate the entire universe, and we might be in that simulation, meaning there is a risk that we will be directly punished by Roko’s Basilisk if it thinks we aren’t furthering the AI.

This also (AFAIK) happens with any AI that has a shred of self-preservation, or any AI that thinks it’s beneficial to the human race (and values human lives more than the lives of some simulation).

I know that Roko’s Basilisk is stupid and dumb and not productive or actually meaningful, but so am I. Can you please refute this for me so I have some peace of mind?