Right, which would be silly, so I wouldn’t do that.
Oh, I see what’s confusing me. The “Interrupted” version of the classic Newcomb’s Problem is this: replace Omega with a DumbBot that doesn’t even try to predict your actions; it just hands you outcomes at random. So you can’t affect your counterfactual selves, and shouldn’t even bother—just two-box.
This problem—which I should rename to the Interrupted Ultimate Newcomb’s Problem—does require Omega. It would look like this: from Omega’s end, Omega simulates a jillion people, as you put it, and finds all the people who produce primes or nonprimes (depending on the primality of 1033), and then poses this question only to those people. From your point of view, though, you know neither the primality of 1033 nor your own eventual answer, so it seems like you can ambiently control 1033 to be composite—and the versions of you that didn’t make whatever choice you make are never part of the experiment, so who cares?
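(Incidentally, the one concrete mathematical fact in play—whether 1033 is prime—is easy to check from outside the thought experiment, even though the agent inside it doesn’t yet know it. A quick trial-division sketch:)

```python
def is_prime(n: int) -> bool:
    """Check primality by trial division up to sqrt(n)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

print(is_prime(1033))  # True: 1033 has no divisor up to floor(sqrt(1033)) = 32
```

So any version of you hoping to “ambiently control” 1033 into being composite is, as a matter of arithmetic, out of luck.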