If we define an imperfect predictor as a perfect predictor plus noise, i.e. one that produces the correct prediction with probability p regardless of the cognition algorithm it's trying to predict, then Newcomb-like problems are very robust to imperfect prediction: for any p > 0.5 there is some payoff ratio great enough to preserve the paradox, and the required ratio goes down as the prediction improves. For example, if a correctly predicted 1-boxer gets 100 utilons and a correctly predicted 2-boxer gets 1 utilon, then the predictor only needs to be more than 50.5% accurate. So the limit in that direction favors 1-boxing.
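To spell out the arithmetic (assuming the usual setup: 100 utilons in the predictor-controlled box, 1 utilon in the fixed box, and the predictor right with probability p whichever way you choose):

EV(1-box) = p·100 = 100p
EV(2-box) = p·1 + (1−p)·(100+1) = 101 − 100p

1-boxing wins exactly when 100p > 101 − 100p, i.e. when p > 101/200 = 0.505.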
What other direction could there be? If the prediction accuracy depends on the algorithm-to-be-predicted (as it would in the real world), then you could try to be an algorithm that is mispredicted in your favor… but a misprediction in your favor can only occur if you actually 2-box, so it only takes a modicum of accuracy before a 1-boxer who tries to be predictable is better off than a 2-boxer who tries to be unpredictable.
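To quantify "modicum" under the same 100:1 payoffs (a sketch; q and m are labels I'm introducing here, not from the original problem): say the predictable 1-boxer is read correctly with probability q, while the evasive 2-boxer manages to get mispredicted as a 1-boxer with probability m. Then

EV(predictable 1-boxer) = 100q
EV(unpredictable 2-boxer) = m·101 + (1−m)·1 = 1 + 100m

so the 1-boxer comes out ahead whenever q > m + 0.01: the predictor only has to read the cooperative agent one percentage point better than the evasive one fools it.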
I can’t see any other way for the limit to turn out.
I was raised atheist, and it wasn't difficult at all. In fact, I only know the religion (or lack thereof) of one of my childhood friends, which I learned not from any statements of belief per se, but from his complaints about having to learn Hebrew. As for everyone else I went to elementary school with: we did have occasional critiques of Santa, but it never occurred to me to extol atheism, because the topic of religion simply never came up. When I eventually learned of the great quantities of deluded people around, I had to infer that some of the kids who had never mentioned religion were probably religious, but it never seemed important enough to actually ask which ones.
I don't think I grew up in any great rationalist enclave; maybe my school was just really serious about separation of church and state?