I’ve been curious why all the formulations of Newcomb’s I’ve read give Omega/Predictor an error rate at all. Is it just to preempt reasoning along the lines of “well, he never makes an error, so he must be a god, so I one-box,” or is there a more subtle, problem-relevant reason that I’m missing?
It’s to forestall arguments about “impossible epistemic states”. The difference between a 1% error rate and a 0% error rate is only 1%, so your answer shouldn’t change (treating certainty as qualitatively special is what gets you Dutch-booked). If you don’t permit an error rate, many people will refuse to answer at all, solely on the grounds that certainty in infallibility is impossible.
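To make the “1% vs. 0% shouldn’t change your answer” point concrete, here is a quick expected-value sketch using the standard Newcomb payoffs (the $1,000,000 / $1,000 figures are the conventional ones, not stated in this thread), with `p` as the Predictor’s accuracy:

```python
# EV sketch for Newcomb's problem under the conventional payoffs:
# the opaque box holds $1,000,000 iff the Predictor predicted one-boxing;
# the transparent box always holds $1,000. p = Predictor's accuracy.

def ev_one_box(p):
    # Paid the million only when the prediction was correct.
    return p * 1_000_000

def ev_two_box(p):
    # Always get $1,000; get the million only when the Predictor erred.
    return 1_000 + (1 - p) * 1_000_000

for p in (0.99, 1.00):
    print(p, ev_one_box(p), ev_two_box(p))
```

One-boxing dominates at both accuracies, so moving from a 1% error rate to literal infallibility doesn’t flip the decision; it only widens an already-large gap.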
Well, the quoted version being used here posits that I have “knowledge of the Predictor’s infallibility” and doesn’t give an error rate. So there’s one counterexample, at least.
Of course, “knowledge” doesn’t mean I have a confidence of exactly 1: Predictor may be infallible, but I’m not. If Predictor is significantly more accurate than my own baseline, then for EV calculations the primary factor is my confidence in the things I “know”; Predictor’s exact error rate is noise by comparison.
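A toy model of that point (my own construction, not from the thread): let `c` be my confidence that my “knowledge” of the Predictor is right, `p` the Predictor’s accuracy if I’m right, and assume a chance-level 50% baseline if I’m wrong. Then my own confidence swings the effective accuracy more than the Predictor’s entire 0–1% error range does:

```python
# Toy model: effective probability that the prediction matches my choice,
# assuming a 50% chance-level baseline when my "knowledge" is mistaken.

def effective_accuracy(c, p, baseline=0.5):
    return c * p + (1 - c) * baseline

# Swing from varying Predictor's error rate across its whole 0-1% range:
delta_p = effective_accuracy(0.95, 1.00) - effective_accuracy(0.95, 0.99)

# Swing from varying my own confidence by just five points:
delta_c = effective_accuracy(0.95, 0.99) - effective_accuracy(0.90, 0.99)

print(delta_p, delta_c)
```

With these (illustrative) numbers, `delta_c` comes out more than twice `delta_p`: my uncertainty about my own epistemic state dominates, and the Predictor’s exact error rate is noise.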
In practice, I would say that if I somehow found myself in the state where I knew Predictor was infallible, the first thing I should do is ask myself how I came to know that, and whether, on reflection, I endorse my current confidence in that conclusion given those conditions.
But I don’t think any of that is terribly relevant. I mean, OK: I find myself instead in the state where I know Predictor is infallible, and I remember concluding a moment earlier that I reflectively endorse my current confidence in that conclusion. Re-evaluating yet again seems insane. What do I do next?
Omega has been observed to have a less than 1% error rate, I assume.
Yes, more at Wikipedia.