You’re just emphasizing the fact that you have full knowledge of the situation.
I currently believe that if I am ever in a position where I believe myself to be confronted with Newcomb’s problem, then no matter how convinced I am at the time, it will turn out to be a hoax in some way; for example, Omega will have limited prediction capability, or there won’t actually be $1 million in the box.
I’m not saying “you should two-box because the money is already in there”; I’m saying “maybe you should JUST take the $1000 box, because you’ve seen that money, and if you don’t think ve’s lying, you’re probably hallucinating.”
True: you will probably never be in the epistemic state in which you justifiably believe you are in Newcomb’s problem. Nevertheless, you will frequently face probabilistic variants of the problem, and a sane decision theory that wins in those cases will imply one-boxing in the limit as all the variables go to what they need to be to make it the literal Newcomb’s problem.
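To make the limit argument concrete, here is a minimal sketch (my own illustration, not from the original exchange) of the expected payoffs in a probabilistic variant where the predictor is correct with probability `p`, conditioning evidentially on one’s own choice; `p = 1` recovers the literal problem:

```python
# Probabilistic Newcomb variant: the predictor is right with probability p.
# Payoffs conditioned on the agent's own choice (evidential-style).
BIG, SMALL = 1_000_000, 1_000

def ev_one_box(p):
    # With probability p the predictor foresaw one-boxing and filled the big box.
    return p * BIG

def ev_two_box(p):
    # With probability 1 - p the predictor wrongly expected one-boxing,
    # so the big box is full anyway; the $1000 is taken either way.
    return (1 - p) * BIG + SMALL

for p in (0.5, 0.6, 0.9, 1.0):
    print(p, ev_one_box(p), ev_two_box(p))
```

Under this accounting, one-boxing already wins for any accuracy above about 50.05%, so the one-boxing recommendation in the limit is not an artifact of the predictor being perfect.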