If we’re going to question Omega at all, why not question whether he’s actually going to make a prediction, or whether there will be two boxes and not three, or how we know Omega is actually as good at predicting as is claimed? I think the principle of the Least Convenient Possible World applies. Assume a degree of honesty for Omega that is as inconvenient as possible for your argument.
If any condition makes one-boxing seem both crazy and correct, then there’s more to be discovered about our reasoning process.
I’m guessing that it’s the precommitment part of the problem that seems crazy. Suppose that to precommit to one-boxing, you gave Omega your word that you would one-box. Then, when faced with the empty transparent box, you can make the less crazy-seeming decision that not breaking your word is worth more than $1,000.
That seems rational to me: giving up your right to make a different decision in the future, even knowing there’s a 2% chance it will turn out worse, costs less than what the 98% chance of affecting Omega’s behavior is worth. It’s similar to giving your word to the driver in Parfit’s Hitchhiker problem.
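To put rough numbers on that trade-off, here’s a sketch, assuming the standard Newcomb payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one) and reading the 98% as Omega’s prediction accuracy:

E[precommit and one-box] = 0.98 × $1,000,000 + 0.02 × $0 = $980,000
E[keep your options open] = 0.98 × $1,000 + 0.02 × $1,001,000 = $21,000

On those assumptions, the expected cost of binding yourself is dwarfed by the expected gain from Omega’s prediction, which is why giving your word looks rational rather than crazy.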
The point I’m making is not about Omega’s trustworthiness, but about my beliefs.
If Omega is trustworthy AND I’m confident that Omega is trustworthy, then I will one-box. The reason I will one-box is that it follows from what Omega has said that one-boxing is the right thing to do, and I believe that Omega is trustworthy. It feels completely bizarre to one-box, but that’s because it’s completely bizarre for me to believe that Omega is trustworthy; if I have already assumed the latter, then one-boxing follows naturally. It follows just as naturally with a transparent box, or with a box full of pit vipers, or with a revolver which, I’m assured, will net me a million dollars when fired at my head. If I’m confident that Omega’s claims are true, I one-box (or fire the revolver, or whatever).
If Omega is not trustworthy AND I’m confident that Omega is trustworthy, then I will still one-box. It’s just that in that far-more-ordinary scenario, doing so is a mistake.
I cannot imagine a mechanism whereby I become confident that Omega is trustworthy, but if the setup of the thought experiment presumes that I am confident, then what follows is that I one-box.
No precommitment is required. All I have to “precommit” to is acting on the basis of what I believe to be true at the time. If that includes crazy-seeming beliefs about Omega, then the result will be crazy-seeming decisions. If those crazy-seeming beliefs are true, then the result will be crazy-seeming correct decisions.
I was assuming that Omega is a trustworthy agent.