The most charitable interpretation would just be that there happened to be a convincing technical theory which said you should two-box, because it took an even more technical theory to explain why you should one-box and this was not constructed, along with the rest of the edifice to explain what one-boxing means in terms of epistemic models, concepts of instrumental rationality, the relation to traditional philosophy’s ‘free will problem’, etcetera. In other words, they simply bad-lucked onto an edifice of persuasive, technical, but ultimately incorrect argument.
We could guess other motives for people to two-box, like memetic pressure for partial counterintuitiveness, but why go to that effort now? Better TDT writeups are on the way, and eventually we’ll get to see what the field says about the improved TDT writeups. If it’s important to know what other hidden motives might be at work, we’ll have a better idea after we negate the usually-stated motive of, “The only good technical theory we have says you should two-box.” Perhaps the field will experience a large conversion once presented with a good enough writeup and then we’ll know there weren’t any other significant motives.
Do you have an ETA on that? All my HPMoR anticipations combined don’t equal my desire to see this published and discussed.
August. (I’m writing one.)
This reply confused me at first because it seems to be answering a different (i.e., inverted) question to the one asked by the post.
One-boxing is normal and does not call out for an explanation. :)
People who aren’t crazy in a world that is mad? That certainly calls out for an explanation, in case it is reproducible!
I guess we need a charitable interpretation of “People are crazy, the world is mad”—people are very much crazier than they theoretically could be (insert discussion of free will).
I believe that people do very much more good (defined as life support for people) than harm, based on an argument from first principles. If people didn’t pour more negentropy into the human race than they take out, entropy would guarantee that the human race would cease to exist. The good that people do for themselves is included in the calculation.
What is the definition of TDT? Google wasn’t helpful.
Timeless decision theory. UDT = Updateless decision theory.
FWIW, when I first read about the problem I took two-boxing to be the obviously correct answer (I wasn’t a compatibilist back then), and I didn’t change my mind until I read Less Wrong.
Anecdotal evidence amongst people I’ve questioned falls into two main categories. The first is a failure to think the problem through formally: many simply focus on the fact that whatever is in the box remains in the box. The second is some variation of failure to accept the premise of an accurate prediction of their choice. This is actually counterintuitive to most people, and for others it is very hard to even casually contemplate a reality in which they can be perfectly predicted (and therefore, in their minds, have no ‘free will / soul’). Many conversations simply devolve into ‘Omega can’t actually make such an accurate prediction about my choice’ or ‘I’d normally two-box, so I’m not getting my million anyhow’.
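For that first category, the arithmetic people skip is small enough to write down. Here is a minimal sketch (my own illustration, not from the thread), assuming the standard payoffs of $1,000 in the transparent box and $1,000,000 in the opaque box, and a predictor that is correct with probability p; it is just the straightforward conditional-expectation calculation that one-boxers point to.

```python
# A minimal expected-value sketch of Newcomb's problem.
# Assumptions (not from the thread): $1,000 in the transparent box,
# $1,000,000 in the opaque box, and a predictor correct with probability p
# regardless of which choice you make.

def expected_payoff(one_box: bool, p: float) -> float:
    """Expected dollars, conditioning on the predictor's accuracy p."""
    million = 1_000_000
    thousand = 1_000
    if one_box:
        # You get the million only when the predictor (correctly) foresaw one-boxing.
        return p * million
    # Two-boxing: you always get the thousand, plus the million only when
    # the predictor (incorrectly) foresaw one-boxing.
    return thousand + (1 - p) * million

for p in (0.5, 0.51, 0.9, 0.99):
    print(p, expected_payoff(True, p), expected_payoff(False, p))
```

On this calculation one-boxing pulls ahead as soon as p exceeds roughly 0.5005, which is why the "accurate predictor" premise does so much of the work; the two-boxer's reply, of course, is that once the boxes are filled the contents no longer depend on your choice.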