[ epistemic status: commenting for fun, not seriously objecting. I like these posts, even if I don’t see how they further our understanding of decisions ]
Cool. I apologize if I came off a bit snarky earlier. Thanks for commenting! I read Eliezer’s post and was thinking about how to make a problem I like (even) more, and this was the result. Just for fun, mostly :)
We’re both wrong. It includes 1000 but not 1. Agreed with the “whatever” :)
Well, I defined the range, so I can’t really be wrong, haha ;) But I get your point: since we’re talking about primes and composites, a range starting at 2 would make more sense.
That’s the problem with underspecified thought experiments. I don’t see how Omega’s prediction is possible, and the reasons behind the 99% accuracy matter a lot. If she simply kills people who are about to defy her prediction, then one-boxing in 1 and two-boxing in 2 is right. If she’s only tried it on idiots who think their precommitment is binding, and yours isn’t, then tricking her is right in 1, and publicly two-boxing is still right in 2.
The accuracy is something I need to learn more about at some point, but I think it should simply be read as: “Whatever choice I make, there’s a 0.99 probability that Omega predicted it.”
BTW, I think you typo’d your description of one- and two-boxing. Traditionally, it’s “take box B or take both”, but you write “take box A or take both”.
Thanks Dagon! Fixing it.