Yeah, I didn’t know exactly what problem statement you were using (the most common formulation of the non-anthropic problem I know is this one), so I didn’t know “9” was particularly special.
Though the point at which I think randomization becomes better than honesty depends on my P(heads) and on what choice I think is honest, so which value of the randomization-reward is special is fuzzy.
I guess I’m not seeing any middle ground between “be honest,” and “pick randomization as an action,” even for naive CDT where “be honest” gets the problem wrong.
which made me worry that somewhere out there was a method which somehow comes up with 3⁄4.
Somewhere in Stuart Armstrong’s bestiary of non-probabilistic decision procedures you can get an effective 3⁄4 on the Sleeping Beauty problem, but I wouldn’t worry about it—that bestiary is silly anyhow :P
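To make the fuzziness concrete, here is a toy model with entirely made-up payoffs (not the payoffs from any standard formulation of the problem): suppose an honest guesser names whichever coin outcome they think more likely and scores 10 if right, 0 if wrong, while choosing "randomize" instead pays some fixed reward r. Then the reward at which randomization starts beating honesty shifts with P(heads), which is the point about no single randomization-reward being special.

```python
# Toy model: hypothetical payoffs, chosen only to illustrate that the
# honesty-vs-randomization crossover depends on P(heads).

def expected_honest(p_heads):
    """Expected payoff of honestly guessing the more likely outcome:
    10 points if the guess matches the coin, 0 otherwise."""
    return 10 * max(p_heads, 1 - p_heads)

def crossover_reward(p_heads):
    """Fixed randomization reward r at which the two strategies tie;
    any r above this makes randomization the better action."""
    return expected_honest(p_heads)

for p in (0.5, 1 / 3, 0.25):
    print(f"P(heads)={p:.3f}  crossover reward={crossover_reward(p):.3f}")
```

With these invented payoffs the indifference point moves from 5 at P(heads)=0.5 up toward 10 as the coin gets more lopsided, so which reward value counts as "special" genuinely depends on one's credence and on what one counts as the honest guess.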