I claim the problem is that our model is insufficient to capture our true beliefs.
There’s a difference in how we act between a coin flip (a true 50/50) and “are bloxors greeblic?” (a question we have no information about).
Suppose a friend came and said, “Yes, I know this one, the answer is (heads|yes).” For the coin flip you’d say “are you out of your mind?”, but for bloxors you’d say “OK, sure, you know better than I do.”
I’ve been idly pondering over this since Scott Alexander’s post. What is a better model?
One option would be to have another percentage, a meta-percentage: “What credence do I give to ‘this is an accurate model of the world’?” For coin flips, you’re 99.999% sure that 50% is a good model. For bloxors, you’re ~0% sure that 50% is a good model.
I don’t love it, but it’s better than presuming anything at the base level, I think.
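One way to make the meta-percentage idea concrete (this is my own sketch, not anything from Scott's post) is to represent each belief as a Beta distribution: the mean gives the base-level percentage, and the total pseudo-count plays the role of the meta-percentage, i.e. how much evidence the 50% rests on. The 9:1 strength of the friend's testimony below is an arbitrary illustrative choice.

```python
# Two beliefs with the same point estimate (50%) but very different
# confidence in that estimate, modeled as Beta(a, b) distributions.
# Beta(a, b) has mean a / (a + b); a + b measures how much evidence
# backs the estimate (the "meta-percentage").

def mean(a, b):
    return a / (a + b)

coin = (1000, 1000)   # 50%, backed by lots of pseudo-evidence
bloxors = (1, 1)      # 50%, backed by nothing (a flat prior)

assert mean(*coin) == mean(*bloxors) == 0.5  # identical point estimates

def update(belief, yes=9, no=1):
    """Fold in a friend's 'yes' as 9:1 evidence (arbitrary strength)
    by adding pseudo-counts."""
    a, b = belief
    return (a + yes, b + no)

print(mean(*update(coin)))     # barely moves: ~0.502
print(mean(*update(bloxors)))  # moves a lot:  ~0.833
```

The same testimony barely budges the coin belief but swings the bloxors belief, which matches the intuition in the comment: both start at "50%", yet only one of those 50%s is fragile.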
He doesn’t really say that. Here’s the original article:
https://www.astralcodexten.com/p/mr-tries-the-safe-uncertainty-fallacy
Also there was a long follow-up where he insists 50% is the right answer, but it’s subscriber-only:
https://www.astralcodexten.com/p/but-seriously-are-bloxors-greeblic