Aside from double-counting, here’s a problem: you should have just set your starting priors on the false and true statements as x and 1-x respectively, where x is the chance your whole ontology is screwed up, and you’d be equally well calibrated and much more precise. You’ve correctly identified that the perfect calibration on 90% is meaningless, but that’s because you explicitly introduced a gap between what you believe to be true and what you’re representing as your beliefs. Maybe that’s your point: that people are trying to earn a rationalist merit badge by obfuscating their true beliefs. But I think at least many people treat the exercise as a serious inquiry into how well-founded beliefs feel from the inside.
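To make the "equally well calibrated and much more precise" claim concrete, here's a minimal sketch (not your setup, just an illustration): it assumes your object-level judgments actually fail at some rate x (I've picked x = 0.05 arbitrarily), and compares reporting a flat 0.9 on statements you believe true against reporting 1 - x, using the Brier score as the precision measure. The numbers and the flat-0.9 baseline are my assumptions, not anything from your post.

```python
import random

random.seed(0)

x = 0.05       # assumed chance your whole ontology is screwed up
n = 100_000    # statements you believe to be true

# Each statement you believe true is actually true with probability 1 - x.
truths = [random.random() < 1 - x for _ in range(n)]

def brier(p, outcomes):
    """Mean squared error between a constant stated credence p and outcomes."""
    return sum((p - o) ** 2 for o in outcomes) / len(outcomes)

hit_rate = sum(truths) / n  # the calibration target, same for both strategies
for label, p in [("flat 0.9", 0.9), ("1 - x   ", 1 - x)]:
    print(f"{label}: stated {p:.2f}, actual rate {hit_rate:.3f}, Brier {brier(p, truths):.4f}")
```

Under these assumptions, the 1 - x report lands on the actual hit rate (calibrated by construction) and scores slightly better on the Brier measure, while the flat 0.9 only looks calibrated if the true failure rate happens to be 10%.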