I expect you can improve accuracy, in the sense of improving calibration, by reducing estimated precision and avoiding unwarranted overconfidence, even when you are not considering questions in detail, provided your intuitive estimation has an overconfidence problem, which seems to be common (most annoying in the form of “The solution is S!” for some promptly confabulated arbitrary S, when quantifying uncertainty isn’t even on the agenda).
(I feel the language of there being “positions” has epistemically unhealthy connotations, encouraging status quo bias with respect to beliefs, although it’s clear what you mean.)