this intervention seems pretty susceptible to not being able to distinguish between “feels useful” and “is useful”
I’m curious to hear more of your model here, even if all you have is something half-baked. Like, if you would be willing to ELI5 why this intervention seems susceptible in this way, or paint me a picture of someone thinking that it’s useful but being wrong.
There are millions of otherwise reasonable and successful people (often people I respect) telling me that they talk to ghosts or that loading up on Vitamin Omega Delta Seventeen Power Plus is the key to perfect health.
… I am surprised by this. Mostly, I’m surprised by you assessing those people as otherwise reasonable. I think I view people’s capacity for reason as less compartmentalized, or something, and would find myself suspicious of all of their other conclusions if they talked to ghosts or loaded up on VOD17P+. Like, this wouldn’t stop them from being right-for-the-wrong-reasons, but I just wouldn’t be able to call them reasonable.
I do note that while the set of CFAR participants is not stellar in some absolute sense, it contains a much higher base rate of healthy skepticism and epistemic diligence/hygiene than most groups. Like, CFAR participants on the whole are a self-selected “at least nominally cares about what’s actually true” group, and I think I weight their self-reports accordingly? I trust the CFAR participants somewhere in between my trust for [college juniors majoring in fields that require grounding and feedback loops] and [college professors teaching in such fields], as a rough attempt to calibrate.