No amount of deeply held belief prevents you from deciding to immediately start multiplying the odds ratio reported by your own intuition by 100 when formulating an endorsed-on-reflection estimate.
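For concreteness, here is what that operation amounts to numerically (a minimal sketch; the function name and the example input are mine, purely illustrative):

```python
def endorsed_probability(p_intuition: float, factor: float = 100.0) -> float:
    """Spell out the quoted operation: take the odds your intuition
    reports, multiply them by `factor`, and read off the resulting
    endorsed-on-reflection probability."""
    odds = p_intuition / (1.0 - p_intuition)  # intuition's odds ratio
    scaled_odds = factor * odds               # the "multiply by 100" step
    return scaled_odds / (1.0 + scaled_odds)

# Illustrative numbers: an intuitive 1% becomes roughly a coin flip
# once the odds are scaled 100x.
print(endorsed_probability(0.01))  # ~0.503
```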
Existing beliefs, memories, etc. would be the past-oriented propagation limiter, but there are also future-oriented propagation limiters, mainly memory space, retention, and cuing for habit integration. You can ‘decide’ to do that, but will you actually do it every time?
For most people, I also think that the initial connection from “hearing the advice” to “deciding internally to change the cognitive habit in a way that will actually do anything” is nowhere near automatic, and the set point for “how nice people seem to you by default” is deeply ingrained and hard to budge.
Maybe I should say more explicitly that the issue is advice being directional; any non-directional considerations don’t have this problem.
I have broad sympathy for “directional advice is dangerously relative to an existing state and tends to change the state in its delivery” as a heuristic. I don’t see the OP as ‘advice’ in a way where this becomes relevant, though: I see the heuristic as mainly useful when applied to more performative speech acts within a recognizable group of people, whereas I read the OP as introducing the phenomenon from a distance as a topic of discussion, covering a fuzzy but enormous group of people of whom ~100% are not reading any of this, which decouples it even further from the reader potentially changing their habits as a result.
And per above, I still see the expected level of frame inertia and the expected delivery impedance as both being so stratospherically high for this particular message that the latter half of the heuristic basically vanishes, and it still sounds to me like you disagree:
The last step from such additional considerations to the overall conclusion would then need to be taken by each reader on their own; they would need to decide for themselves whether they were previously overestimating or underestimating something, at which point it would cease to be the case that they are overestimating or underestimating it in a direction known to them.
Your description continues to return to the “at which point” formulation, which I think is doing an awful lot of work in presenting (what I see as) a long and involved process as though it were trivial. Or: you continue to describe what sounds like an eventual equilibrium state with the implication that it’s relevant in practice to whether this type of anti-inductive message has a usable truth value over time, but I think that for this message the equilibrium is mainly a theoretical distraction, because the time and energy scales at which it would appreciably occur are out of range. I’m guessing this comes from treating “readers of the OP” as the semi-coherent target group above, from having radically different intuitions about the usual fluidity of the habit change in question, or both (maybe related, if you think the latter follows from the former due to selection effects). Is one or both of those the main place where we disagree?
Anti-inductive advice isn’t dangerous or useless; it’s just a poor form for its content, and it’s better to formulate such things differently so that they don’t have this issue. The argument for why it’s a poor form doesn’t have this particular piece of advice (if it’s to be taken as advice at all) as a central example, but a particular thing not being a central example for some argument doesn’t weaken the argument when it’s considered in its own right.
As with the end of the world, the point isn’t that it happens within the next 20 years, but that it’s going to happen at some point; wasting another 20 years on not doing anything about it isn’t the right takeaway from predicting that it’ll take longer.