As soon as you convincingly argue that there is an underestimation, it goes away. So this form of advice shouldn’t hold: it’s anti-inductive, and its claims stop being true once observed. Any knowable bias immediately turns into unknowable miscalibration as soon as you notice it and adjust.
What’s useful is pointing out neglected questions, where people might’ve never attempted that first step of calibration, in whatever direction they would immediately adjust once they try. But if it’s not obvious to them in which direction they should adjust, concise advice shouldn’t help either.
As soon as you convincingly argue that there is an underestimation, it goes away.
… provided that it can be propagated to all the other beliefs, thoughts, etc. that it would affect.
In a human mind, I think the dense version of this looks similar to deep grief processing (because that’s a prominent example of where a high propagation load suddenly shows up and is really salient and important), while the sparse version looks more like a many-year-long sequence of “oh wait, I should correct for” moments, each of which has a high chance of not occurring if it’s crowded out. The sparse version is much more common (and even the dense version usually trails off into it to some degree).
There are probably intermediate versions of this where broad updates can occur smoothly but rapidly in an environment with (usually social) persistent feedback, like going through a training course, but that’s a lot more intense than just having something pointed out to you.
Possibly I went a little overboard with the simplifying qualifiers of “immediately”, which distracted from the point I was making, though I do think they apply to each individual claim. No amount of deeply held belief prevents you from deciding to immediately start multiplying the odds ratio reported by your own intuition by 100 when formulating an endorsed-on-reflection estimate, rather than waiting for the intuition itself to adjust, even as it’s important to have the intuition adjust eventually (and come back with any subtler second-order corrections).
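To make the odds-ratio arithmetic concrete, here’s a minimal sketch of what that kind of ×100 correction does to a probability estimate (the factor and the example probabilities are hypothetical, just for illustration):

```python
# Minimal sketch: what multiplying an intuitive odds ratio by a fixed factor
# (here a hypothetical 100) does to the endorsed-on-reflection probability.

def adjust(p_intuition: float, factor: float = 100.0) -> float:
    """Convert a probability to odds, scale the odds, convert back."""
    odds = p_intuition / (1.0 - p_intuition)   # e.g. 1% -> odds of about 0.0101
    adjusted_odds = odds * factor              # apply the chosen correction factor
    return adjusted_odds / (1.0 + adjusted_odds)

print(adjust(0.01))   # ~0.503: a 1% intuition becomes a ~50% reflective estimate
print(adjust(0.001))  # ~0.091: a 0.1% intuition becomes a ~9% reflective estimate
```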
Maybe I should say more explicitly that the issue is advice being directional, and any non-directional considerations don’t have this problem, such as actually forecasting something (in a way that’s not relative to the readers’ own beliefs). One constructive way of fixing the issue is then to discuss some piece of argument or evidence that would in some way contribute to a deeper conclusion, rather than discussing a directional change in the overall conclusion (which would have this anti-inductive character), or forecasting the overall conclusion directly (which might be too complicated or non-legible, either within a short communication or at all). The last step from such additional considerations to the overall conclusion would then need to be taken by each reader on their own; they would need to decide on their own if they were overestimating or underestimating something previously, at which point it will cease being the case that they are overestimating or underestimating it in a direction known to them.
So caveating the point about updates being immediate is fair enough, even as I don’t see how this caveat might affect my intended central claims about the issues with directional advice about levels of credence, if this advice is to be taken literally as a claim of fact. That might not even be the intended meaning in this case, though the criticism would still apply to the cases where the words of advice have the more straightforward meaning.
No amount of deeply held belief prevents you from deciding to immediately start multiplying the odds ratio reported by your own intuition by 100 when formulating an endorsed-on-reflection estimate
Existing beliefs, memories, etc. would be the past-oriented propagation limiter, but there are also future-oriented propagation limiters, mainly memory space, retention, and cuing for habit integration. You can ‘decide’ to do that, but will you actually do it every time?
For most people, I also think that the initial connection from “hearing the advice” to “deciding internally to change the cognitive habit in a way that will actually do anything” is nowhere near automatic, and the set point for “how nice people seem to you by default” is deeply ingrained and hard to budge.
Maybe I should say more explicitly that the issue is advice being directional, and any non-directional considerations don’t have this problem
I have broad sympathy for “directional advice is dangerously relative to an existing state and tends to change the state in its delivery” as a heuristic. I don’t see the OP as ‘advice’ in a way where this becomes relevant, though; I see the heuristic as mainly useful when applied to more performative speech acts within a recognizable group of people, whereas I read the OP as introducing the phenomenon from a distance as a topic of discussion, covering a fuzzy but enormous group of people of which ~100% are not reading any of this, decoupling it even further from the reader potentially changing their habits as a result.
And per above, I still see the expected level of frame inertia and the expected delivery impedance as both being so stratospherically high for this particular message that the latter half of the heuristic basically vanishes, and it still sounds to me like you disagree:
The last step from such additional considerations to the overall conclusion would then need to be taken by each reader on their own; they would need to decide on their own if they were overestimating or underestimating something previously, at which point it will cease being the case that they are overestimating or underestimating it in a direction known to them.
Your description continues to return to the “at which point” formulation, which I think is doing an awful lot of work in presenting (what I see as) a long and involved process as though it were a trivial one. Or: you continue to describe what sounds like an eventual equilibrium state with the implication that it’s relevant in practice to whether this type of anti-inductive message has a usable truth value over time, but I think that for this message, the equilibrium is mainly a theoretical distraction because the time and energy scales at which it would appreciably occur are out of range. I’m guessing this is from some combination of treating “readers of the OP” as the semi-coherent target group above and/or having radically different intuitions on the usual fluidity of the habit change in question—maybe related, if you think the latter follows from the former due to selection effects? Is one or both of those the main place where we disagree?
Anti-inductive advice isn’t dangerous or useless; it’s just a poor form for its content, and it’s better to formulate such things differently so that they don’t have this issue. The argument for why it’s poor form doesn’t have this particular piece of advice (if it’s to be taken as advice at all) as a central example, but a particular thing not being a central example for some argument doesn’t weaken the argument when it’s considered in its own right.
Like with the end of the world, the point isn’t that it’s something that happens sooner than in 20 years, but that it’s going to happen at some point, and wasting another 20 years on not doing anything about it isn’t the takeaway from predicting that it’ll take longer.
As soon as you convincingly argue that there is an underestimation, it goes away
It’s not a belief. It’s an entire cognitive profile that affects how they relate to and interact with other people, and the wrong beliefs are adaptive. For nice people, treating other people you know as nice-until-proven-evil opens up a much wider spectrum of cooperative interactions. For evil people, genuinely believing the people around you are just as self-interested gives you a bit more cover to be self-interested too.
You might be inferring an implicit “all” before “bad[/nice] people” where an implicit “many” was intended.