I agree, it’s not quite right. Signalling equilibria in which mostly ‘true’ signals are sent can evolve in the complete absence of a concept of truth, or even in the absence of any model-based reasoning behind the signals at all. Similarly, beliefs can manage to be mostly true without any explicit modeling of what beliefs are or a concept of truth.
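To make that concrete, here is a minimal sketch (Python, with illustrative parameters of my own choosing, not anything from the thread) of a Lewis-style sender–receiver game trained by simple Roth–Erev reinforcement. The agents only accumulate payoffs; neither has any representation of truth, beliefs, or the other's model, yet signals end up (mostly) tracking states:

```python
import random

# Minimal Lewis signalling game: 2 states, 2 signals, 2 acts.
# Sender and receiver learn by Roth-Erev reinforcement (urn weights);
# neither has any notion of "truth" -- only accumulated payoffs.
# All parameters below are illustrative.

N_STATES = N_SIGNALS = N_ACTS = 2
ROUNDS = 20000

# sender_w[state][signal] and receiver_w[signal][act] are urn weights.
sender_w = [[1.0] * N_SIGNALS for _ in range(N_STATES)]
receiver_w = [[1.0] * N_ACTS for _ in range(N_SIGNALS)]

def draw(weights):
    """Sample an index proportionally to its weight."""
    return random.choices(range(len(weights)), weights=weights)[0]

for _ in range(ROUNDS):
    state = random.randrange(N_STATES)
    signal = draw(sender_w[state])
    act = draw(receiver_w[signal])
    if act == state:                      # common interest: both rewarded
        sender_w[state][signal] += 1.0    # reinforce whatever just worked
        receiver_w[signal][act] += 1.0

# After training, each state is (mostly) mapped to its own signal and
# decoded correctly, even though no agent ever modelled truth or beliefs.
for s in range(N_STATES):
    probs = [w / sum(sender_w[s]) for w in sender_w[s]]
    print(f"state {s}: signal probabilities {probs}")
```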
What’s interesting to me is how the separation of concerns emerges at all.
Moreover, there isn’t an obvious decision-theoretic reason for someone to avoid thinking about possibilities they don’t want to come true (wouldn’t you want to think about such possibilities precisely in order to understand them and steer away from them?). So such perceived incentives are indicative of perverse anti-epistemic social pressures, e.g. a pressure to create a positive impression of how one’s life is going regardless of how well it is actually going.
It does seem like it’s largely that, but I’m fairly uncertain. I think there’s also a self-coordination issue (due to hyperbolic discounting and other temporal inconsistencies). You might need to believe that a plan will work with very high probability in order to go through with every step rather than giving in to short-term temptations. (Though, why evolution crafted organisms which do something-like-hyperbolic-discounting rather than something more decision-theoretically sound is another question.)
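As a toy illustration of that self-coordination problem (a sketch with made-up numbers): under a hyperbolic discount V = A/(1 + kD), a plan’s large delayed payoff dominates a small temptation while both are far off, but loses to it once the temptation is imminent, so the self that starts the plan can’t trust the self that has to finish it:

```python
# Hyperbolic discounting V = A / (1 + k*D) produces preference reversals:
# evaluated far in advance, the long-run payoff wins; evaluated at the
# moment of temptation, the small immediate reward wins. Numbers are
# purely illustrative.

def hyperbolic_value(amount, delay, k=1.0):
    """Perceived value of a reward `delay` time-units away."""
    return amount / (1.0 + k * delay)

PLAN_REWARD, PLAN_DAY = 100.0, 25.0        # the plan pays off at day 25
TEMPTATION_REWARD, TEMPTATION_DAY = 20.0, 20.0  # a temptation at day 20

for today in (0, 20):
    plan_v = hyperbolic_value(PLAN_REWARD, PLAN_DAY - today)
    tempt_v = hyperbolic_value(TEMPTATION_REWARD, TEMPTATION_DAY - today)
    choice = "stick to the plan" if plan_v > tempt_v else "give in"
    print(f"day {today}: plan={plan_v:.2f}, temptation={tempt_v:.2f} -> {choice}")

# day 0:  plan=3.85,  temptation=0.95  -> stick to the plan
# day 20: plan=16.67, temptation=20.00 -> give in
```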
You might need to believe that a plan will work with very high probability in order to go through with every step rather than giving in to short-term temptations.
Why doesn’t conservation of expected evidence apply? (How could you expect thinking about something to predictably shift your belief?)
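(For reference, the theorem being invoked, in its standard form: the prior must equal the expectation of the posterior, so a Bayesian cannot anticipate the direction of an update.)

```latex
% Conservation of expected evidence: the prior equals the expected posterior.
P(H) \;=\; P(E)\,P(H \mid E) \;+\; P(\neg E)\,P(H \mid \neg E)
      \;=\; \mathbb{E}_{E}\!\big[\,P(H \mid E)\,\big]
```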
In the scenario I’m imagining, it doesn’t apply because you don’t fully realize/propagate the fact that you’re filtering evidence for yourself. This is partly because the evidence-filtering strategy is smart enough to filter out evidence about its own activities, and partly just because agency is hard and you don’t fully propagate everything by default.
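A toy version of that scenario (a sketch with assumed numbers, not anything from the comment): the coin is actually fair, a filter only ever shows you a heads flip when one occurred in the last batch, and a naive updater treats each shown flip as an ordinary random flip. Its belief in “the coin is heads-biased” drifts up predictably, while an updater who conditions on the filter barely moves; conservation of expected evidence only fails from the perspective of the agent that hasn’t propagated the existence of the filter.

```python
import random

# Toy model of self-filtered evidence.
# Hypotheses: the coin is heads-biased (p=0.8) vs fair (p=0.5).
# Reality: the coin is fair. Each round, 10 flips happen, but the "filter"
# shows the agent a heads flip whenever one occurred (and a tails flip
# only if all 10 came up tails). All numbers are illustrative.

P_BIASED, P_FAIR = 0.8, 0.5
BATCH = 10
ROUNDS = 50

def shown_flip(p_heads):
    flips = [random.random() < p_heads for _ in range(BATCH)]
    return any(flips)  # True = shown a heads flip, False = shown tails

naive = 0.5   # P(biased), updating as if each shown flip were a random flip
aware = 0.5   # P(biased), conditioning on how the filter actually works

for _ in range(ROUNDS):
    heads = shown_flip(P_FAIR)  # the coin is actually fair

    # Naive update: treats the shown flip as an ordinary coin flip.
    lik_b = P_BIASED if heads else 1 - P_BIASED
    lik_f = P_FAIR if heads else 1 - P_FAIR
    naive = naive * lik_b / (naive * lik_b + (1 - naive) * lik_f)

    # Aware update: likelihood of "shown heads" = P(at least one head in 10).
    lik_b = 1 - (1 - P_BIASED) ** BATCH if heads else (1 - P_BIASED) ** BATCH
    lik_f = 1 - (1 - P_FAIR) ** BATCH if heads else (1 - P_FAIR) ** BATCH
    aware = aware * lik_b / (aware * lik_b + (1 - aware) * lik_f)

print(f"naive P(biased) after {ROUNDS} rounds: {naive:.3f}")  # drifts toward 1
print(f"aware P(biased) after {ROUNDS} rounds: {aware:.3f}")  # stays near 0.5
```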
I’m intending this mostly as an ‘internal’ version of “perverse anti-epistemic social pressures”. There’s a question of why this would exist at all (since it doesn’t seem adaptive). My current guess is some mixture of perverse anti-epistemic social pressures acting on evolutionary timescales, and (again) “agency is hard”: it’s plausible that this kind of thing emerges accidentally from otherwise useful mental architectures, and doesn’t have an easy, universally applicable fix.