I strongly agree that separation of concerns is critical, and especially the epistemic vs. instrumental separation of concerns.
There wouldn’t even be a concept of ‘belief’ or ‘claim’ if we didn’t separate out the idea of truth from all the other reasons one might believe/claim something, and optimize for it separately.
This doesn’t seem quite right. Even if everyone’s beliefs are trying to track reality, it’s still important to distinguish what people believe from what is true (see: Sally-Anne test). Similarly for claims. (The connection to simulacra is pretty clear; there’s a level-1 notion of a belief (i.e. a property of someone’s world model, the thing controlling their anticipations and which they use to evaluate different actions), and also higher-level simulacra of level-1 beliefs.)
Moreover, there isn’t an obvious decision-theoretic reason why someone might not want to think about possibilities they don’t want to come true (wouldn’t you want to think about such possibilities, in order to understand and steer away from them?). So, such perceived incentives are indicative of perverse anti-epistemic social pressures, e.g. a pressure to create a positive impression of how one’s life is going regardless of how well it is actually going.
I agree, it’s not quite right. Signalling equilibria in which mostly ‘true’ signals are sent can evolve in the complete absence of a concept of truth, or even in the absence of any model-based reasoning behind the signals at all. Similarly, beliefs can manage to be mostly true without any explicit modeling of what beliefs are or a concept of truth.
What’s interesting to me is how the separation of concerns emerges at all.
Moreover, there isn’t an obvious decision-theoretic reason why someone might not want to think about possibilities they don’t want to come true (wouldn’t you want to think about such possibilities, in order to understand and steer away from them?). So, such perceived incentives are indicative of perverse anti-epistemic social pressures, e.g. a pressure to create a positive impression of how one’s life is going regardless of how well it is actually going.
It does seem like it’s largely that, but I’m fairly uncertain. I think there’s also a self-coordination issue (due to hyperbolic discounting and other temporal inconsistencies). You might need to believe that a plan will work with very high probability in order to go through with every step rather than giving in to short-term temptations. (Though, why evolution crafted organisms which do something-like-hyperbolic-discounting rather than something more decision-theoretically sound is another question.)
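To make the hyperbolic-discounting point concrete, here is a minimal sketch of the standard preference reversal (the reward sizes and discount rate are invented for illustration): viewed from a distance the larger-later reward looks better, but at the moment of choice the smaller-sooner one wins, which is the sense in which a plan adopted in advance gets abandoned step by step.

```python
# Toy illustration of hyperbolic-discounting preference reversal.
# All numbers are arbitrary; this is a sketch, not a model of anyone's claims.

def hyperbolic(value, delay, k=1.0):
    """Hyperbolic discount: value / (1 + k * delay)."""
    return value / (1.0 + k * delay)

small, large = 10.0, 30.0          # small-sooner vs. large-later reward
gap = 5                            # the large reward arrives 5 steps after the small one

for delay_to_small in (10, 0):     # evaluate far in advance, then at the moment of choice
    v_small = hyperbolic(small, delay_to_small)
    v_large = hyperbolic(large, delay_to_small + gap)
    choice = "large-later" if v_large > v_small else "small-sooner"
    print(f"delay to small reward = {delay_to_small:2d}: "
          f"small={v_small:.2f}, large={v_large:.2f} -> prefer {choice}")

# From a distance (delay 10) the large-later reward wins; at the moment of choice
# (delay 0) the small-sooner reward wins. An agent like this may 'need' inflated
# confidence in the plan to push through every step.
```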
You might need to believe that a plan will work with very high probability in order to go through with every step rather than giving in to short-term temptations.
Why doesn’t conservation of expected evidence apply? (How could you expect thinking about something to predictably shift your belief?)
In the scenario I’m imagining, it doesn’t apply because you don’t fully realize/propagate the fact that you’re filtering evidence for yourself. This is partly because the evidence-filtering strategy is smart enough to filter out evidence about its own activities, and partly just because agency is hard and you don’t fully propagate everything by default.
I’m intending this mostly as an ‘internal’ version of “perverse anti-epistemic social pressures”. There’s a question of why this would exist at all (since it doesn’t seem adaptive). My current guess is some mixture of perverse anti-epistemic social pressures acting over evolutionary timescales, and (again) “agency is hard”—it’s plausible that this kind of thing emerges accidentally from otherwise useful mental architectures, and doesn’t have an easy and universally applicable fix.
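To make the evidence-filtering point concrete, here is a toy simulation (all probabilities are made up): an honest updater’s expected final belief stays at the prior, as conservation of expected evidence requires, while an agent that silently drops unfavourable evidence, without modelling that it is doing so, predictably drifts toward the conclusion it favours.

```python
# Sketch: conservation of expected evidence vs. self-filtered evidence.
# Probabilities are arbitrary; this illustrates the mechanism, nothing more.
import random

def posterior(prior, lik_h, lik_not_h):
    """Bayes update of P(H) given an observation with the stated likelihoods."""
    num = prior * lik_h
    return num / (num + (1 - prior) * lik_not_h)

random.seed(0)
PRIOR = 0.5
P_GOOD_IF_H, P_GOOD_IF_NOT_H = 0.7, 0.3   # 'good news' is evidence for H

def average_final_belief(filtering, worlds=2000, observations=20):
    beliefs = []
    for _ in range(worlds):
        h_true = random.random() < PRIOR
        belief = PRIOR
        for _ in range(observations):
            good = random.random() < (P_GOOD_IF_H if h_true else P_GOOD_IF_NOT_H)
            if filtering and not good:
                continue                   # bad news silently dropped, belief untouched
            if good:
                belief = posterior(belief, P_GOOD_IF_H, P_GOOD_IF_NOT_H)
            else:
                belief = posterior(belief, 1 - P_GOOD_IF_H, 1 - P_GOOD_IF_NOT_H)
        beliefs.append(belief)
    return sum(beliefs) / worlds

print("honest updating, average final belief:", round(average_final_belief(False), 3))  # ~0.5 (the prior)
print("self-filtered,   average final belief:", round(average_final_belief(True), 3))   # well above the prior
```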
I don’t understand the OP’s point at all, but just wanted to remark on
there isn’t an obvious decision-theoretic reason why someone might not want to think about possibilities they don’t want to come true
There absolutely are reasons like that. Beliefs affect “reality”, like in the folk theorem. If everyone believes that everyone else cooperates, then everyone would cooperate. (And defectors get severely punished.)
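A back-of-the-envelope sketch of that folk-theorem logic (the payoffs and discount factor are invented for illustration): if everyone expects defection to be met with permanent punishment, as in a grim-trigger equilibrium, then cooperating forever beats defecting once.

```python
# Toy grim-trigger calculation in a repeated prisoner's dilemma.
# Payoff values and the discount factor are made up for illustration.
REWARD, TEMPTATION, PUNISHMENT = 3.0, 5.0, 1.0   # mutual C, lone D, mutual D
DELTA = 0.9                                      # discount factor (patience)

# Discounted payoff of cooperating forever while everyone else cooperates.
coop_forever = REWARD / (1 - DELTA)

# Discounted payoff of defecting once, then facing mutual defection forever.
defect_once = TEMPTATION + DELTA * PUNISHMENT / (1 - DELTA)

print(f"cooperate forever: {coop_forever:.1f}")   # 30.0
print(f"defect once:       {defect_once:.1f}")    # 14.0
# The shared belief that defectors get punished is exactly what makes
# cooperation the better choice here.
```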
If I had to summarize: “Talking about feelings is often perceived as a failure of separation-of-concerns by people who are skilled at various other cognitive separations-of-concerns; but, it isn’t necessarily. In fact, if you’re really good at separation-of-concerns, you should be able to talk about feelings a lot more than otherwise. This is probably just a good thing to do, because people care about other people’s feelings.”
Ah, that makes sense. Talking about feelings, to a degree, is essential to being human and being relatable. If anything, people’s minds are 90% or more about feelings.
Considering a possibility doesn’t automatically make you believe it. Why not think about the different possible Nash equilibria in order to select the best one?
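As a small illustration of that kind of analysis (the game and payoffs are hypothetical): enumerating the pure Nash equilibria of a stag hunt and then picking the payoff-best one is just a computation, not a commitment to any particular equilibrium.

```python
# Sketch: enumerate pure-strategy Nash equilibria of a symmetric stag hunt
# and select the payoff-best one. Payoffs are invented for illustration.
from itertools import product

# Row player's payoffs; the column player's are the transpose (symmetric game).
PAYOFF = {("stag", "stag"): 4, ("stag", "hare"): 0,
          ("hare", "stag"): 3, ("hare", "hare"): 3}
ACTIONS = ("stag", "hare")

def is_nash(a_row, a_col):
    # Neither player can gain by unilaterally deviating.
    row_ok = all(PAYOFF[(a_row, a_col)] >= PAYOFF[(d, a_col)] for d in ACTIONS)
    col_ok = all(PAYOFF[(a_col, a_row)] >= PAYOFF[(d, a_row)] for d in ACTIONS)
    return row_ok and col_ok

equilibria = [p for p in product(ACTIONS, ACTIONS) if is_nash(*p)]
best = max(equilibria, key=lambda p: PAYOFF[p] + PAYOFF[(p[1], p[0])])
print("pure Nash equilibria:", equilibria)   # [('stag', 'stag'), ('hare', 'hare')]
print("payoff-best equilibrium:", best)      # ('stag', 'stag')
```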
Yep, thinking about different possibilities changes reality. In this particular case it makes things worse, since mutual cooperation (super-rationality, the twin prisoner’s dilemma, etc.) by definition has the highest payoff in symmetric games.
Wait. Some thoughts enable actions, which can change reality. Some thoughts may be directly detectable and thereby change reality (say, pausing before answering a question, or viewers watching an fMRI as you’re thinking different things). But very few hypothetical and counterfactual thoughts in today’s humans actually affect reality in either of these ways.
Are you claiming that someone who understands cooperation and superrationality can change reality by thinking more about it than usual, or just that knowledge increases the search space and selection power over potential actions?
In practice, a lot of things about one person’s attitudes toward cooperation ‘leak out’ to others (as in, are moderately detectable). This includes reading things like pauses before making decisions, which means that merely thinking about an alternative can end up changing the outcome of a situation.