I can’t emphasize enough how important the thing you’re mentioning here is, and I believe it points to the crux of the issue more directly than most other things that have been said so far.
We can weakman postmodernism as making basically the same claim, but that doesn’t change the fact that a lot of people are running an algorithm in their head with the textual description “there is no outside reality, only things that happen in my mind.” This algorithm seems to produce different behaviors than the algorithm “outside reality exists and is important.” I think the first tends to produce behaviors that are a lot more dangerous than the second, even though it’s always possible to construct philosophical arguments that make either one seem more likely to be “true.” It’s crucial to realize that not everyone is running the perfectly steelmanned version of these algorithms, the ones that update our beliefs based on observations of the very processes by which we update our beliefs, and such things are very tricky to get right.
Even though it’s valid to make observations of the form “I observe that I am running a process that produces the belief X in me,” it is risky to create a social norm that treats such statements as superior to statements like “X is true,” because such a norm creates a tendency to assign less validity to statements like “X is true.” In other words, the norm can itself become a process that produces the belief “X is not true,” when we don’t necessarily want to move our beliefs about X just because we begin to understand how the belief-forming processes work. It’s very easy to slide from “X is true” to “I observe I believe X is true” to “I observe there are social and emotional influences on my beliefs” to “there are social and emotional influences on my belief in X” to, finally, “X is not true,” and I can’t help but feel a mistake is being made somewhere in that chain.