Wait, really? That’s totally not how I read it. I thought the simulacra levels were about divergence between public and private beliefs. People start realizing that the ‘shared map’ is chosen for reasons other than correspondence with territory, and begin to explicitly model the map-sharing processes separately from the map-validating ones.
That process is incremental, and I think the part you just described (where people “realize the shared map is chosen for reasons other than correspondence”) is what’s going on at levels 3-4.
But really, how does this framework work when the level-1 beliefs are false? One example is a church-heavy township where everyone actually believes their god is real (level 1: private and public beliefs match); over time people start to question, but not publicly (level 2); then they start finding reasons that religion was a useful cohesive belief, without actually believing it (level 3?).
Is there a framework for staying in level 1 but being less wrong, or for including others’ beliefs in your level-1 model without getting stuck in higher levels where you forget that there IS a truth?
The intent of level 1, as I understand it, is that you just say “this seems false?” and they say “why?” and you say “because X”, and that either works or doesn’t because of object-level beliefs about the world. (I.e., people at level 1 have an understanding of having been mistaken.)
I think I’m still confused, or maybe stuck at a low (or maybe high! unsure how to use this...) level. I do my best to make my private maps and models predictive of future experiences. I have no expectation that I can communicate these private beliefs very well to most of humanity. I am quite willing to understand other individuals’ and groups’ statements of belief as a mix of signaling, social cohesion, manipulation, and true beliefs. I participate in communication acts for all of these purposes as well.
Does this mean I’m simultaneously at different levels for different purposes?
There’s an important difference between:
(1) Participating in fictions or pseudorepresentative communication (i.e. bullshit) while being explicitly aware of it, at least potentially: if someone asked you whether it meant anything, you’d give an unconfused answer. This is a sort of reflective, rational-postmodernist level 1.
(2) Adjusting your story for nonepistemic reasons but feeling compelled to rationalize the adjustments in a consistent way, which makes your nonepistemic narratives sticky and contaminates your models of what’s going on. This is what Rao calls “clueless” in The Gervais Principle.
(3) Acting from a fundamentally social metaphysics like a level-3/4 player, willing to generate sophisticated “logical” rationales where convenient, but not constraining your actions based on your story. This is what cluster thinking cashes out as, as far as I can tell.
Hmm. I still suspect I’m more fluid than these models imply. I think I’m mostly a mix of cluster thinking and rational-postmodernist level 1: I recognize multiple conflicting models and shift my weights between them for private beliefs, while using a different set of weights for public beliefs, since shifting others’ beliefs is relative to my model of their current position rather than to absolute prediction levels (Aumann doesn’t apply to humans). And I do recognize that I will experience only one future, which I call “objective”. I watch for #2, but I’m sure I’m sometimes susceptible (stupid biological computing substrate!).
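To make the “shifting weights between conflicting models” part concrete, here’s a minimal sketch of cluster thinking as Bayesian model averaging. The model names, prior weights, and predicted probabilities are illustrative assumptions of mine, not anything anyone above endorsed:

```python
# Minimal sketch: cluster thinking as Bayesian model averaging.
# All model names, prior weights, and predicted probabilities below
# are illustrative assumptions, not values from the discussion.

# Prior weights over conflicting private models (sum to 1).
weights = {"model_a": 0.5, "model_b": 0.3, "model_c": 0.2}

# Each model's predicted probability that some observable event occurs.
predictions = {"model_a": 0.9, "model_b": 0.4, "model_c": 0.1}

def predict(weights, predictions):
    """Private belief: weighted mixture of the models' predictions."""
    return sum(weights[m] * predictions[m] for m in weights)

def update(weights, predictions, event_occurred):
    """Shift weight toward the models that predicted what happened."""
    likelihood = lambda p: p if event_occurred else 1.0 - p
    unnormalized = {m: w * likelihood(predictions[m]) for m, w in weights.items()}
    total = sum(unnormalized.values())
    return {m: u / total for m, u in unnormalized.items()}

print(predict(weights, predictions))          # mixed private belief: 0.59
weights = update(weights, predictions, True)  # the event happens: reweight
print(weights)                                # model_a's weight rises to ~0.76
```

On this picture, the “different set of weights for public beliefs” would just be a second weights dict updated by a different rule (e.g. relative to where you model your audience as standing), which is exactly why Aumann-style agreement doesn’t fall out of it.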