The intent of level 1, as I understand it, is that you just say “this seems false?”, they say “why?”, and you say “because X”, and that either works or doesn’t based on object-level beliefs about the world. (I.e., people at level 1 have an understanding of having been mistaken.)
I think I’m still confused, or maybe stuck at a low (or maybe high! unsure how to use this...) level. I do my best to make my private maps and models predictive of future experiences. I have no expectation that I can communicate these private beliefs very well to most of humanity. I am quite willing to understand other individuals’ and groups’ statements of belief as a mix of signaling, social cohesion, manipulation, and true beliefs. I participate in communication acts for all of these purposes as well.
Does this mean I’m simultaneously at different levels for different purposes?
There’s an important difference between:
(1) Participating in fictions or pseudorepresentative communication (i.e. bullshit) while being explicitly aware of it (at least potentially: if someone asked you whether it meant anything, you’d give an unconfused answer). This is a sort of reflective, rational-postmodernist level 1.
(2) Adjusting your story for nonepistemic reasons but feeling compelled to rationalize the adjustments in a consistent way, which makes your nonepistemic narratives sticky and contaminates your models of what’s going on. This is what Rao calls “Clueless” in The Gervais Principle.
(3) Acting from a fundamentally social metaphysics, like a level 3/4 player: willing to generate sophisticated “logical” rationales where convenient, but not constraining your actions based on your story. This is what cluster thinking cashes out as, as far as I can tell.
Hmm. I still suspect I’m more fluid than these models imply. I think I’m mostly a mix of cluster thinking (I recognize multiple conflicting models and shift my weights between them for private beliefs, while using a different set of weights for public beliefs, because shifting others’ beliefs is relative to my model of their current position, not to absolute prediction levels; Aumann doesn’t apply to humans) and the recognition that I will experience only one future, which I call “objective”, which is pretty much rational-postmodernist level 1. I watch for #2, but I’m sure I’m sometimes susceptible (stupid biological computing substrate!).