A Crisper Explanation of Simulacrum Levels

I’ve read the previous work on Simulacrum Levels, and I’ve seen people express some confusion regarding how they work. I’d had some of those confusions myself when I first encountered the concept, and I think they were caused by insufficiently crisp definitions.

The extant explanations didn’t seem to offer a proper bottom-up, fundamentals-first mechanism for how simulacrum levels come to exist. Why do they have the specific features and quirks that they have, and not others? Why is the form being ascribed to them the inevitable form they take, rather than an arbitrary one? Why can’t Level 4 agents help but act psychopathic? Why is there no Level 5?

I eventually formed a novel-seeming model of how they work (years ago, in fact), and it now occurs to me that it may be useful to others as well.

It aims to preserve all the important features of @Zvi’s definitions while explicating them: fitting a proper gears-level, mechanistic explanation to them. There are some marginal differences in where I draw the boundaries, but it should still essentially agree with Zvi’s model.


Groundwork

In some contexts, recursion levels become effectively indistinguishable past recursion level 3. Not exactly a new idea, but it’s central to my model, so I’ll include an example for completeness’ sake.

Consider the case of cognition.

  1. Cognition is thinking about external objects and processes. “This restaurant is too cramped.”

  2. Metacognition is building your model of your own thinking. What biases it might have, how to reason about object-level topics better. “I feel that this restaurant is too cramped because I dislike large groups of people.”

  3. Meta-metacognition is analysing your model of yourself: whether you’re inclined to embellish or cover up certain parts of your personality, etc. “I’m telling myself the story about disliking large groups of people because it feels like a more glamorous explanation for disliking this restaurant than the real one. I dislike it out of contrariness: there are many people here because it’s popular, and I instinctively dislike things that are mainstream.”

  4. Meta-meta-metacognition would, then, be “thinking about your analyses of your self-centered biases”. But that’s just meta-metacognition again: analysing how you’re inclined to see yourself. “I’m engaging in complicated thinking about the way I think about myself because I want to maintain the self-image of a clever, self-aware person.”

    • There is a similar case for meta-metacognition being the same thing as metacognition, but I think there’s a slight difference between levels 2 and 3 that isn’t apparent between 3 and 4 onward.[1]

Next: In basically any society, there are three distinct “frameworks” one operates with: physical reality, other people, and the social reality. Each subsequent framework contains a recursive model of the previous one:

  1. The physical reality is.

  2. People contain their own models of reality.

  3. People’s social images are other people’s models of a person: i. e., models of models of reality.[2]

Recursion levels 1, 2, and 3. There’s no meaningful “level 4” here: “a model of a person’s social image” means “the perception of a person’s appearance”, which is still just “a person’s appearance”. You can get into some caveats here, but it doesn’t change much.[3]

Any signal is thus viewed in each of these frameworks, giving rise to three kinds of meaning any signal can communicate:

  1. What it literally says: viewed in the context of the physical reality.

  2. What you think the speaker is trying to convince you of, and why: viewed in the context of your model of the speaker.

  3. How it affects your and the speaker’s social images: viewed in the context of your model of others’ model of you and the speaker.

So far, that’s fine: all of these layers of communication are useful and have a place in any functioning society.

Problems start when a society starts discarding lower layers of communication. This is what moving to a higher Simulacrum Level means: discarding a lower level in favour of a higher one. A “pure” Level 1 society only cares about statements’ literal truth; a Level 2 society cares about the people behind the statements; a Level 3 society cares about how statements influence people’s perceptions of other people. Levels 0 and 4 are special: Level 0 doesn’t have a concept of communication, and Level 4 has gone so deep into the recursion that it has abandoned communication altogether.

I’ll be using the terms “society” and “agent” because that’s how I think about this model, but I mean that in a broad sense. An “agent” could be anything from a person to a nation, and “society” could be any group of people embedded in any context (including a wider society).

Additionally, I should note that the same agent could be occupying different Simulacrum Levels depending on what context they’re in or what people they’re with (a person could be perfectly nice and human to their family, but be a psychopath-like Level 4 when it comes to anything tangentially politics-oriented).


Level 0

Understanding it is crucial for understanding Level 4.

For Level 0 agents, there are no symbols, no models, and no communication, only actions aimed at directly imposing changes on the physical world. Inasmuch as it concerns interactions with other agents, it is a level of pure conflict.

You see a lion, you run away. You see food, you take it. You meet an enemy, you kill them. Your actions are asymbolic: they don’t stand for anything, there’s no expectation that they’ll be seen and understood by someone else. Their purpose is inherent in their form.

When you’re trying to sharpen a rock to put it on a spear, you’re not trying to persuade the rock to change its form. When you’re writing a computer program, you’re not trying to convince the computer to work (as much as it could feel like it sometimes).

But it doesn’t only concern inanimate objects. You can have a “model” of someone here — a hunter tracking its prey — but the crucial thing is that you don’t assume they have a model of you. Any “communication” that can happen here is one-sided. If you see a bear running at you and you flee its territory, it could be said that the bear “intimidated” you. But the bear didn’t think about “intimidation”. Its actions weren’t taken with the aim of signalling its willingness to kill you in order to make you retreat. It meant to directly remove you from its territory, by killing you, and your willing retreat was, to it, a happy coincidence.[4]

This doesn’t mean, per se, that Level 0 agents can’t have a broad range of motivations, that they can only pursue their own self-interest. Pure self-interest is, however, the most common case. After all, operating at Level 0 on a prolonged basis requires either a refusal or an inability to connect with other agents/people. And if you expect to only ever interact with someone else by unilaterally forcing changes upon them, or receiving the same in return, what sort of relationship could you have except an irreconcilably hostile one?

(And what if all tools at your disposal are Level 0 tools? If there’s no way for you to have a conversation with anyone, if you don’t even have the concept of a “conversation”, if you could only force changes upon the world? Spoiler alert: That’s how Level 4 looks from the inside.)

Level 0 is what truth ultimately is: the bare physical world. Higher levels can easily collapse back into it, in the form of all-out fights and wars.


Level 1

Well, I won’t get into the development of language and cooperation here. Level 1 is the level at which agents exchange information about the physical world in order to develop a correct shared understanding of it.

Statements exchanged at this level are scrutinized solely for their truth value; they’re literal and overt. Inasmuch as it concerns communication with other agents, it’s a level of pure cooperation. That cooperation can sometimes take the form of “demonstrate your (non-exaggerated) military might to the other tribe so they allow you to take all the resources without a fight”, but it’s still aimed at achieving an outcome that is mutually preferable to the alternative. (There can be Level-0 hostility, and hostile refusals to communicate. If you hate the other tribe enough, you don’t give them a chance to concede, you just massacre them all. But inasmuch as communication does happen, it’s always prosocial.)

At Level 1, agents and societies are concerned with the physical world, and they’re focused on building as accurate a model of it as possible. They have no attachment to that model, and would change it to better fit the world as needed.

A conflict between two agents over a Level 1 issue would be a conflict between their models of the physical world, and it would be resolved by testing which one is correct, or by clearing up a miscommunication.


Level 2

But sometimes it’s impossible to resolve a Level 1 conflict cleanly, because the two agents lack the ability to test which of them is right, and their priors (or motivated reasoning) cause them to prioritize different models. Or maybe they have different values altogether. In that case, one of them may lie: knowingly emit a statement that doesn’t correctly describe the physical world in order to warp their interlocutor’s model of the world and compel them to act in line with the liar’s interests.

That isn’t necessarily malicious, of course: they might lie for the other’s “own good”, perhaps because they think their interlocutor is biased. They might even be right! (To borrow Zvi’s example, claiming that there’s a lion on the other side of the river when there’s actually a tiger, because the tribe doesn’t fear tigers enough.)

This is the “classical” explanation of Level 2: the level where people discover deception, and start warping others’ models for their own ends. And that’s absolutely part of it. But only part.

Neither of the interlocutors actually has to lie. Agreeing to disagree would be enough.

In the aftermath, there’ll be two conflicting models of the world (let’s call them “worldviews”) present in the society. Say, an expansionist vs. an isolationist agenda. Soon, they will be joined by a dozen more, born of a dozen more unresolved disagreements or deceptions. Some of these disagreements will be resolved, but others will stick around for the long term.

People would choose between these models based on the models’ quality, their individual biases, and their preferences, forming sub-groups. People in each sub-group would try to get more people on their side and to prevent people from leaving, because it’ll benefit them if their worldview dominates (either because they genuinely believe it’s truer than the other models, or because it’s beneficial to them specifically).

They’ll get entrenched, they’ll cultivate a loyalty to their cause or worldview, they’ll foster certainty in what they believe. And eventually, some (most) will develop an attachment to their model of the physical world. They will start valuing it inherently, not only inasmuch as it is correct. They will start denying the evidence against it, which will get easier the more accurate (and/or difficult to challenge) their model is.

They will start to confuse their model of reality for reality itself.

In a late-stage Level 2 society, the thing people would consider most important is which worldviews other people subscribe to and, more generally, what beliefs and motives other people have. Statements would be scrutinized not for their truth value, but for the motivations and beliefs behind them: why the statement’s author chose to make it, what worldview they’re trying to spread, or what they could be convinced of. At that point, statements’ actual content would be considered less important than what they reveal about the speaker and the speaker’s other beliefs.

The creation of new models or drastic changes to old ones would be resisted.

What’s actually happening in the physical world would become less important, compared to what others could be convinced is happening.

A Level 2 conflict between two agents is a mutual attempt at manipulation. Both would be aiming to persuade their audience into buying into their model of the world, whether that audience be their interlocutor, or a literal audience watching them. Any underhanded psychological trick is fair game there.

The members of a society fully on Level 2 would (1) focus on developing as accurate a model of the rest of the society as possible, (2) treat their personal model of the physical world as essentially true, and (3) consider the actual physical world irrelevant, to be lied about or dismissed as needed.


Level 3

A Level 2 society is a society divided into sub-groups scrutinizing people’s statements for their authors’ genuine compatibility with each sub-group’s worldview. Some of these groups are more powerful than others, or appeal to different preferences, and you could get into one’s good graces by emitting the correct statements, thereby signalling the correct things about yourself. You could also hurt someone you hate by convincing people they’re holding an unpopular worldview, or don’t belong to a group they want to be part of.

And it doesn’t matter what you or they are actually like: as long as it looks like someone is the kind of person that belongs to a group, or invites hate, or has another type of relationship with some sort of cause, they actually do. It is purely a matter of appearances.

  • Level 1 society is grounded in the objective truth of the physical world.

  • Level 2 society is grounded in the objective truth of its people’s beliefs. Even if those beliefs are warped, even if you want to warp them, you still care about their actual genuine internal epistemic states.

  • Level 3 society invents its own reality out of whole cloth. It only cares about what things look like, what beliefs a person appears to have. But it doesn’t acknowledge this.

In a Level 3 society, statements are still scrutinized for the implications they create about their speaker, or about other people, like on Level 2 (because the recursion already hit its maximum on that front; more on that in the next section). But it no longer matters if your interpretation of these implications is correct, only that it’s good enough that others could plausibly claim to believe it, or at least believe that you believe it.

Doesn’t matter if you genuinely believe in the Cause. Only that your social profile fits with the profiles of those who are part of the movement centered around the Cause.

This creates some perverse incentives, which Zvi calls the “war against knowledge”:

At level 3, the following two things are blameworthy, creating two ways in which knowledge is a liability. [...]

One blameworthy thing is not invoking the right symbols.

This is the “composed of things that can be used for something else” aspect. Caring about what is true creates an alternative incentive that prevents one from invoking the proper symbols, and casts doubt on whether those symbols mean what they seem to mean.

Invoking symbols that are technically false rather than those that are technically true is, if anything, a stronger move in the game. This is why. It signals more strongly one’s costly sending of the appropriate signals, without room for misinterpretation as a lower level action. By repeating the lie, we show ourselves loyal. By getting others to repeat it, we drive them towards being and identifying as loyal, and get them to show others they are loyal, and demonstrate our power over both people and symbols.

The other blameworthy thing is knowing that what you say is false. What is blameworthy is knowledge itself.

(Or, perhaps more precisely, other people knowing that you know what you say is false, and thus anyone else’s knowledge of our having knowledge, as opposed to knowledge itself, but that’s also true of any other blame system.)

What did the President know, and when did he know it?

Thus, the shift in communication from explicit to implicit. The focus on having only deniable, tacit knowledge.

The follower who needs explicit instruction is a poor follower indeed. Specifying everything to be done is impractical, and makes it clear you have not only knowledge but responsibility. Much better to work towards the goals of the group, to pile on symbols that help win the game.

Thus does this structure drive everyone away from knowledge. The easiest way, by far, to pretend not to know things is to not know them.


Levels of Recursion

As we climb up the levels, there are three important variables to keep track of: which aspect of the world’s truth is considered important, what kind of framework is being formed/manipulated in order to model that important aspect, and which aspect of the world’s truth is considered irrelevant.

Here’s that in table format:

| L | Important | Framework | Irrelevant | Comment |
|---|-----------|-----------|------------|---------|
| 0 | physical world | | | Models don’t exist. Everything is real. |
| 1 | physical world | people’s worldviews | | How does the world work? |
| 2 | people’s worldviews | people’s social images | physical world | What do people believe? Why? How could it be used? |
| 3 | people’s social images | people’s social images | people’s worldviews | Only the images people project matter. |
| 4 | people’s social images | people’s social images | people’s social images | ??? |

All three distinct values represent different levels of recursion:

  1. The physical world is.

  2. People’s worldviews are models of the physical world.

  3. People’s social images are collective models of people and their worldviews: models of models of the world.

  4. The next one would be “a model of a person’s social image”, i. e. “the perception of a person’s appearance”, but that’s still just “a person’s appearance”.

At Level 3, the second variable (the framework being manipulated) tries to go past recursion level 3, and therefore wraps around on itself. This causes the peculiar situation of Level 3 “inventing its own reality”: it creates the very thing it cares about.
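To make the wrap-around concrete, here’s a minimal sketch in Python (my own formalization, not anything from Zvi’s posts; the names `DEPTH_NAMES`, `clamp`, and `level_row` are purely illustrative). It reproduces the table above by saturating every recursion depth at 3:

```python
# A toy model of the wrap-around mechanic. Each Simulacrum Level n tracks
# three recursion depths: "important" (n), "framework" (n + 1), and
# "irrelevant" (n - 1). Any depth past 3 saturates there, because "a model
# of a person's social image" is still just "a person's social image".

DEPTH_NAMES = {
    1: "physical world",
    2: "people's worldviews",
    3: "people's social images",
}

def clamp(depth: int) -> int:
    """Recursion depths are indistinguishable past 3, so they saturate."""
    return min(depth, 3)

def level_row(n: int) -> dict:
    """The three variables tracked at Simulacrum Level n (cf. the table)."""
    return {
        "important": DEPTH_NAMES[clamp(n)],
        "framework": DEPTH_NAMES[clamp(n + 1)],
        # Nothing is considered irrelevant yet at Level 1.
        "irrelevant": DEPTH_NAMES[clamp(n - 1)] if n >= 2 else None,
    }

for n in range(1, 5):
    print(n, level_row(n))
```

The framework first wraps at n = 3 (where clamp(4) == 3): Level 3 begins manipulating the very thing it considers important. By n = 4, all three variables have collapsed into “people’s social images”.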

But because the truth of people’s images isn’t yet considered irrelevant, there’s still some connection to reality, where sufficient evidence against a claim could overturn it. There’s still an understanding that models stand for something, that they’re meant to represent the truth of some immutable reality.

Level 3 agents genuinely care about how the world perceives them and others, and about how accurately they’ve signalled their allegiances. If someone provides evidence that a person had, at some point, emitted a signal that goes against the public image that person is now trying to project, Level 3 agents would care about that. (Hence, e. g., cancel culture.)

Level 4 moves past even this. (I’ve seen Level 4 described as “everything past Level 3”, and I guess this is me formalizing that.)


Level 4

Picture a late-stage Level 3 society. The population of Level 2 agents dies out or adopts Level 3 thought patterns. At that point, no-one cares about what’s actually happening anywhere or with anyone, or even what people could be convinced is happening, only what people think they could plausibly claim to believe is happening.

The switch to Level 4 happens once this state of affairs becomes common knowledge. Once everyone knows that there’s no audience they have to legitimately convince with their performances, only fellow L4 agents tracking the effects of every utterance on the status game being played. And once they further know that others know this too...

This has a tricky implication: symbols become asymbolic.

A symbol is something which represents something else. But Level 4 statements are not correlated with reality in any way whatsoever: they’re entirely void of meaning. They communicate no information, not on any level, neither literally nor through the implications they create. They don’t stand in for anything. At Level 4, symbols stop symbolizing things. They become self-sufficient.

And everyone knows this, so no-one is trying to interpret them. And everyone knows that too, so no-one makes statements with the expectation that they’ll be scrutinized for information. The only reason anyone makes a statement, then, is because they know it will have a specific effect on the local social context: open up certain avenues of attack or defense. That is, their purpose is inherent in their form.

I think the similarities with Level 0 are clear. Level 4 agents can’t talk. Their utterances warp reality. They can’t say things; they can only force others to occupy different social contexts. And they (think that they) get the same in return: no-one engages with them, the others only ever try to forcibly transform the sociopolitical landscapes around them.

In a very real sense, Level 4 statements are moves, manoeuvres of attack or defense, not unlike physical blows and evasions.

For that reason, they’re also necessarily short-term. From the perspective of the lower levels, Level 4 agents still look like they’re weaving different narratives about the physical reality. But these narratives’ sole purpose is to win whatever immediate conflict their creator is engaged in right now. They need not be robust enough to survive past that conflict, or be consistent with each other, or even look coherent to anyone outside that conflict.

Unlike on Level 3, providing evidence that someone had emitted an incorrect signal at some point won’t move people, unless you manage to correctly stage the reveal of this information as an attack.

Level 4 agents don’t care about the world, or what others believe, or how the world perceives them. They only care about the asymbolic routes through the sociopolitical landscape that they can pseudo-physically walk.

Level 4 is as much a Hobbesian hell as Level 0. Level 4 agents don’t have the language for cooperative interactions with others. There could be, in theory, a broad range of motivations here too, but they’ll all be warped through this lens.


Level 5

Trying to extrapolate in a way that sidesteps the recursion model (which is already looping across the board), Level 5 would be a framework that is to Level 4 what Level 1 is to Level 0. It all began with an escape from the asymbolic hell, after all.

But how can you re-discover communication after you’ve turned the very concept of a signal into a knife to stab people with?

There is no Level 5.

  1. ^

    Perhaps a more fine-grained way to put it is that, in some contexts, as recursion levels rise, the differences between them shrink, and due to the human mind’s limitations, L3 vs L4+ is where our models become coarse enough that the differences are imperceptible.

  2. ^

    Which neatly fits with my view that agents are approximate causal mirrors.

  3. ^

    E. g., some policy proposals are considered unpopular not because the majority of people are actually against them, but because the majority thinks that the majority is against them, and if you worry about that, you’ll have technically gone to recursion level 4...

    But as per footnote 1, empirically this doesn’t seem to happen much, likely due to the human mind’s limitations.

  4. ^

    Well, okay, that may not actually be an accurate description with regard to literal animals. They’re not all on Level 0; they can communicate with each other (although I think in this specific example, the line is blurred). Alternatively, you can imagine a non-sapient robot in the bear’s place, following some algorithms for patrolling its territory that tell it to kill intruders.