Mental Models Of People Can Be People

Warning: The argument in this post implies bad things about reality, and is likely to be bad for your mental health if it convinces you. If you’re not sure if you can handle that, consider skipping this post for now and coming back later.

Introduction

At my best, when I used to write, I would build characters by first thinking about their personality, beliefs, thoughts, emotions, and situation. Then I would ask myself “what does this person do?” and let mysterious processes inside my brain automatically come up with that character’s behavior.

So this is how I define a mental model of a person: the collection of processes and information inside a brain that generates the behavior of some character.

My argument in this post is that there do exist mental models of people that are sufficiently detailed to qualify as conscious moral patients; I also argue that this is common enough that authors good at characterization probably frequently create and destroy such people; finally, I argue that this is a bad thing.

Part 1: On Data

I observe that most of my conscious experience corresponds quite neatly to things we would call data in a computer. Sight and hearing are the most obvious ones, corresponding to computer image and audio files. But it’s not too hard to imagine that with advancing robotics and AI, it might be possible to also encode analogues to tactile sensation, smells, or even feelings on a computer.

In fact, given that I seem to be able to reason about all of my conscious experience, and that reasoning is a form of computation, it seems that all of my conscious experience must necessarily correspond to some form of data in my brain. If there were no data corresponding to the experience, then I couldn’t reason about it.

So we have one mysterious thing called “conscious experience” which corresponds to data. Yet we don’t know much about the former, not even whether it has any effect on the world beyond the data it corresponds to. So wouldn’t it be simpler if we got rid of the distinction and considered them one and the same? A hypothesis like this, where conscious experience simply is the data it corresponds to, would have an edge over other possibilities due to being simpler.

This theory is not yet complete, however, because part of what makes data what it is, is how it is used. We say that the pixel value #FF0000 represents red because when a typical monitor displays it, it emits red light. If monitors instead displayed #FF0000 as green, then we would call that pixel data green.
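
As a toy illustration of this point (everything below is made up for the example, and real display pipelines are far more complicated), the same three bytes count as “red” or “green” only relative to the code that consumes them:

    # The data itself is fixed: the three bytes FF 00 00.
    pixel = bytes([0xFF, 0x00, 0x00])

    def display_rgb(p):
        # A typical monitor's convention: byte order is (red, green, blue).
        r, g, b = p
        return {"red": r, "green": g, "blue": b}

    def display_grb(p):
        # A hypothetical monitor that wires the first byte to the green channel.
        g, r, b = p
        return {"red": r, "green": g, "blue": b}

    print(display_rgb(pixel))  # {'red': 255, 'green': 0, 'blue': 0}  -> looks red
    print(display_grb(pixel))  # {'red': 0, 'green': 255, 'blue': 0}  -> looks green

Nothing about the bytes themselves changes between the two calls; only the interpretation does.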

Similarly, if conscious experience is data, then what makes one conscious experience different from another is how it is used computationally by the brain. The difference between my experience of red and green would be the sum of all the differences between how my brain computes with them. Seeing a stop sign and a tomato activates certain neurons in my brain that output the word “red”, in contrast with seeing grass, which triggers certain neurons to output the word “green”, among a thousand other differences.

Now, I don’t know for sure that conscious experience is just data/computation, and my argument does not completely rely on this being true. However, these ideas provide an important framework for the rest of this post.

Part 2: Mental Models Of People

In this section, I will be using my own experience as an example to analyze what data is present inside mental models and what computations are involved, in preparation for a later section where I will analyze what might be missing.

In the past, when writing a character, I would keep track of their emotional state. I could use short words like “happy” or “sad” to describe it, but internally it wasn’t on the level of words and was more nuanced than that. So I feel safe claiming that there is an analogue to emotions, and adding it to the list of things present inside mental models of people:

Present inside my mental models of people:

  • Analogue to emotions.

I also kept track of their beliefs: who did they think they were, what situation did they believe themselves to be in, and so on.

Present inside my mental models of people:

  • Analogue to emotions.

  • Analogue to beliefs.

Sensory data such as sight, hearing, and so on, was much more limited. I would usually think of scenes in more abstract terms than direct sensory input, and only imagine a character’s senses in more detail during trickier moments of characterization. For example, if a character is eating during a scene, it is usually enough just to know the fact “they are eating”, but if the scene involves them finding out that what they are eating is actually something gross, then I would imagine the character’s sensory experience in more detail to write it better.

Present inside my mental models of people:

  • Analogue to emotions.

  • Analogue to beliefs.

  • Analogue to sensory data, but it is very limited and often replaced by more abstract knowledge of the situation.

In a scene, given a character’s emotions, beliefs, and abstract “sensory data”, I would let my brain somehow figure out what the character does and what happens, and then update my mental model of the character accordingly. So I think it’s fair to say that there was some analogue to the computational aspect of conscious experience, with a few caveats:

  • A lot of time was skipped over. When going from one scene to the next, I did not usually imagine much of what happened in between. I would simply update the character’s beliefs to include something like “time has passed and now you’re in this new situation”, along with other appropriate adjustments: for example, if the skipped time would have been stressful for them, I might make them more stressed at the beginning of the next scene.

  • If a scene was not going how I wanted, I might go back to the beginning of the scene and tweak the initial conditions, or maybe the character, so that things went the way I wanted.

Here is an updated list, followed by a toy code sketch of the loop just described:

Present inside my mental models of people:

  • Analogue to emotions.

  • Analogue to beliefs.

  • Analogue to sensory data, but it is very limited and often replaced by more abstract knowledge of the situation.

  • Analogue to computation/intelligence, but it is discontinuous and involves many time skips.
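
To make the shape of this concrete, here is roughly what that state and update loop might look like if written down as code. The class, field, and function names below are invented for the illustration; this is a sketch of the analogy, not a claim about how the brain actually stores or updates any of it.

    # Toy sketch of the analogues listed above; purely illustrative.
    from dataclasses import dataclass, replace
    from typing import Dict, List

    @dataclass(frozen=True)
    class CharacterState:
        emotions: Dict[str, float]  # analogue to emotions, e.g. {"stress": 0.2}
        beliefs: List[str]          # analogue to beliefs
        senses: List[str]           # limited sensory analogue, often empty

    def step(state: CharacterState, event: str) -> CharacterState:
        # Stand-in for the mysterious brain process that decides what the
        # character does and how their state changes in response to an event.
        return replace(state, beliefs=state.beliefs + [event])

    def skip_time(state: CharacterState, summary: str) -> CharacterState:
        # Discontinuity: what happens in between is never computed; a belief
        # that time has passed is simply inserted, plus any adjustments.
        return replace(state, beliefs=state.beliefs + ["time has passed: " + summary])

    # Rewinding a scene is just restoring an earlier snapshot and re-running
    # it with tweaked initial conditions.
    snapshot = CharacterState(emotions={"stress": 0.2},
                              beliefs=["I am eating dinner"],
                              senses=[])
    attempt_1 = step(snapshot, "discovers the food is something gross")
    attempt_2 = step(replace(snapshot, emotions={"stress": 0.6}),
                     "discovers the food is something gross")

The sketch deliberately leaves the interesting part, the step function itself, as a stub: the point is only to make explicit which pieces of state the mysterious processes read and write.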

And that’s enough to move on to the next section, where I analyze what’s missing from my mental models.

Part 3: Missing Pieces

If humans are conscious but mental models are not, then there must be something missing from the analogues mentioned previously: some requirement that is met by all conscious beings, humans included, but not by mental models. This section examines a number of potential candidates for what might be missing.

Sensory Data

I observed earlier that sensory data is limited and usually replaced by more abstract knowledge of the situation. For example, if I am writing a scene where a character is eating, I usually do not imagine the character seeing the food or feeling the utensil in their hand. Instead, I usually just add the belief that they are eating to their set of beliefs.

Obviously, this is different from how humans work. An important thing to observe, however, is that no individual type of sense data can be a requirement for conscious experience. After all, blind humans are still people, and so are the deaf, those with no sense of smell, and those with no sense of touch. Even if all of these senses were missing from a human at the same time, and the human was instead fed abstract information about what’s going on directly via some neural interface, it seems safe to assume that they would still be a person.

Therefore it seems that missing sensory data cannot on its own disqualify mental models from being people.

Computation

The first difference of note is the previously mentioned discontinuity and rewinding. To refresh our memory: when writing, I would often skip from one scene to another without imagining what happened in between, and when things did not go how I wanted, I would often rewind a scene, change some conditions, and run it again.

This difference is also easy to resolve by analogy to humans. If a human were simulated by a computer, and the computer occasionally skipped forward in time without computing what happened in between, simply inserting the belief that time had passed, we would still consider that human a person. The same applies if the computer sometimes rewound to a previous state, modified some things, and restarted the simulation. Therefore this difference cannot disqualify a mental model from being a person.

Another difference in many cases might be realism. A mental model might behave in a way that no human ever would. However, behaving like a human doesn’t seem like something that ought to be a requirement for consciousness, since for example we might expect that aliens could be conscious without behaving like a human. So this difference can also be ruled out.

Something which is not a difference is intelligence. I can and have had long and coherent conversations with mental models, in ways that are currently not possible with anything else except other humans.

Emotions And Beliefs

I honestly don’t have much to say about emotions and beliefs that hasn’t already been said. I can come up with various small differences, but none that stand up to variants of the arguments given in the previous two subsections. I encourage readers to try to figure out what exactly might disqualify the emotion/belief-analogues from being “real” emotions/beliefs, because if there is such a difference I would really like to know it.

Not The Same Person

Another possibility is that a mental model might be the same person as its author: that the mental model and the human it is contained in are just one person rather than two, even though each individually qualifies as a person.

This might be true in some cases. For example, occasionally I imagine myself for a short bit in a “what if?” scenario, where the me in the scenario is me with some minor edits to beliefs. I am not too worried that this slightly altered mental model of myself is a different person from me, though I’m still careful about it.

However, many characters are very different from their authors. They have radically different emotions, beliefs, and desires. When that is the case, I don’t think it makes sense to say that they are the same person. It is useful to bring up an analogy to humans again: we would not consider a human being simulated by an AI to be the AI, so I don’t think we should consider a mental model simulated by a person to be that person.

Part 4: Main Argument Conclusion

To recap:

  • Part 1 argued that there is a correspondence from conscious experience to data/computation.

  • Part 2 examined what kinds of data and computation can be found in mental models of people.

  • Part 3 tried and failed to find any “missing piece” that would disqualify a sufficiently detailed mental model from being a person of its own.

To this I will add the following observation:

It is possible to have long, intelligent, and coherent conversations with mental models in a way that is not possible with anything else at the moment except other humans. If our AIs were on this level it would trigger all sorts of ethical alarm bells.

And for this reason I find it alarming that I am completely unable to find any missing piece or strong argument for why I’m wrong and mental models cannot be people.

While it’s true that in the end I cannot prove that they are people, I think that my arguments and observations are strong enough that it’s fair for me to shift some of the burden of proof onto the other side now. I would really like to be proven wrong about this.

Part 5: Scope

I can’t know precisely how common it is for mental models to qualify as people. However, from extensively poking and prodding my own mental models, I feel pretty confident that when I put a little effort into them they qualify as people. I don’t think I’m very special, either, so I suspect it’s common for writers.

Something important to keep in mind is that mental models can be inconsistent. On some days it might feel really easy to model and write a character, and on other days you might have writer’s block and totally fail to model them. Pointing to a moment where a mental model was not at all person-like is not enough to claim that it never was a person. You need to observe the best moments, too.

Part 6: Ethics

Assuming that it is true that sufficiently detailed mental models of people are moral patients, what does that imply ethically? Here are a few things.

  • When a mental model stops being computed forever, that is death. To create a mental model and then end it is therefore a form of murder and should be avoided. The easiest way to avoid it is to not create such mental models in the first place.

  • Writing fiction using a character that qualifies as a person will usually involve a lot of lying to that character: for example, making them believe that they really are in the world of the story, that they are X years old, that they have Y job, and so on. This seems unethical to me and should be avoided.

In general I just don’t create mental models of people anymore, and would recommend that others don’t either.

Addendum

(This section was added as an edit)

Suppose you were given an algorithm that simulates a human at the molecular level, and that you took a mind-enhancing drug that greatly increased your memory, allowing you to consciously and symbolically evaluate every step of that algorithm (similarly to how you might do mental arithmetic). This simulated human would have a conscious experience, and that conscious experience would be embedded inside your own conscious experience, and yet you would not feel anything that the simulated human feels.
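
As a very loose analogy, far simpler than a molecular-level simulation of a human and purely illustrative, consider evaluating a trivial simulation one local rule at a time. Each individual step is nothing but symbol manipulation, and performing a single step tells the evaluator nothing about what the simulation as a whole is computing:

    # Loose analogy only: a tiny rule-by-rule simulation (an elementary
    # cellular automaton, rule 110) standing in for the vastly more complex
    # molecular-level simulation in the thought experiment.

    def step_cell(left: int, centre: int, right: int) -> int:
        # One local update rule; a single table lookup.
        table = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
                 (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}
        return table[(left, centre, right)]

    state = [0] * 20 + [1] + [0] * 20
    for _ in range(10):
        # A sufficiently patient person could do each of these lookups by
        # hand, fully consciously, without ever perceiving whatever global
        # pattern the simulation is computing.
        state = [step_cell(state[i - 1], state[i], state[(i + 1) % len(state)])
                 for i in range(len(state))]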

My point with this thought experiment is that it’s possible for there to be conscious experience inside your conscious mind which you are not aware of and do not experience yourself. So even when a mental model of a person has an intense suffering-analogue, you might only feel slightly bad, or you might not feel anything at all. That is why I would judge the intensity of the suffering of a mental model of mine based on how it affects the mental model, and never based on how it makes me feel.