Your Clone Wants to Kill You Because You Assumed Too Much
My friend @Croissanthology is puzzled about why it’s such a common trope for fictional clones to turn on their creators. There’s the Doylist answer: it’s a cheap way to create a mind divided against itself, which in my view explains a lot of the drama, e.g. the pure betrayal of it. But there’s also the Watsonian answer I learnt from that great teacher, Mother of Learning.
In the novel Mother of Learning, perhaps the most useful spell is simulacrum. It can be used to make an ectoplasmic shell that looks like you, has a copy of your mind, and shares your mana pool. Naturally, the main characters abuse the heck out of this. So useful is it that they wonder why on earth every great mage doesn’t use it.
A grizzled old battle mage supplies part of the answer to the question posed by the MC, and in turn by our dear Croissant. He says:
“[...] if you don’t like doing something, your simulacrum won’t like doing it either… so it’s a bad idea to foist things you hate upon your simulacrums. This also means that if you can’t bring yourself to sacrifice your life for another, chances are your simulacrum won’t want to sacrifice itself for your sake either.”
Why would someone assume their clone is willing to do things they would not? Because they’re thinking in far-mode: since we can’t clone ourselves yet, people apply their idealized far-mode models of themselves to their clones. E.g. they might assume their clone will be willing to sacrifice itself for the greater good, because that’s what their idealized self would do. And we’re not just talking about people, we’re talking about people in stories, whom we reason about in far-mode by default. So that’s a double whammy of crooked reasoning.
Mother of Learning kind of lampshades this. Later in the novel, it describes how simulacra of the MC wind up realizing they feel differently about being a temporary clone that can be dismissed at will once they’re actually in that position. Suddenly, they’re thinking in near-mode, occupying the space of a soon-to-die entity. Pranking of the original ensues, along with some mild existential dread.
The fact that the clones notice they’re in a substantially different situation from the original also highlights another part of the answer to Croissanthology’s question. Namely, it is hard to predict exactly how you’ll feel and act in a novel situation. The more novel the situation, the harder it gets to predict how you’ll act in it.
For instance, I thought I’d hate managing people. Turns out, I actually kind of enjoy it. There were new kinds of problems to solve, such as managing org culture, which are intellectually stimulating. Sure, there were issues like learning to delegate and so on, but that wasn’t the sort of problem I expected to have.
So if you haven’t inhabited the headspace your clone will be in, there are good odds they won’t act like you think they will.
More generally, the problem is people incorrectly modelling their clones, which makes co-operation harder.
Which brings us to another trope about clones: they’re not perfect copies. Usually, stories will assume clones are flawed in some way. Perhaps they’re insane, or their bodies rapidly break down, or they lack some key power of the original, etc. The upshot is, they’re even harder to predict than a perfect clone would be, and so are harder to co-ordinate with.
Finally, you could just be a selfish dick who doesn’t want to co-operate with yourself. Ever think of that, eh, Mr. Croissant?
There is also the immediate, irresistible desire to have sex with yourself and the consequent shame afterwards.
Speak for yourself.
I endorse this kind of self-love. (See post.)
The problem is that the original has all the legal rights and the clone has none (no money, can be killed, tortured, never sees loved ones), which creates an incentive for the clone to take the original’s place, and both the original and the clone know this. If the original thinks “maybe the clone wants to kill me”, he knows that the same thought is also in the mind of the clone, and so on.
This creates a fast-moving spiral of suspicion, in which the only stable end point is the desire to kill the other copy first.
The only way to prevent this is to announce publicly the creation of the copy and share rights with it.
There’s a fantastic bit in Brothers in Arms, from Lois McMaster Bujold’s Vorkosigan Saga. An enemy has cloned Miles, and they’ve trained the clone from birth to assassinate Miles’ father.
Miles is struggling to figure out how to deal with a murderous clone. And he realizes that under Betan law, the cloned assassin is his brother:
Now, Miles is essentially the patron saint of forward momentum, of spinning bullshit into reality, of the entirely sincere noble gesture. And with a lever like his clone’s legal brotherhood? Yes, Miles can find a way to cooperate with his own clone.
Another thing is a narrow self-concept.
In the original thread, people often write about things they have that their clone would want, like family. They fail to think about things they don’t have because they have families, like cocaine orgies, or volunteering to fight in a just war, or a monastic life in search of enlightenment, such that they could flip a coin and go pursue the alternative life in 50% of cases. I suspect it’s because thinking about desirable things you won’t have on the best available course of your life is very sour-grapes-flavored.
I’ve replied to this post here: https://open.substack.com/pub/croissanthology/p/my-therapist-tells-me-i-have-a-great?r=5ivlcb&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true
(It’s messy short-form, far below the standards of LessWrong, hence the Substack link.)
As I told Croissanthology privately after he wrote this reply, I never said I wouldn’t co-operate with my clone/original.
For posterity, I would just like to make it clear that if I were ever cloned, I would treat my clone as an equal, and I wouldn’t make him do things I wouldn’t do. In fact, I wouldn’t try to make him do anything at all; we’d make decisions jointly.
(But of course my clone would already know that, because he’s me.)
(I’ve spent an unreasonable amount of time thinking about how to devise a fair decision procedure between me and my clone to allocate tasks and resources in a perfectly egalitarian way.)
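A minimal toy sketch of one such procedure (the alternating-pick rule and the task names here are illustrative assumptions, not a settled scheme): flip a coin for who picks first, then alternate picks over the contested tasks.

```python
import random

def allocate(items, rng=random):
    """Toy egalitarian split: coin flip for first pick, then alternating picks.

    Since the clone is a copy, both parties have identical preferences, so a
    symmetric tie-break like this is fair ex ante by construction.
    """
    order = ["original", "clone"]
    rng.shuffle(order)                        # coin flip decides who picks first
    shares = {who: [] for who in order}
    for i, item in enumerate(items):          # items assumed pre-sorted by (shared) preference
        shares[order[i % 2]].append(item)     # alternate picks down the list
    return shares

# Hypothetical task list, purely for illustration
print(allocate(["write the post", "do the taxes", "grocery run", "nap"]))
```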