Grappling with ideas of EA, Climate Change, Transhumanism, Identity Continuity, and Othering in my ‘biopunk that looks like high fantasy on the surface’ story of ‘Elvans’ and ‘Orcans’. I would love your input, LessWrong!

Alright, let me start with a concise worldbuilding synopsis:

On ‘Reath’ (an anagram of Earth: a post-climate-change Earth, an Earth ‘wrapped in a funerary wreath’) there are ‘Elvans’ and ‘Orcans’. What appears at first to be high fantasy is really science fiction. The Elvans and Orcans are really just Human After All.

The Elvans are the nanotechnologically augmented descendants of the billionaire class. The Orcans were once humans, genetically engineered against their will to do labor for the Elvans. They were then liberated by a ‘Horde’ (a Warcraft reference, a clue that this isn’t a fantastical fictional universe but our own, set 400 years in the future) and forcibly dipped into vats to become true ‘Orcans’ (a Fallout reference), with photosynthetic skin and an ‘archive’ of DNA in their bodies from every extinct creature on the planet, allowing them to shapeshift gills, horns, tails, extra limbs, and so on.

The Elvans cannot birth their own children using their own wombs, because their bodies have been corrupted by the artificially intelligent nanobots, AKA the “Spirits” (a term I use to hide the fact that this is science fiction, not high fantasy). So, instead, the Elvans must harvest the wombs of the Orcans to reproduce.

First rationalist idea: The Elvan/Orcan Double Bind Problem.

If the Elvans kill all the Orcans, they will have no more wombs to harvest, and that would mean the end of their species. But:

The Orcans are burning fossil fuels like mad in their new home of “Orca” (Antarctica). This is beneficial for the Orcans because it warms the south pole, making it more habitable; it is entirely in their self-interest, and it arguably becomes a net moral good for them. But it creates the very real possibility of ‘Reath’ becoming like ‘Phyros’ (Aphrodite: Venus), realizing the grim end state of a runaway hothouse, an existential risk embedded in the very reward the Orcans earn by growing their economy and their society. So if the Elvans allow the Orcans to continue to grow, that also threatens the survival of their species; in fact, it threatens the survival of both species. But a runaway hothouse is only a possibility, not a certainty. In many ways the Orcans are justified in continuing to take this risk; the reward may be worth it.

But for the Elvans it is a double bind: if they kill all the Orcans, they lose; if they let the Orcans keep growing, they might also lose.
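To make the structure of the bind concrete, here is a minimal expected-value sketch in Python. The probability `p_hothouse` is an invented placeholder (nothing in the story fixes its value); the point is the shape of the decision, not the numbers:

```python
# A minimal decision sketch of the Elvan double bind.
# All probabilities are invented placeholders; the structure of the
# dilemma, not the numbers, is the point.

p_hothouse = 0.3  # assumed P(runaway hothouse | Orcans keep industrializing)

# Option 1: exterminate the Orcans.
# No wombs -> no Elvan reproduction -> certain extinction.
p_survival_genocide = 0.0

# Option 2: coexist and let the Orcans keep growing.
# The Elvans survive unless the hothouse scenario occurs.
p_survival_coexist = 1.0 - p_hothouse

print(f"P(Elvan survival | genocide): {p_survival_genocide:.0%}")
print(f"P(Elvan survival | coexist):  {p_survival_coexist:.0%}")
```

Under pure expected-value reasoning, coexistence strictly dominates extermination whenever the hothouse is merely possible rather than certain, so the real tension lies in the Elvans’ estimate of `p_hothouse`, and in the intermediate options (like the consensual womb trade in question 2 below) that the binary framing leaves out.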

Questions for you, LessWrong:

1. Given the predatory/parasitical relationship of the Elvans to the Orcans, is it justified for the Orcans to wage a genocidal war? (I’m guessing that you guys, as consequentialists rather than deontologists, are probably going to say NO. I go deeper into this problem in a Reddit post on r/worldbuilding: <Link>)

2. Would the Elvan-Orcan relationship be made right if the system were consensual? That is to say, if the Elvans offered beneficial trade, exchanging material wealth with Orcan women who volunteer their wombs: basically the “liberal, market-based” way of solving this problem.

3. How would you solve this problem?

Second rationalist idea: Deontology vs. Consequentialism, and Effective Altruism

Our protagonist, Vilithe, is a Deontologist. The first antagonist she faces in volume one, Talisa, her mother-in-law-to-be (it’s complicated), is a Consequentialist. For Vilithe to accomplish her goals, she must learn from Talisa, after killing her (it’s complicated). I’d love some critique and analysis of this debate between the characters. Here’s an excerpt of their conversation, and a link to that chapter:

EXCERPT:

Talisa: Let me start with the foundational virtues of Effective Altruism. It’s in the name, really. First, effectiveness. You want your good to have a real effect on the real world, lasting impact, right? Second, impartiality. This is not very different from your Rawlsian Justice. Third, reason. Rationality. Logos. And that is your domain, Vilithe. That is what you love… How can good be understood without reason? And of course, that’s where you have your precious old-school enlightenment boi Immanuel Kant.

I have caught you in your own logic, Vilithe.

Vilithe: No, there’s a difference between fighting for a fairer world, and deciding the fate of many others. The impartiality you speak of is just another thanatos. Impartial. Flip of a coin. The more you try and control, the closer you come to death. Life itself is uncontrollable, life is chaos.

Talisa: I see your logic is changing, Vilithe. I like where this is going.

Vilithe: Though I use reason, I do so to find empathy. Reason is the only way to have empathy. Only through reason can we be outside of ourselves and inside another… Effective altruism reduces everything to numbers. But only through empathy can I understand the chaotic beauty of love.

Talisa: Isn’t everything numbers? Is mathematics not the foundation of physics, and music, and change itself?… You’re embracing chaos now and you are changing yourself fundamentally.

Vilithe: I have already changed so much… Take Sam Bankman-Fried, for example. Sam Bankman-Fried thought that the infinitesimal chance his enormous gamble would pay off, into power that could permanently reshape the world for the better, was worth risking his long incarceration. He had a hope. A dream. As misguided as it was. No risk, no reward, after all.

Talisa: He risked what rightfully belonged to others. Was it his call to make? They trusted him. Can you be trusted?

Vilithe: I can only trust myself… I hold myself accountable for the risks I take.

Talisa: The basis of Effective Altruism is to accumulate as much power as possible, trusting that you will do good with it. Plato’s Philosopher-King. Benevolent Dictator… This utilitarian-consequentialist framework – my framework – is totally alien to your deontological ethics, but it seems that you understand it now. Do you fully commit to this change in your worldview?

Vilithe: I will find the middle path. The best of both worlds. It is worth it for a new world.

LINK: Chapter 47: Spirit, pt. 8 - Still Alive After All [Biopunk Progression] | Royal Road

Now, I know you guys are going to say that taking a ‘middle path’ is kind of a cop-out, and that deontology only works as a limited heuristic for an agent with limited knowledge of the ripple effects of their actions, and I get it. The thing is, part of what I’m doing here is showing that Vilithe’s character is changing, that she’s struggling between her old ethics and Talisa’s new ethics. Vilithe will have to become far more consequentialist if she’s ever going to accomplish her character motivations. But since consequentialism is about risk vs. reward, this will come at a cost, one that I think people outside of LessWrong will feel less comfortable with, especially the Progression Fantasy readers on Royal Road, who prefer protagonists with strict, unfailing moral codes of conduct (and that’s kind of the idea here!)

I also know that referencing Sam Bankman-Fried is going to be super controversial here. I know that guy was a darling of LessWrong (and believe me, I had a lot of faith in him too. I worked in Web3, though by some divine serendipity I never opened an FTX account before Hong Kong, where I lived, began requiring ‘professional investor’ accreditation to open one). But I’m posting on Royal Road, which is a hub for Progression Fantasy enthusiasts, so I needed a name that would be more easily recognized than, say, Eliezer Yudkowsky. Also, our boy Eliezer did not screw up, pretty much ever, whereas SBF screwed up epically. The point Talisa was trying to make there was about risk tolerance. And I think we can safely say that SBF did not manage his risk very well.
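To spell out that risk-tolerance point with a toy example: a bet can have strongly positive expected value and still ruin almost everyone who takes it at full stakes, over and over. Here is a minimal Python sketch; the bet parameters are invented for illustration and have nothing to do with FTX’s actual finances:

```python
# Toy illustration of risk tolerance: staking the whole bankroll on a
# positive-expected-value gamble, repeatedly. All numbers are invented.
import random

def run_gambler(rounds: int, win_prob: float = 0.4, payout: float = 3.0) -> float:
    """Stake the entire bankroll every round.
    Per-round EV is win_prob * payout = 1.2x, i.e. +20% in expectation."""
    bankroll = 1.0
    for _ in range(rounds):
        if random.random() < win_prob:
            bankroll *= payout
        else:
            return 0.0  # a single loss wipes out everything
    return bankroll

random.seed(0)
results = [run_gambler(rounds=5) for _ in range(100_000)]

mean = sum(results) / len(results)
ruin_rate = sum(r == 0.0 for r in results) / len(results)
print(f"Mean final bankroll: {mean:.2f}")       # ~2.5x: the EV looks great...
print(f"Fraction ruined:     {ruin_rate:.1%}")  # ...but ~99% end with nothing
```

A more risk-managed bettor (for example, a Kelly bettor, who maximizes expected log wealth) would stake only a fraction of the bankroll each round and avoid near-certain ruin. That gap between maximizing raw expected value and managing downside risk is roughly the sense in which SBF ‘did not manage his risk very well’.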

Questions for you, LessWrong:

1. Are there any ways I can improve this dialogue? Do I accurately reflect the ideas of consequentialism vs. Kantian deontology? Do I accurately characterize Effective Altruism?

2. Should I remove the reference to SBF? Is there a better example of someone who catastrophically mishandled risk while chasing a reward intended for effectively altruistic ends?

Even though I’ve already posted all of volume one and queued up volume two, I’m totally willing to go back and rewrite much of volume one based on your feedback. I would really appreciate it. Sincerely.

Third rationalist idea: Identity Continuity and the Simulacrum

I’m going to spoil something that happens in a chapter that will be posted in the next week or so, but I would love your input on this.

Vilithe will create a complete copy of her mind, a “Simulacrum” (yes, a reference to the D&D spell). To defeat her second antagonist in volume two, Amefrid, she has to insert that Simulacrum, the copy of her mind, into Amefrid’s body and take it over completely.

To do this, however, she needs to lure Amefrid into a position where she can pull it off, and the only way for that to happen is to let Amefrid kill her original body.

And so we run into a Ship of Theseus problem. Are the Simulacrum of Vilithe and the original Vilithe who has died… the same person?

I wrote a post going deeper into this on Reddit, on r/rational (which focuses specifically on rational fiction; it is not r/rationalist). You can read it here: Dear r/rational, how do you feel about a protagonist needing to make a copy of their mind, allowing their original body to die, in order to defeat an antagonist? Is there a Ship of Theseus problem here? : r/rational

So, the question for you, LessWrong:

Are the Simulacrum of Vilithe and the original Vilithe who has died the same person?

(A summary of what I think, from the r/rational post: Determinists like Ibn Sina would say yes. Radical agency/free-will proponents like Sartre would say no. Compatibilists, along with Spinoza and Nietzsche, would say ‘it doesn’t actually matter, but pretty much yes.’)

I’d also love more insight on other takes on this problem. I know that the “teleporter destruction” problem is a big one here, and that the popular answer is Derek Parfit’s “patternism”. From a cursory glance, I’m pretty sure that amounts to a full-throated “YES!” (or, more precisely: personal identity is not what matters, psychological continuity is, and the Simulacrum preserves it), but I haven’t had time to do deep research on his thought. I would really appreciate your insights on this.

The rationalist idea I don’t touch: AI Alignment

I know it’s a bit ‘handwavy’, but I just state that the ‘Spirits’ are obedient to us, with this explanation: we are the Spirits’ creators, and how could the Spirits not love their creators? I really just don’t have the space to add rogue AIs into the mix, because the main concept I’m trying to hit with this story is Othering and in-group/out-group thinking, especially after reading Edward Said’s Orientalism:

The Elvans and the Orcans are just Human After All.

The extinction risk I’m more worried about than AIs taking over is all of us killing each other before that could ever happen.

That being said, I’m open to changing that! I’ve written two of eight books, so there’s actually still lots of room to include AI Alignment as an idea in this story. I just don’t know how to work it in yet.

So, question for you, LessWrong:

Any ideas on how I can work AI Alignment as a problem into my story?

I would really appreciate your feedback! And if you’d like to read from the top, here you go: Still Alive After All [Biopunk Progression] | Royal Road