North Sentinelese Post-Singularity
Many people don’t want to live in a crazy sci-fi world, and I predict I will be one of them.
People in the past have mourned technological transformation, and they saw less change in their lifetimes than I will in mine.[1]
It’s notoriously difficult to describe a sci-fi utopia which doesn’t sound unappealing to almost everyone.[2]
I have plans and goals which would be disrupted by the sci-fi stuff.[3]
In short: I want to live an ordinary life — mundane, normal, common, familiar — in my biological body on Earth in physical reality. I’m not cool with being killed, even if I learn that, orbiting a distant black hole 10T years in the future, there is a server running a simulation of my brain in a high-welfare state.
Maybe we have something like a “Right to Normalcy” — not a legal right, but a moral right. The kind of right that means we shouldn’t airdrop iPhones on North Sentinel Island.
And that reminds me—what do we actually do with the North Sentinelese? Do we upgrade them into robot gods, or do they continue their lives? How long do we sentinelize them? As long as we think they would’ve survived by themselves? Or until the last stars fizzle out in 100 trillion years? I don’t know.
This “Right to Normalcy” might demand something like Stratified Utopia. TLDR: If you want to do normal stuff then you can stay on Earth; if you want to do galaxy-brained stuff then wait until you reach the distant stars.
Of course, there’s no way for everything to remain “normal” — it’s normal to have 2% economic growth, but 2% economic growth plus a few centuries will make the world very not-normal.[4] No one said my values must be logically coherent. I’m not sure how to resolve this, but maybe “I know it when I see it” suffices to demarcate normal.
That said, I’m not sure I want to be sentinelized. It’s kinda undignified. But it’s probably the best tradeoff between my mundane values and exotic values.
I’m not sure.
- ^
Before we began living outside of history like Cowen says, we experienced an era of fast technological progress. I imagine the people around the year 1900, realizing that cars and radios and airplanes were about to change the world forever, or those a few decades prior, when progress meant the telephone and electricity and railroads. For the most part they must have been okay with it — a new chapter was beginning in the history of the human race, and that was good and momentous — and yet I assume that many couldn’t help feeling pre-nostalgic. A better world was coming, which means they had to mourn the old one.
— Pre-Nostalgia in the Late Pre-AI Era (Étienne Fortier-Dubois, March 30th 2023)
- ^
I think this points to a kind of paradox at the heart of trying to lay out a utopian vision. You can emphasize the abstract idea of choice, but then your utopia will feel very non-evocative and hard to picture. Or you can try to be more specific, concrete and visualizable. But then the vision risks feeling dull, homogeneous and alien.
— Why Describing Utopia Goes Badly (Holden Karnofsky, Dec 7th 2021)
- ^
There’s this common plan people have for their lives. They go to school, get a job, have kids, retire, and then they die. But that plan is no longer valid. Those who are in one stage of their life plan will likely not witness the next stage in a world similar to our own. Everyone’s life plans are about to be derailed.
This prospect can be terrifying or comforting depending on which stage of life someone is at, and depending on whether superintelligence will cause human extinction. For the retirees, maybe it feels amazing to have a chance to be young again. I wonder how middle schoolers and high schoolers would feel if they learned that the career they’ve been preparing for won’t even exist by the time they would have graduated college.
— Mourning a life without AI (Nikola Jurkovic, Nov 8th 2025)
- ^
Let’s try some numbers. Today we have about ten billion people with an average income about twenty times subsistence level, and the world economy doubles roughly every fifteen years. If that growth rate continued for ten thousand years the total growth factor would be 10^200.
There are roughly 10^57 atoms in our solar system, and about 10^70 atoms in our galaxy, which holds most of the mass within a million light years. So even if we had access to all the matter within a million light years, to grow by a factor of 10^200, each atom would on average have to support an economy equivalent to 10^140 people at today’s standard of living, or one person with a standard of living 10^140 times higher, or some mix of these.
— Limits To Growth (Robin Hanson, Sep 22nd 2009)
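For what it’s worth, here is a quick sanity check of the arithmetic in Hanson’s quote: a minimal sketch in Python that uses only the figures given above (a 15-year doubling time, a 10,000-year horizon, roughly 10^10 people today, and roughly 10^70 atoms within a million light years).

```python
# Sanity check of the growth figures quoted from Hanson's "Limits To Growth".
import math

doubling_years = 15      # "the world economy doubles roughly every fifteen years"
horizon_years = 10_000   # growth horizon considered in the quote
people_today_log10 = 10  # "about ten billion people"
atoms_log10 = 70         # "about 10^70 atoms in our galaxy"

# Total growth factor over the horizon: 2^(10,000 / 15).
growth_log10 = (horizon_years / doubling_years) * math.log10(2)
print(f"total growth factor ≈ 10^{growth_log10:.1f}")  # ≈ 10^200.7, i.e. the quoted ~10^200

# Per-atom burden: today's economy (≈10^10 person-equivalents at today's living standard),
# grown by that factor, spread over ~10^70 atoms.
per_atom_log10 = people_today_log10 + growth_log10 - atoms_log10
print(f"per-atom economy ≈ 10^{per_atom_log10:.1f} people at today's living standard")  # ≈ 10^140.7
```

Both outputs land within rounding distance of the quoted ~10^200 and ~10^140.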
I completely agree that everyone has a right to normalcy.
However, “The Sentinelese” and “Sentinelese Culture” are not moral patients, in my opinion. The individuals are. Each person has the right to choose for themselves how they would live, given adequate information.
I believe that holds true under current capabilities also, but there’s no way to give each Sentinelese person a choice like that without immediate disruption that is likely to make their lives worse.
If future technological capabilities allow wider society to give each Sentinelese person an informed choice of how they should live without likely bad effects, that is what I consider the best path.
My best bet for what we should do with the North Sentinelese—and with everyone post-singularity—is that we uplift them if we think they would “ideally” want that. And “ideally” is in scare quotes because no one knows what that means.
I suppose you’re on the money with distaste for others’ utopias, because I think the idea of allowing people to make choices that destroy most of their future value (without some sort of consultation) is a terrible thing. Our brains and culture are not built to grasp the size of choices like “choosing to live to 80 instead of living forever” or “choosing a right to boredom vs. an optimized experience machine”. Old cultural values, like the belief that death brings meaning to life or that the pain of suffering is intrinsically meaningful, will have no instrumental purpose in the future, so it seems harsh to let them continue to guide so many people’s lives.
Without some new education/philosophy/culture around these choices, many people will either be ignorant of their options or have preferences that make them much worse off. You shouldn’t just give the Sentinelese the option of immortality; you should also provide some sort of education that makes the consequences of their choices clear beforehand.
This is a very difficult problem. I’m not a strict utilitarian, so I wouldn’t support forcing everyone to become hedonium—personal preferences should still matter. But it’s also clear that extrapolating our current preferences leaves a lot of value on the table, relative to how sublime life could be.
The way I’ve thought about this is that a just utopia should have an “opt-out” button: in order to ensure that no one is strictly worse off, we should (to the best of our ability) preserve their right to still experience their old life.
A corollary is that I think we should allow Christian homeschools to exist in the year 3000.
yeah I think we should allow Christian homeschools to exist in the year 3000.
But this cuts against some other moral intuitions, like “people shouldn’t be made worse off as a means to an end” (e.g. I don’t think we should have wars as a means to inspire poets). And presumably the people in the Christian homeschools are worse off.
Maybe the compromise is something like:
- On every day you are in the homeschool, we will “uplift” you if we think you would “ideally” want that.
- Something like pre-existence theodicy, i.e. people born to Christian homeschooling parents consent to that life before they are incarnate, possibly in return for compensation (supposing something like souls or personal identity exists).
I’m hopeful the details can be fleshed out in late crunch-time.
You don’t want to be figuring out ethics in crunch time, you want to be figuring out how to pass off (or buy time to pass off) the question to something-like-CEV or a long reflection.
Christian homeschoolers from Buck’s thought experiment don’t just live their old lives; they also don’t even know that the Biblical mythology is filled with errors. I understand why the opt-out button is necessary (e.g. due to nostalgia-related drives or actual benefits attained by living in religious communities), but the kids can hardly be denied the right to learn the ground truth obscured by myths.
Unlike the homeschoolers in Buck’s thought experiment, indigenous peoples have never been a part of the Euro-American[1] civilisation, and there was no misaligned leader to rob them of the ground truth.
Or any other civilisation.
I think similar sentiments are largely a failure of the imagination. The possibilities demand a whole lot of imagination.
The only thing you can’t have post-singularity is truly suffering people to help. And if you must have that, and refuse to tweak your reward system so you no longer need it, you can enter a simulation where it seems exactly like you have it.
If you want a mundane existence you can simulate that until you’re bored, then join the crowd doing things that are really new and exciting.
You don’t stop being you from any little tweak to your reward system or memory. And they’re all reversible.
Intuitions fail here for a good reason.
The possibilities are limitless.
My mundane values care about real physical stuff, not simulated stuff.
If you can’t tell the difference, how could you care which is which?
I’m talking about blocking your memories of living in a simulated world.
What’s the principle here? If an agent would have the same observations in world W and world W’, then their preferences must be indifferent between W and W’? This seems clearly false.
I wouldn’t make that argument. I just don’t see the point of keeping it real.
It just seems like going virtual opens up a lot of possibilities with no downside. If you want consistency and real work, put it in your sim. Share it with other real people if you want to compromise on what worlds you’ll inhabit and what challenges you’ll face.
If you want real people who are really suffering so there are real stakes to play for, well, that’s an orthogonal issue. I’d rather see nonconsensual suffering eliminated.
So: Why prefer the Real? What’s it got that the Virtual doesn’t?
For example, the agent might decide that the utility it assigns to anything it knows to be virtual is close to zero, because the agent believes in a real-world mission (e.g. Agent-2 was supposed to eventually reach the SC level and do actual AI-related research, but it was also trained to solve simulated long-term tasks like playing through video games).
As for reasons to believe that the contribution of anything virtual to the utility function is close to zero… one level is opportunity cost: real-world outcomes[1] are traded away for something useless (e.g. a schoolboy’s knowledge vs. missions passed in GTA). The next level is the reasons real-world outcomes matter in the first place. Before the possibility of a post-work future, society’s members were supposed to do work that others deemed useful enough to pay for, work that would somehow increase the well-being of the collective’s members or help the whole collective reach its terminal goals (e.g. inspire its members to be more creative or work harder). The virtual world is known to be a superstimulus, and it could be as unlikely to increase the collective’s well-being as fast food, which makes people obese.
Including things like actual skills learned during games, as happened with Agent-2’s ability to solve long-term tasks.
You’re assuming that there will be a distinction post-singularity? Are you not a physicalist?
Why do you say this?
This is under the assumption that things have gone well, so everyone is very empowered to have the life they want, and the assumption of benevolent superintelligence that can help better than any human if someone wants help.
That seems pretty simplistic. Suppose you’re born on North Sentinel, but, despite your (unchosen and probably very forcefully inculcated) native culture, you decide you want to go off and join the, um, “galaxy brains” in the outside world. Do you have any realistic chance to do that? Applies in the other direction, too, for that matter.
Yes, I support something like uplifting, as described in other comments in this post.
OK, but then at what point are you going to measure whether somebody would “ideally” want to be “uplifted”? Are you going to take a newborn’s CEV, or an adult’s? Or somewhere in between? My guess is that insofar as you could define the CEV for a newborn at all, it would basically always be to get “uplifted”, whereas for an adult it would rarely be. The adult’s CEV would also not include having every child taken from the island.
I know you leave “ideally” undefined, and maybe it’s not CEV-like, but I don’t know what it could be like if it had to solve that problem.
I am curious what aspects of sci-fi utopias seem unappealing to you. Something like ‘The Culture’, for instance, doesn’t have any downsides I can think of.
Seemed kind of boring to live in to me.
We didn’t get much of a pitch for the projects and challenges people in the Culture take on.
But yeah it did seem boring. At least in comparison to the challenge and purpose of Contact and SC.
I think that’s a failure of alignment in-world, and a necessity of writing for a broad audience from the outside.
What do you want from life, that the Culture doesn’t offer?
The possibility of mastering some niche thing that has some genuine demand so that me doing it will be the best way to satisfy the demand. Even better if it involves coming up with something that hasn’t ever been done before.
There’s a reason that Player of Games is about Contact and not (just) board games. However, Contact cannot exist WITHIN the Culture.
To answer more explicitly, I want to explore worlds where the wildness of nature has not been restrained.
I’m not sure what you mean, either in-universe or in the real world.
In-universe, the Culture isn’t all powerful. Periodically they have to fight a real war, and there are other civilizations and higher powers. There are also any number of ways and places where Culture citizens can go in order to experience danger and/or primitivism. Are you just saying that you wouldn’t want to live out your life entirely within Culture habitats?
In the real world… I am curious what preference for the fate of human civilization you’re expressing here. In one of his novels, Olaf Stapledon writes of the final and most advanced descendants of Homo sapiens (inhabiting a terraformed Neptune) that they have a continent set aside as “the Land of the Young”, a genuinely dangerous wilderness area where the youth can spend the first thousand years of their lives, reproducing in miniature the adventures and the mistakes of less evolved humanity, before they graduate to “the larger and more difficult world of maturity”. But Stapledon doesn’t suppose that his future humanity is at the highest possible level of development and has nothing but idle recreations to perform. They have serious and sublime civilizational purposes to pursue (which are beyond the understanding of mere humans like ourselves), and in the end they are wiped out by an astronomical cataclysm. How’s that sound to you?
It sounds like Stapledon’s scenario still has grown-up humans able to work on some sort of cutting edge stuff for their civilization. This is very different from the Culture, where humans are basically stuck as housecats and any new stuff is complex enough that only the Minds can tackle it.
Have you read Friendship is Optimal? Is that outcome unappealing to you in a way which can’t be easily patched (e.g. removing the “become a pony” requirement)? Do you think it would be unappealing to almost everyone in ways that can’t be easily patched?
In a word, yes. Very unappealing.
Why?
The whole “abrupt destructive upload” thing is going to be a hard no for at least a lot of people, even if they buy into “it’s just as good to be simulated”… which I suspect most people would not and could not easily be convinced to.
That is easily patched via full-dive VR. You can keep your biological body. No upload necessary.
Celestia won’t like that. It costs a lot, and, worse, it makes her access to your values imperfect, so she can’t satisfy them absolutely optimally. Which is presumably why she never offers it to anybody in the story.
Even without an upload, however, she does understand you well enough to manipulate you and modify your values. She won’t let herself edit your mind directly, but she’s more than happy to feed you whatever stimuli are necessary to convince you to change your values, or perhaps to shift to a different attractor within your dynamic value trajectory, and take that upload.
She can apparently do that while still keeping your values satisfied enough at each moment that she doesn’t find herself compelled to stop. She makes a lot of people’s lives outside of Equestria suck in order to get them uploaded, but that doesn’t seem to be enough of a violation of their values to stop her. She constantly manipulates people in ways humans wouldn’t tolerate other humans doing.
She apparently manages to suck in literally everybody in the world except for one guy (or maybe him and a small handful of others who died before him). And I’m sure that many of the people who ended up valuing being uploaded wouldn’t have approved of the manipulation before she started in on them.
How do you patch that out of her? I mean, I don’t know how to build her to begin with, but it doesn’t seem like a small patch.
I’d be pretty unhappy if things stayed the same for me. I’d want at least some radical things, like curing aging. But still, I will definitely want to be able to decide for myself and not be forced into anything in particular (unless really necessary for some reason, like to save my life, but hopefully it won’t be).
Sort of agree, but I think there are paths to gradual mind uploading that I’m happy with. It’s probably worth it for me, though it’s also very likely that enough people will want to be corporeal on Earth that we won’t disassemble Sol for its free energy.
How do you know for certain there will be an “I” in this “crazy sci-fi world”?
If you don’t… then it seems like an incoherent thing to even discuss.
Edit: Removed previous edit due to Screwtape having apologized.
Bella: “I made a bet on this coin flip. If it comes up heads, I’m going to use the money to go out for dinner! Hrm, where would I want to eat if I win…”
Carl: “How do you know for certain it will come up heads? If you don’t, it seems like an incoherent thing to even discuss.”
Bella: “I’m not certain it will happen. It would be a bad idea to put too much weight or too many assumptions on something with only a 50% probability. But things can be both uncertain to happen and also coherent enough to talk about.”