Saying the quiet part out loud: trading off x-risk for personal immortality

Statement: I want to deliberately balance caution and recklessness in developing AGI, such that it gets created at the last possible moment, so that my close ones and I do not die.

This Statement confuses me. There are several observations I can make about it. There are also many questions I want to ask but have no idea how to answer. The goal of this post is to deconfuse myself, and to get feedback on the points that I raised (or failed to raise) below.


First observation: The Statement is directly relevant to LW interests.

It ties together the issues of immortality and AI risk, both of which are topics people here care about. There are countless threads, posts and discussions about high-level approaches to AI safety, both in the context of “is” (predictions) and of “ought” (policy). At the same time, there is still a strong emphasis on individual action: deliberating on which choices to make, and on the marginal effects of living one’s life in a certain way. The same is true for immortality. It has been discussed to death, both from the high-level perspective and from the individual, how-do-I-sign-up-for-Alcor point of view. The Statement has been approached from the “is” perspective, but not from the “ought” one. At the same time:

Second observation: No one talks about the Statement.

I have never met anyone who expressed this opinion, in person or online, even after being part of the rationalist community (albeit somewhat on its periphery) for several years. Not only that, I have not been able to find any post or comment thread on LW or SSC/ACX that discusses it, argues for or against it, or really gives it any attention whatsoever. This confuses me, since the Statement seems fairly straightforward.

One reason might be the following:

Third observation: Believing in the Statement is low status, as it constitutes an almost-taboo opinion.

Not only is no one discussing it, but the few times I expressed the Statement in person (at EA-infiltrated rationalist meetups), it was met with suspicion or hostility. Although, to be honest, I am not sure how much of this is me misinterpreting the reactions. I got the impression that it is seen as sociopathic. Maybe it is?

Fourth observation: Believing in the Statement is incompatible with long-termism, and it runs counter to significantly valuing future civilisation in general.

Fifth observation: Believing in the Statement is compatible with folk morality and revealed preferences of most of the population.

Most people value their own lives, and the lives of those around them, to a much greater extent than the lives of those far away from them. This is even more true for future lives. The revealed-preference discount factor is bounded away from 1.
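To make “bounded away from 1” concrete, here is a small illustrative calculation (the 0.99 is an assumed number, not a measured one): with an annual discount factor of $\delta = 0.99$, a life 300 years from now receives weight

$$\delta^{300} = 0.99^{300} \approx 0.05,$$

so under revealed preferences the far future contributes almost nothing to the decision.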

Sixth observation: The Statement is internally consistent.

I don’t see any problems with it on the purely logical level. Rational egoism (or variants thereof) constitutes a valid ethical theory, although it is potentially prone to self-defeat.

Seventh observation: Because openly admitting to believing in the Statement is disadvantageous, it is possible that many people in fact hold this opinion secretly.

I have no idea how plausible this is; judging it is one of my main goals in writing this post. The comments are a good place for debating the meta-level points, but, if I am right about the cost of holding this opinion, not a good place for counting its supporters. An alternative is this anonymous poll I created: please vote if you are reading this.

Eighth observation: The Statement has the potential to explain some of the variance of attitudes to AI risk-taking.

One way of interpreting this observation is that people arguing against pausing, or for accelerating, AI development are intentionally hiding their motivations. But, given the standard Elephant-in-the-Brain arguments about people’s tendency to self-delude about their true reasons for acting or believing certain things, it seems possible that some of the difference between the Pause-aligned and the e/acc-aligned crowds comes down to this issue. Again, I have no idea how plausible this is. (People in e/acc are usually anonymous already, which is a point against my theory.)

Ninth observation: The Statement is not self-defeating in the sense in which ethical theories fail by being unable to induce coordination in society or to escape prisoner’s dilemmas.

Although the Statement relies on the ethical judgement of gambling the entire future of our civilisation on my meagre preferences for my own life and the lives of people around me, everyone’s choices are to a greater or lesser extent aligned (depending on how similar our life expectancies are). Most probably we, as the “current population”, survive together or die together, with everyone dying being the status quo. Trading off future lives does not create coordination problems (ignoring the possibility of acausal trade), leaving only the conflict between different age groups.

This leads me to the...

Tenth observation: Openly resolving the topic of the Statement leads to a weird political situation.

As noted above, the Statement implicitly refers to a Pareto frontier of AI existential risk-taking, parametrised by individual life expectancy. Therefore, assuming a sufficiently powerful political system, it becomes a negotiation about the cut-off point: who gets left behind and who gets to live indefinitely. There does not seem to be much negotiation space, since fairly compensating the first group seems impossible. This looks like a high-conflict situation. In a real-life, not-so-powerful system, it might lead to unilateral action by the people left below the cut-off, but they might not have enough resources to carry out the research and development alone. Also, the probabilistic nature of the decision would probably not trigger a strong outrage response.
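As a sketch of how the cut-off arises, here is a toy model, entirely my own construction with made-up numbers: assume the probability of alignment succeeding increases with development time, and that a person benefits only if aligned AGI arrives before they die (cryonics and s-risks ignored). The personally optimal arrival date then sits right at each person’s remaining life expectancy, which is what turns the choice of a society-wide timeline into a fight over the cut-off.

```python
# Toy model of the Statement's logic (illustrative only; all numbers are invented).
# Assumptions: P(alignment | AGI arrives in year T) grows with T, and a person
# benefits only if aligned AGI arrives before they die. Cryonics and s-risks ignored.

def alignment_success_prob(T, halflife=40.0):
    """Assumed probability that AGI arriving T years from now is aligned."""
    return 1.0 - 0.5 ** (T / halflife)

def personal_expected_value(T, remaining_life_years):
    """Expected personal payoff (immortality = 1, death = 0) if AGI arrives in year T."""
    if T > remaining_life_years:
        return 0.0  # already dead by then; the outcome no longer matters personally
    return alignment_success_prob(T)

for remaining in (10, 30, 50, 70):
    best_T = max(range(1, 101), key=lambda T: personal_expected_value(T, remaining))
    print(f"remaining life {remaining:>2} years -> personally optimal AGI arrival: year {best_T}")
```

Under these assumptions each person’s preferred arrival year equals their own remaining life expectancy, so any society-wide choice of a later date makes everyone with a shorter life expectancy strictly worse off.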

Eleventh observation: Much of the “working EA theory” might break when applied to the Statement.

How much do people posture altruism, and how much do they really care? EA signalling can be a good trade (10% of your income in exchange for access to the network and being generally well-liked). This was the argument SSC made in one of its posts: the beauty of EA today is that the inside does not really matter, because being motivated by true altruism and being motivated by status both produce the same result in the end. This can break, however, if the stakes are raised sufficiently high. (I personally approach the problem of death with a Pascal’s-wager-like attitude, but even with a discount rate of 0.9, the choice comes down to roughly a 10x life value.)
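Spelling out the arithmetic behind the 10x figure (assuming geometric discounting, which is what a flat “discount rate of 0.9” suggests): an open-ended continuation of life is worth

$$\sum_{k=0}^{\infty} 0.9^{k} = \frac{1}{1-0.9} = 10$$

times a single undiscounted life-period.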

Twelfth observation: The Statement does not trivialise either way: neither the optimal amount of recklessness nor the optimal amount of caution is zero.

The latter is a consensus (at least on LW). The former is not. The only take I have ever heard advocating for zero recklessness was that cryonics plus a certainly aligned AI later is worth more in expectation than a chance of aligned AI within our lifetimes. For obvious reasons, I consider this a very weak argument.
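To be explicit about the comparison being made (notation mine, probabilities unspecified):

$$p_{\text{cryonics works}} \cdot p_{\text{aligned AI, eventually}} \quad \text{vs.} \quad p_{\text{aligned AI within my lifetime}},$$

and the left-hand side inherits every link of the cryonics chain (good preservation, organisational survival, revival actually happening), each of which is itself uncertain.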

Thirteenth observation: For a median believer in the Statement, we might be passing the point where the balance tips too heavily towards pause.

The fundamental difference between the pause side and the accelerate side is that pause has much bigger social momentum behind it. Governments are good at delaying things and introducing new regulations, not so much at speeding things up and removing them (at least not recently). I cannot find a source, but I think EY admitted this in one of his interviews: the only good thing about government is that it will grind everything to a complete stop. After seeing literally all major governments take an interest (the US executive order, the UK summit), academia leaning heavily towards pause, and the general public becoming more and more scared of AI, I worry that the pause will become too entrenched and will have too much political momentum to stop and reverse course. I do not know how much I should count on market forces as a counterweight.

Fourteenth observation: Taking into account s-risks in addition to x-risk might change the tradeoff.

A final argument about which I am unsure: it seems plausible that the Statement should not be endorsed by anyone because of the potential of unaligned AI to cause suffering. But I do not think s-risks are sufficiently probable to justify pausing indefinitely.
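A sketch of how the s-risk term enters (notation mine, with death normalised to 0):

$$\mathbb{E}[U] = p_{\text{aligned}} \cdot V_{\text{live}} + p_{\text{x-risk}} \cdot 0 + p_{\text{s-risk}} \cdot V_{\text{suffer}}, \qquad V_{\text{suffer}} \ll 0,$$

so even a small $p_{\text{s-risk}}$ can flip the sign of the tradeoff if $V_{\text{suffer}}$ is large enough in magnitude; my judgement above is just that it is not.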


So: Am I the only one thinking these thoughts? Am I missing any obvious insights? Is this really a taboo view? Has it been debated before and resolved definitively one way or another? Is this some standard philosophical problem that already has a Google-able name? Is there any downside to openly discussing this whole thing that I do not see?