Let’s make a “Rational Immortalist Sequence”. Suggested Structure.

“Why Don’t Futurists Try Harder to Stay Alive?” asks Rob Wiblin at Overcoming Bias.

Suppose you want to live for more than ten thousand years (I’ll assume that suffices for the “immortalist” designation). Many here do.

Suppose in addition that this is by far, very far, your most important goal. You’d sacrifice a lot for it. Not everything, but a lot.

How would you go about your daily life? In which direction would you change it?

I want to examine this in a sequence, but I don’t want to write it on my own; I’d like to do it with at least one other person. I’ll lay out the structure for the sequence here. Anyone who wants to help, whether by writing an entire post (one of these or another) or parts of several, please contact me in the comments or by message. Obviously we don’t need all of these posts; they are just suggestions. The sequence won’t be about whether this is a good idea. Just assume that the person wants to achieve some form of Longevity Escape Velocity. Taking as a given that this is what an agent wants, what should she do?

1) The Ideal Simple Egoistic Immortalist—I’ll write this one; the rest are up for grabs.

Describes the general goal of living long, and explains that it is not about living long in hell, not about finding mathy or Nozickian paradoxes, and not about solving the moral uncertainty problem. It is simply about trying to somehow achieve a very long life worth living. Describes the two main classes of optimization: 1) optimizing your access to the resources that will grant immortality, and 2) optimizing the world so that immortality happens faster. Sets “3) diminish X-risk” aside for the moment, and moves on to a comparison of the two major classes.

2) Everything else is for nothing if A is not the case

Shows the weaker points (the A’s) of different strategies. What if uploads don’t inherit the properties in virtue of which we’d like to be preserved? What if cryonics facilities are destroyed by enraged people? What if some X-risk obtains and you die with everyone else? What if there is no personal identity in the relevant sense, and immortality is a desire without a referent (no possible future world in which the desired thing obtains)? And as many other scenarios as the author might like to add.

3) Immortalist Case Study—Ray Kurzweil

Examines Kurzweil’s strategy, given his background (age, IQ, opportunities available while young, etc.). Emphasis, for Kurzweil and the others, on how optimally they balance the two classes of optimization.

4) Immortalist Case Study—Aubrey de Grey

5) Immortalist Case Study—Danila Medvedev

Danila has been filming everything he does for hours a day. I don’t know much else about him, but I suppose he is worth examining.

6) Immortalist Case Study—Peter Thiel

7) Immortalist Case Study—Laura Deming

She’s been fighting death since she was 12, went to MIT to do research on it, and recently got a Thiel Fellowship and pivoted to fundraising. She’s 20.

8) Immortalist Case Study—Ben Best

Ben Best directs the Cryonics Institute. He has written extensively on mechanisms of ageing, on economics and resource acquisition, and on cryonics. A lot can be learned from his example.

9) Immortalist Case Study—Bill Faloon

Bill is a long-time cryonicist. He founded the Life Extension Foundation decades ago and makes a lot of money from it to this day. He’s a leading figure both in the Timeship project (a super-protected facility for frozen people) and in gathering the cryonics youth together.

10) How old are you? How much are you worth? How these influence immortalist strategies—this one I’d like to participate in.

11) Creating incentives for your immortalism—this one I’ll write

How to increase the number of times that reality strikes you with incentives that make you more likely to pursue the strategies you should pursue, as a simple egoistic immortalist.

12, 13, 14, … If it suits the general topic, it could go here. Previous posts on related topics could also be incorporated.

Edit: The suggestion is not that you have to really want to be the ideal immortalist to take part in writing a post. My goals are far from being nothing but immortalist ones. But I would love to know: were that my only goal, what should I be doing? First we get the abstraction. Then we factor in everything else about ourselves, and we will have learned something from the abstraction.

It seems people were afraid that by taking part in the sequence they’d be signalling that their only goal is to live forever. This misses both the concept of an assumption and the idea of an informative idealized abstraction.

What I’m suggesting we do here with immortality could just as well be done with some other goal, like “The Simple Ideal Anti-Malaria Fighter” or “The Simple Ideal Wannabe Cirque du Soleil Performer”.

So who wants to play?