Cookies vs Existential Risk

I’ve been thinking for a while now about the possible trade-offs between present recreation and small reductions in existential risk, and I’ve finally gotten around to a (consequentialist) utilitarian analysis.

ETA: Most of the similar mathematical treatments I’ve seen assume a sort of duty to unrealized people, such as Bostrom’s “Astronomical Waste” paper. In addition to avoiding that assumption, my aim was to provide a more general formula for someone to use, in which they can enter differing beliefs and hypotheses. Lastly I include 3 examples using widely varying ideas, and explore the results.

Let’s say that you’ve got a mind to make a batch of cookies. That action has a certain amount of utility, from the process itself and/or the delicious cookies. But it might lessen (or increase) the chances of you reducing existential risk, and hence affect the chance of existential disaster itself. Now if these cookies will help x-risk reduction efforts (networking!) and be enjoyable, the decision is an easy one. Same thing if they’ll hurt your efforts and you hate making, eating, and giving away cookies. Any conflict arises when cookie making/eating is in opposition to x-risk reduction. If you were sufficiently egoist then risk of death would be comparable to existential disaster, and you should consider the two risks together. For readability I’ll refer simply to existential risk.

The question I’ll attempt to answer is: what reduction in the probability of existential disaster makes refraining from an activity an equally good choice in terms of expected utility? If you think that by refraining and doing something else you would reduce the risk at least that much, then rationally you should pursue the alternative. If refraining would cut risk by less than this value, then head to the kitchen.

*ASSUMPTIONS: For simplicity I’ll treat existential disaster as an abstract singular event, which we’ll survive or not. If we do, it is assumed that we do so in a way such that there are no further x-risks. Further, I’ll assume the utility realized past that point is not dependent on the cookie-making decision in question, and that the utility realized before that point is not dependent on whether existential disaster will occur. The utility calculation is also unbounded, since that is easier to specify. It is hoped that those not seeking to approximate having such a utility function can modify the treatment to serve their needs.*

E(U|cookies) = E(U|cookies, existential disaster) + U_post-risk-future • P(x-risk survival | cookies)

E(U|alternative) = E(U|alternative, existential disaster) + U_post-risk-future • P(x-risk survival | alternative)

Setting these two expected utilities to be equal we get:

E(U|cookies, existential disaster) - E(U|alternative, existential disaster) = U_post-risk-future • (P(x-risk survival | alternative) - P(x-risk survival | cookies))

or

ΔP(x-risk survival) = ΔE(U|existential disaster) / U_post-risk-future

where ΔP(x-risk survival) = P(x-risk survival | alternative) - P(x-risk survival | cookies)

and ΔE(U|existential disaster) = E(U|cookies, existential disaster) - E(U|alternative, existential disaster)

*I’m assuming both of these quantities are positive. Otherwise, there’s no conflict.*
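If you’d like to plug your own beliefs into this, the break-even condition is trivial to compute; here is a minimal Python sketch (the function and argument names are just labels of my own choosing):

```python
def break_even_risk_reduction(delta_eu_pre_disaster, u_post_risk_future):
    """Reduction in P(x-risk survival) at which the activity and the
    alternative have equal expected utility.

    delta_eu_pre_disaster: E(U|cookies, disaster) - E(U|alternative, disaster)
    u_post_risk_future:    utility expected in a post-risk future
    Both are assumed to be positive, as above.
    """
    return delta_eu_pre_disaster / u_post_risk_future
```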

Now to determine the utilities. The utility of a post-risk future I’ll compute as:

U_post-risk-future = base value(utility/time) • Σ_i ∫ s_i(t) • h_i(t) • D_i(t) dt

where the integral runs over the post-risk period of interest. base value(utility/time) is a constant for normalizing to ΔE(U|existential disaster) and factors out of our ratio, but it can give us a scale of comparison. Obviously you should use the same time scale for the integral limits. s_i(t) (range ≥ 0) is the multiplier for the change in subjective time due to faster cognition, h_i(t) (range = all real numbers) is the multiplier for the change in the base value(utility/time), and D_i(t) (0 ≤ range ≤ 1) is your discount function. All of these functions are defined with reference to each morally relevant entity i, with yourself as i = 1.

There are of course a variety of ways to do this kind of calculation. I felt the multiplication of a discount function with increases in both subjective time quantity and quality, integrated over the time period of interest and summed across conscious entities, was both general and intuitive.
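If you want to do this numerically rather than by hand, here is a sketch of that calculation in Python; the helper name, the scipy dependency, and the way entities are passed in are all my own choices, not anything canonical:

```python
from scipy.integrate import quad

def post_risk_utility(entities, base_value=1.0):
    """base value * sum over entities i of the integral of
    s_i(t) * h_i(t) * D_i(t) over the post-risk period.

    `entities` is a list of (s, h, D, t_start, t_end) tuples, where s, h,
    and D are functions of time (in the same units as base_value).
    """
    total = 0.0
    for s, h, D, t_start, t_end in entities:
        integral, _ = quad(lambda t: s(t) * h(t) * D(t), t_start, t_end)
        total += base_value * integral
    return total
```

For the agent-neutral examples below, where everyone else is treated identically, you can compute a single representative entity and multiply by the population rather than listing billions of entries.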

There are far too many variables here to summarize all possibilities with examples, but I’ll do a few, from both pure egoist and agent-neutral utilitarian perspectives (equal consideration of your and others’ wellbeing). I’ll assume the existential disaster would occur in 30 years, keeping in mind that the prior probability of disaster doesn’t actually affect the calculation. I’ll also set most of the functions to constants to keep it straightforward.

Static World

Here we assume that life span does not increase, nor does cognitive speed or quality of life. You’re contemplating making cookies, which will take 1 hour. The base value(utility/time) of current life is 1 utility/hour; you expect to receive 2 extra utility from making cookies, and you will also obtain 1 utility for each hour you live in a post-risk-future, which will be 175,200 hours over an assumed extra 20 years. For simplicity we’ll assume no discounting, and start off with a pure egoist perspective. Then:

ΔP(x-risk survival) = ΔE(U|existential disaster) / U_post-risk-future = 2/175,200 = 0.00114%, which might be too much to expect from working for one hour instead.

For an agent-neutral utilitarian, we’ll assume there’s another 2 utility that others gain from your cookies. We’ll include only the ≈6.7 billion currently existing people, who have a current world life expectancy of 67 years and an average age of 28.4, which would give them each 8.6 years (67 - 28.4 - 30) in a post-risk-future, worth 75,336 utility apiece. Then:

ΔP(x-risk survival) = ΔE(U|existential disaster) / U_post-risk-future = 4/(75,336 • 6,700,000,000) = 0.000000000000792%. You can probably reduce existential risk this much with one hour of work, but then you’re probably not a pure agent-neutral utilitarian with no time discounting. I’m certainly not.
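Both Static World figures are easy to reproduce; here is the arithmetic as a quick Python check, with the numbers taken straight from the text (the variable names are mine):

```python
hours_per_year = 24 * 365                    # 8,760 hours, ignoring leap days
u_post_self = 20 * hours_per_year            # 175,200 utility over 20 post-risk years
egoist = 2 / u_post_self                     # ~1.14e-05, i.e. ~0.00114%

u_post_other = 8.6 * hours_per_year          # ~75,336 utility per currently existing person
agent_neutral = 4 / (u_post_other * 6.7e9)   # ~7.9e-15, i.e. ~0.000000000000792%
```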

Conservative Transhuman World

In this world we’ll assume that people live about a thousand years, a little over 10 times conventional expectancy. We’ll also assume they think 10 times as fast and each subjective moment has 10 times higher utility. I’m taking that kind of increase from the hedonistic imperative idea, but you’d get the same effect by just thinking 100 times faster than we do now. Keeping it simple I’ll treat these improvements as happening instantaneously upon entering a post-risk-future. On a conscious level I don’t discount posthuman futures, but I’ll set D_i(t) = e^(-t/20) anyway. For those who want to check my math, the integral of that function from 30 to 1000 is 4.463.

Though I phrased the equations in terms of baked goods, they of course apply to any decision that trades off enjoyment against existential risk. Let’s assume you’re able to forgo all pleasure now for the sake of the greatest future pleasure, through existential risk reduction. In our calculation, this course of action is “alternative”, and living like a person unaware of existential risk is “cookies”. Our base value(utility/time) is an expected 1 utility/year of “normal” life (a very different scale from the last example), and your total focus would realize a flat 0 utility for those first 30 years. For a pure egoist:

ΔP(x-risk survival) = ΔE(U|existential disaster) / U_post-risk-future = 30/446.26 = 6.72%. This might be possible with 30 years of the total dedication we’re considering, especially with so few people working on this, but maybe it wouldn’t.

For our agent-neutral calculation, we’ll assume that your total focus on the large scale results in a net 5 fewer utility for others: the loss to those who won’t end up having as much fun with the “next person” as they would with you, minus the amount you might uniquely improve the lives of those you meet while trying to save the world. If they all realize utility similar to yours in a post-risk world, then:

ΔP(x-risk survival) = ΔE(U|existential disaster) / U_post-risk-future = 35/(446.26 • 6,700,000,000) = 0.00000000117%. With 30 years of dedicated work this seems extremely feasible.

And if you hadn’t used a discount rate in this example, the ΔP(x-risk survival) required to justify those short-term self-sacrifices would be over 217 times smaller.
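The same kind of quick check works for this example, including the no-discounting comparison (again, the variable names are mine):

```python
import math

# s*h = 10 * 10 = 100; discount D(t) = exp(-t/20); post-risk period from year 30 to year 1000
discount_integral = 20 * (math.exp(-30 / 20) - math.exp(-1000 / 20))   # ~4.463
u_post_self = 100 * discount_integral                                  # ~446.26

egoist = 30 / u_post_self                      # ~0.0672, i.e. ~6.72%
agent_neutral = 35 / (u_post_self * 6.7e9)     # ~1.17e-11, i.e. ~0.00000000117%

# ~217: the factor by which the required risk reduction shrinks without discounting
no_discount_factor = 100 * (1000 - 30) / u_post_self
```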

Nick Bostrom’s Utopia

Lastly I’ll consider the world described in Bostrom’s “Letter From Utopia”. We’ll use the same base value(utility/time) of 1 utility/year of “normal” life as the last example. Bostrom writes from the perspective of your future self: “And yet, what you had in your best moment is not close to what I have now – a beckoning scintilla at most. If the distance between base and apex for you is eight kilometers, then to reach my dwellings would take a million light-year ascent.” Taken literally this translates to h_i(t) = 1.183 • 10^18. I won’t bother treating s_i(t) as more than unity; though likely to be greater, that seems like overkill for this calculation. We’ll assume people live till most stars burn out, approximately 10^14 years from now (if we find a way during that time to stop or meaningfully survive the entire heat death of the universe, it may be difficult to assign any finite bound to your utility). I’ll start by assuming no discount rate.

Assuming again that you’re considering focusing entirely on preventing existential risk, then ΔP(x-risk survival) = ΔE(U|existential disaster) / U_post-risk-future = 30/(1.183 • 10^32) = 0.0000000000000000000000000000254%. Even if you were almost completely paralyzed, able only to blink your eyes, you could pull this off. For an agent-neutral utilitarian, the change in existential risk could be about 7 billion times smaller and still justify such dedication. While I don’t believe in any kind of obligation to create new people, if our civilization did seed the galaxy with eudaimonic lives, you might sacrifice unnecessary daily pleasures for a reduction in risk 1,000,000,000,000,000,000,000 times smaller still. Even with the discount function specified in the last example, a pure egoist would still achieve the greatest expected utility or enjoyment from an extreme dedication that achieved an existential risk reduction of only 0.000000000000000568%.
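And once more for Utopia, under the assumptions above (variable names mine; the percentages in the text are the probabilities below multiplied by 100):

```python
import math

h = 1.183e18              # the "eight kilometers vs. a million light-year ascent" ratio
lifespan = 1e14           # years until most stars burn out

u_post = h * lifespan                              # ~1.183e32
egoist = 30 / u_post                               # ~2.54e-31, i.e. the ~2.54e-29 % in the text

# With D(t) = exp(-t/20), nearly all of the weight sits in the first few decades:
u_post_discounted = h * 20 * math.exp(-30 / 20)    # ~5.28e18
egoist_discounted = 30 / u_post_discounted         # ~5.68e-18, i.e. ~0.000000000000000568%
```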

Summary

The above are meant only as illustrative examples. As long as we maintain our freedom to improve and change, and do so wisely, I put high probability on post-risk-futures gravitating in the direction of Bostrom’s Utopia. But if you agree with or can tolerate my original assumptions, my intention is for you to play around, enter values you find plausible, and see whether or how much your beliefs justify short term enjoyment for its own sake.

Lastly, keep in mind that maximizing your ability to reduce existential risk almost certainly does not include forgoing all enjoyment. For one thing, you’ll have at least a little fun fighting existential risk. Secondly, we aren’t (yet) robots, and we generally need breaks, some time to relax and rejuvenate, and some friendship to keep our morale up (as well as to keep us stimulated, or even sane). Over time, habit formation and other self-optimizations can reduce some of those needs, but that will only be carried through if you treat short-term enjoyment as little more than one element of reducing existential risk (assuming your analysis suggests you should). But everyone requires “balance”, by definition, and raw application of willpower won’t get you nearly far enough. It’s an exhaustible resource, and while it can carry you through several hours or a few days, it’s not going to carry you through several decades.

The absolute worst thing you could do, assuming once again that your analysis justifies a given short-term sacrifice for greater long-term gain, is to give up. If your resolve is about to fail, or already has, just take a break to really relax, for however long you honestly need (and you will need some time). Anticipating how effective you’ll be in different motivational states (which can’t be represented by a single number), and how best to balance motivation against direct application, is an incredibly complex problem that is difficult or impossible to quantify. Even the best solutions are approximations; people usually apply themselves too little, and sometimes too much. But overshooting and suffering burnout provides no rational basis for throwing up your hands in desperation and calling it quits, at least for longer than you need to. To an extent we might not yet be able to imagine, someday billions or trillions of future persons, including yourself, may express gratitude that you didn’t.