Rationalists don’t care about the future

Related to: Exterminating life is rational.

ADDED: Standard assumptions about utility maximization and time-discounting imply that we shouldn’t care about the future. I will lay out the problem in the hopes that someone can find a convincing way around it. This is the sort of problem we should think about carefully, rather than grasping for the nearest apparent solution. (In particular, the solutions “If you think you care about the future, then you care about the future” and “So don’t use exponential time-discounting” are easily grasped but vacuous; see the bullet points at the end.)

The math is a tedious proof that exponential time discounting trumps the polynomial gains from expansion into space. If you already understand that, you can skip ahead to the end. I have fixed the point raised by Dreaded_Anomaly. It doesn’t change my conclusion.

Suppose that we have Planck technology such that we can utilize all our local resources optimally to maximize our utility, nearly instantaneously.

Suppose that we colonize the universe at light speed, starting from the center of our galaxy. (We aren’t at the center of our galaxy, but this makes the computations easier and our assumptions more conservative: starting from the center is more favorable to worrying about the future, since it lets us grab lots of utility quickly near our starting point.)

Suppose our galaxy is a disc, so we can consider it two-dimensional. (The number of star systems expanded into per unit time is well-modeled in 2D, because the galaxy’s thickness is small compared to its diameter.)

The Milky Way is approx. 100,000 light-years in diameter, with perhaps 100 billion stars. These stars are denser at its center. Suppose density changes linearly (which Wikipedia says is roughly true), from x stars/sq. light-year at its center, to 0 at 50,000 light-years out, so that the density at radius r light-years is x(50000 − r). We then have that the integral over r = 0 to 50,000 of 2πr·x(50000 − r) dr = 100 billion. Working through it:

2πx(50000∫r dr − ∫r² dr) = 100 billion
πx(50000r² − 2r³/3), from r = 0 to 50,000, gives πx·50000³(1 − 2/3) = 100 billion
x = 100 billion / 130,900 billion ≈ 0.0007639
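
A quick numeric check of that normalization (a sketch of my own, using the closed form of the integral):

```python
# Solve for the central density x, where the integral of 2*pi*r*x*(50000 - r)
# over r in [0, 50000] must equal 100 billion stars. The integral comes to
# x * pi * R**3 / 3.
from math import pi

R = 50_000                 # galactic radius, in light-years
N = 100e9                  # total number of stars

integral_per_x = pi * R**3 / 3
print(integral_per_x)      # ~1.309e14, i.e. 130,900 billion
print(N / integral_per_x)  # x ~ 0.0007639 stars per square light-year
```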

We expand from the center at light speed, so our radius at time t (in years) is t light-years. The additional area enclosed in time dt is 2πt dt, which contains 2πt·x(50000 − t) dt stars.

Suppose that we are optimized from the start, so that expected utility at time t is proportional to the number of stars consumed at time t. Suppose, in a fit of wild optimism, that our resource usage is always sustainable. (A better model would be that we completely burn out resources as we go, so utility at time t is simply proportional to the ring of colonization at time t. This would result in worrying a lot less about the future.) Total utility at time t is 2πx∫s(50000 − s) ds, from s = 0 to t, = 2πx(50000t²/2 − t³/3) ≈ 120t² − 0.0016t³.
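
The coefficients in that approximation can be confirmed directly (again a sketch of mine, reusing the x found above):

```python
# Coefficients of the cumulative utility 2*pi*x*(50000*t**2/2 - t**3/3).
from math import pi

x = 0.0007639
print(2 * pi * x * 50_000 / 2)  # ~120    (coefficient of t**2)
print(2 * pi * x / 3)           # ~0.0016 (coefficient of t**3)
```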

Our time discounting for utility is related to what we find empirically today, encoded in our rate of return on investment, which roughly doubles every ten years. Suppose that, with our Planck technology, subjective time runs at Y Planck-tech years per Earth year, so our time discounting says that utility x received at time t+1 is worth only x/2^(0.1Y) received at time t. Thus, the utility that we, at time 0, assign to time t, with time discounting, is (120t² − 0.0016t³)/2^(0.1Yt). The total utility we assign to all time from now to infinity is the integral, from t = 0 to infinity, of (120t² − 0.0016t³)/2^(0.1Yt).

Look at that exponential, and you see where this is going.

Let’s be optimistic again, and drop the 0.0016t³ term, even though including it would make us worry less about the future. <CORRECTION DUE TO Dreaded_Anomaly> Rewrite 2^(0.1Yt) as (2^(0.1Y))^t = e^(at), where a = 0.1Y·ln 2. Integrate by parts to see that ∫t²e^(−at) dt = −e^(−at)(t²/a + 2t/a² + 2/a³). Then ∫120t²/2^(0.1Yt) dt = 120∫t²e^(−at) dt = −120e^(−at)(t²/a + 2t/a² + 2/a³), evaluated from t = 0 to infinity.</CORRECTION DUE TO Dreaded_Anomaly>
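
The integration by parts is easy to machine-check; here is a small sympy sketch of mine that differentiates the claimed antiderivative and evaluates the improper integral:

```python
# Verify the antiderivative of t**2 * e**(-a*t), and the integral to infinity.
import sympy as sp

t, a = sp.symbols('t a', positive=True)
F = -sp.exp(-a*t) * (t**2/a + 2*t/a**2 + 2/a**3)  # claimed antiderivative

# Differentiating F recovers the integrand exactly.
assert sp.simplify(sp.diff(F, t) - t**2 * sp.exp(-a*t)) == 0

# The improper integral from 0 to infinity comes out to 2/a**3.
print(sp.integrate(t**2 * sp.exp(-a*t), (t, 0, sp.oo)))
```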

For Y = 1 (no change in subjective time), the integral from t = 0 to infinity is about 6006 (dropping the constant factor of 120, which cancels out of every ratio below). For comparison, the integral from t = 0 to 10 years is only about 201, with the remaining 5805 coming after year 10: at Y = 1 the first 10 years account for just 3.3% of total utility as viewed by us in the present, and the discounting is still too weak to kill the future. For Y = 100, however, the first 10 years account for all but 1.95 × 10⁻²⁷ of the total utility.
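
A short script reproduces those numbers (my own check; the factor of 120 is dropped, since it cancels in the ratios):

```python
# Discounted utility is ~ t**2 * e**(-a*t), with a = 0.1 * Y * ln(2).
from math import exp, log

def tail(T, a):
    """Integral of t**2 * e**(-a*t) from T to infinity (from the antiderivative)."""
    return exp(-a*T) * (T**2/a + 2*T/a**2 + 2/a**3)

for Y in (1, 100):
    a = 0.1 * Y * log(2)
    total = 2 / a**3               # integral from 0 to infinity; ~6006 for Y=1
    first10 = total - tail(10, a)  # integral from 0 to 10 years; ~201 for Y=1
    print(Y, total, first10 / total)
    # Y=1:   first 10 years are ~3.3% of the total
    # Y=100: first 10 years are all but ~1.95e-27 of the total
```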

What all this math shows is that, even making all our assumptions so as to unreasonably favor getting future utility quickly and having larger amounts of utility as time goes on, time discounting plus the speed of light plus the Planck limit means the future does not matter to utility maximizers. The exponential loss due to time-discounting always wins out over the polynomial gains due to expansion through space. (Any space: even if we lived in a higher-dimensional space, light-speed expansion would still yield only polynomial growth, so the results would probably not change significantly.)
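
To make the claim about higher dimensions concrete, here is a sketch (mine, not part of the original argument): in d spatial dimensions the colonized volume grows like t^d, and the total discounted utility ∫t^d·e^(−at) dt = d!/a^(d+1) is finite for every fixed d, with the tail past any time T suppressed by the same e^(−aT) factor:

```python
# No fixed dimension escapes the exponential discount: the integral of
# t**d * e**(-a*t) from 0 to infinity is d!/a**(d+1), always finite.
import sympy as sp

t, a = sp.symbols('t a', positive=True)
for d in (2, 3, 4, 10):
    print(d, sp.integrate(t**d * sp.exp(-a*t), (t, 0, sp.oo)))
    # prints 2/a**3, 6/a**4, 24/a**5, 3628800/a**11
```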

Here are some ways of making the future matter:

  • Assume that subjective time will change gradually, so that each year of real time brings in more utility than the last.

  • Assume that the effectiveness at utilizing resources to maximize utility increases over time.

  • ADDED, hat tip to Carl Shulman: Suppose some loophole in physics that lets us expand exponentially, whether through space, additional universes, or downward in size.

  • ADDED: Suppose that knowledge can be gained forever at a rate that lets us increase our utility per star exponentially forever.

The first two don’t work:

  • Both these processes run up against the Planck limit pretty soon.

  • However far the colonization has gone when we run up against the Planck limit, the situation at that point will be worse (from the perspective of wanting to care about the future) than starting from Earth, since the yearly rate of gain in utility, divided by the total utility already accumulated, only shrinks as you move out from the galactic core.

So it seems that, if we maximize expected total utility with time discounting, we need not even consider expansion beyond our planet. Even the inevitable extinction of all life in the Universe from being restricted to one planet scarcely matters in any rational utility calculation.

Among other things, this means we might not want to turn the Universe over to a rational expected-utility maximizer.

I know that many of you will reflexively vote this down because you don’t like it. Don’t do that. Do the math.

ADDED: This post makes it sound like not caring about the future is a bad thing. Caring about the future is also problematic, because the utility of the distant future then overwhelms any considerations about the present. For example, while an FAI that doesn’t care about the future might neglect expansion into space, it won’t kill 90% of the people on Earth because they pose a threat during this precarious transition period.

ADDED: Downvoting this is saying, “This is not a problem”. And yet, most of those giving their reasons for downvoting have no arguments against the math.

  • If you do the math, and you find you don’t like the outcome, that does not prove that your time-discounting is not exponential. There are strong reasons for believing that time-discounting is exponential, whereas having a feeling that you hypothetically care about the future is not especially strong evidence that your utility function is shaped in a way that makes you care about the future, or that you will in fact act as if you cared about the future. There are many examples where people’s reactions to described scenarios do not match utility computations! You are reading LessWrong; you should be able to come up with a half-dozen off the top of your head. When your gut instincts disagree with your utility computations, it is usually evidence that you are being irrational, not proof that your utility computations are wrong.

  • I am fully aware that saying “we might not want to turn the Universe over to a rational expected-utility maximizer” shows I am defying my own utility calculations. I am not a fully rational expected-utility maximizer. My actions do not constitute a mathematical proof; even less so my claims, in the abstract, about what my actions would be. Everybody thinks they care about the future; yet few act as if they do.

  • The consequences are large enough that it is not wise to say, “We can dispense with this issue by changing our time-discounting function”. It is possible that exponential time-discounting is right, and caring about the future is right, and that there is some subtle third factor that we have not thought of that works around this. We should spend some time looking for this answer, rather than trying to dismiss the problem as quickly as possible.

  • Even if you conclude that this proves that we must be careful to design an AI that does not use exponential time-discounting, downvoting this topic is a way of saying, “It’s okay to ignore or forget this fact even though this may lead to the destruction of all life in the universe.” Because the default assumption is that time-discounting is exponential. If you conclude, “Okay, we need to not use an exponential function in order to not kill ourselves”, you should upvote this topic for leading you to that important conclusion.

  • Saying, “Sure, a rational being might let all life in the Universe die out; but I’m going to try to bury this discussion and ignore the problem because the way you wrote it sounds whiny” is… suboptimal.

  • I care about whether this topic is voted up or down because I care (or at least think I care) about the fate of the Universe. Each down-vote is an action that makes it more likely that we will destroy all life in the Universe, and it is legitimate and right for me to argue against it. If you’d like to give me a karma hit because I’m an ass, consider voting down Religious Behaviourism instead.