The Upper Limit of Value

I am happy to announce a new paper I co-wrote with Anders Sandberg, which is now available as a public preprint (PDF). The abstract is below, followed by a brief sketch of some of what we said in the paper.

Abstract: How much value can our decisions create? We argue that unless our current understanding of physics is wrong in fairly fundamental ways, there exists an upper limit of value relevant to our decisions. First, due to the speed of light and the definition of economic growth, the limit to economic growth is a restrictive one. Second, a related, far larger but still finite limit exists for value in a much broader sense, due to the physics of information and the ability of physical beings to place value on outcomes. We discuss how this argument can handle lexicographic preferences, probabilities, and the implications for infinite ethics and ethical uncertainty.

Physics is Finite and the Near-Term

First, there is a claim underlying our argument: our current understanding of physics is sufficient to conclude that the accessible universe is finite in volume, in time, and in the amount of information that can be stored. (The specific arguments for this are in the appendix of the paper.) We also assume humans are physical beings, without access to value unconnected to the physical world; anything they value in their minds is part of a physical process.

Given those two claims, we start with a discussion of purely economic value and the short-term future, specifically the next 100,000 years. During that time, the speed of light means that humanity will only have access to the Milky Way galaxy. Even in the optimistic case that we colonize the galaxy, the rate of growth in economic value is limited by the polynomial increase in accessible matter and volume of space. This implies that indefinite exponential economic growth is impossible; in fact, as we suggest in the paper, the sustainable exponential growth rate is almost certainly well below 1% per year over that time frame.
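As a rough illustration (my own numbers here, not the paper's more careful bounds): if accessible resources, and hence attainable economic value, grow at most like the volume of the reachable region, $V(t) \propto t^3$, then the implied instantaneous growth rate is

$$\frac{\dot{V}(t)}{V(t)} = \frac{3}{t},$$

which falls below 1% per year once $t > 300$ years, and keeps shrinking thereafter. No constant exponential growth rate can keep pace with a polynomial ceiling.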

This has some interesting implications for economic discussions about the proper discount rate for the far future, for the hinge-of-history hypothesis, and for the argument that humanity will reach an economic singularity, or at least a regime where growth continues indefinitely at an accelerating pace.

Value-in-General is Finite, Even When it Isn’t

The second half of our paper discusses value more generally, in the philosophical sense. Humans often remark that some things, like human life, are “infinitely valuable.” There is economic evidence that this is not literally true, but even taking the claim at face value, we argue that value is still limited.

In philosophy, preferences like this are referred to as “lexicographic,” by analogy with dictionary ordering, the same sense in which computer science uses the term for sorting. No quantity of a “lexicographically inferior” good, like blueberries, is worth as much as a single unit of a “lexicographically superior” good, say, a human life. Still, in a finite universe, no infinities are needed to represent this “infinite preference.” To quote from the paper:

We can consider a finite universe with three goods and lexicographic preferences $g_1 \succ g_2 \succ g_3$. We denote the number of each good $n_1, n_2, n_3$, and the maximum possible of each in the finite universe as $N_1, N_2, N_3$. Set $N = \max\{N_1, N_2, N_3\}$. We can now assign utility for a bundle of goods as $U(n_1, n_2, n_3) = n_1 (N+1)^2 + n_2 (N+1) + n_3$. This assignment captures the lexicographic preferences exactly. This can obviously be extended to any finite number of goods $g_i$, with a total of $k$ different goods, with any finite maximum of each.

(You should read the paper for a fuller account of the argument, and for the footnotes that I left out of this quote.)
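To make the construction concrete, here is a minimal Python sketch of the same idea, mine rather than the paper's: lexicographic preferences become ordinary finite utilities via base-$(N+1)$ positional encoding, where `N` is a stand-in for the finite maximum quantity of any one good.

```python
# A minimal sketch (not from the paper): lexicographic preferences over
# three goods, encoded as a single finite utility via base-(N+1) notation.

N = 1_000_000  # hypothetical finite maximum quantity of any one good


def utility(n1: int, n2: int, n3: int) -> int:
    """Utility of bundle (n1, n2, n3), with n1 lexicographically dominant.

    Larger utility if and only if the bundle is lexicographically larger,
    because each 'digit' is bounded by N and weighted by powers of N + 1.
    """
    assert all(0 <= n <= N for n in (n1, n2, n3))
    return n1 * (N + 1) ** 2 + n2 * (N + 1) + n3


# One unit of the superior good (a life) outranks any feasible quantity
# of the inferior goods (blueberries):
assert utility(1, 0, 0) > utility(0, N, N)
# Ties on the dominant good are broken by the next good, and so on:
assert utility(2, 0, 5) > utility(2, 0, 4)
```

The base must be $N+1$ rather than $N$ so that a maximal quantity of a lower-ranked good can never carry over and tie with one extra unit of a higher-ranked good.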

The above argument does not deal with expected utility, but in the paper we claim that not only are zero and one not probabilities, but neither are $\epsilon$ nor $1 - \epsilon$, for infinitesimal $\epsilon$. That is, we argue that it would be effectively incoherent to assign an infinitesimal probability in order to reach an infinite expected value. We also discuss why Boltzmann brains and non-causal decision theories don’t refute this claim, but for all of those, you’ll need to read the paper.
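As a simple illustration of why this matters (my framing, not a quote from the paper): if every attainable outcome has utility at most $U_{\max}$, then for any genuine probability distribution $p$ over outcomes,

$$\mathbb{E}[U] = \sum_i p_i u_i \le U_{\max} \sum_i p_i = U_{\max},$$

so an unbounded expectation requires either an infinitely valuable outcome or probabilities outside the ordinary real-valued axioms, and the paper argues that neither is available to physical agents.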

Given all of this, we’d love feedback and discussion, either as comments here or by email. Finally, I’ll quote the paper one last time, for the acknowledgements: not only was it awesome for me to co-write a paper with Anders, but we also got feedback from a variety of really incredible people.

We are grateful to the Global Priorities Institute for highlighting these issues and hosting the conference where this paper was conceived, and to Will MacAskill for the presentation that prompted the paper. Thanks to Hilary Greaves, Toby Ord, and Anthony DiGiovanni, as well as to Adam Brown, Evan Ryan Gunter, and Scott Aaronson, for feedback on the philosophy and the physics, respectively. David Manheim also thanks the late George Koleszarik for initially pointing out Wei Dai’s related work in 2015, and an early discussion of related issues with Scott Garrabrant and others on asymptotic logical uncertainty, both of which informed much of his thinking in conceiving the paper. Thanks to Roman Yampolskiy for providing a quote for the paper. Finally, thanks to Selina Schlechter-Komparativ and Eli G. for proofreading and editing assistance.