A system of infinite ethics

One unresolved problem in ethics is that aggregate consequentialist ethical theories tend to break down if the universe is infinite. An infinite universe could contain both an infinite amount of good and an infinite amount of bad. If so, you are unable to change the total amount of good or bad in the universe, which can cause aggregate consequentialist ethical systems to break down.

A variety of methods for dealing with this have been considered. However, to the best of my knowledge, all proposals so far either have severe negative side-effects or are intuitively undesirable for other reasons.

Here I propose a system of aggregate consequentialist ethics intended to provide reasonable moral recommendations even in an infinite universe. I would like to thank JBlack and Miranda Dixon-Luinenburg for helpful feedback on earlier versions of this work.

My ethical system is intended to satisfy the desiderata for infinite ethical systems specified in Nick Bostrom’s paper, “Infinite Ethics”. These are:

  • Resolving infinitarian paralysis. It must not be the case that all humanly possible acts come out as ethically equivalent.

  • Avoiding the fanaticism problem. Remedies that assign lexical priority to infinite goods may have strongly counterintuitive consequences.

  • Preserving the spirit of aggregative consequentialism. If we give up too many of the intuitions that originally motivated the theory, we in effect abandon ship.

  • Avoiding distortions. Some remedies introduce subtle distortions into moral deliberation.

I have yet to find a way in which my system fails any of the above desiderata. Of course, I could have missed something, so feedback is appreciated.

My ethical system

First, I will explain my system.

My ethical theory is, roughly, “Make the universe one that agents would wish they were born into”.

By this, I mean: suppose you had no idea which agent in the universe you would be, what circumstances you would be in, or what your values would be, but you still knew you would be born into this universe. Suppose also that you have a bounded quantitative measure of your general satisfaction with life, for example, a bounded utility function. Then try to make the universe such that the expected value of your life satisfaction is as high as possible, conditional on your being an agent in this universe but not on anything else. (Also, “universe” above means “multiverse” if there is one.)
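To make the metric concrete, here is a minimal Python sketch of the idea, assuming a toy, discretized prior over situations an agent could be born into. The situation categories, prior probabilities, and satisfaction numbers are purely illustrative assumptions, not part of the proposal itself.

```python
# Toy sketch of the proposed moral value metric: the expected life
# satisfaction of an agent that knows only that it is some agent in the
# universe. All categories and numbers below are illustrative.

situations = {
    # situation: (prior probability, expected life satisfaction in [0, 1])
    "happy agent on an Earthlike world": (0.20, 0.9),
    "unhappy agent on an Earthlike world": (0.10, 0.2),
    "agent elsewhere in the universe": (0.70, 0.5),
}

def moral_value(situations):
    """Expected life satisfaction, conditioning only on being an agent."""
    return sum(p * s for p, s in situations.values())

print(moral_value(situations))  # ≈ 0.55
```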

In the above description I didn’t provide any requirement for the agent to be sentient or conscious. Instead, all it needs is preferences. If you wish, you can modify the system to give higher priority to the satisfaction of agents that are sentient or conscious, or you can ignore the welfare of non-sentient or non-conscious agents entirely.

Calculate satisfaction as follows. Imagine hypothetically telling an agent everything significant about the universe, and then optionally giving them infinite processing power and infinite time to think. Then ask them, “Overall, how satisfied are you with that universe and your place in it?” That answer is the measure of their satisfaction with the universe. Giving them infinite processing power isn’t strictly necessary, and doesn’t do the heavy lifting of my ethical system, but it could be helpful for allowing creatures time to reflect on what they really want.

It’s not entirely clear how to assign a prior over the situations in the universe you could be born into. Still, I think it’s reasonably intuitive that such a prior would be fairly high-entropy, spreading non-zero probability across the many different situations in the universe. This is all I assume for my ethical system.

Now I’ll give some explanation of what this system recommends.

First off, my ethical system requires you to use a decision theory other than causal decision theory. In general, you can’t have any causal effect on the moral desirability of the universe as I defined it, which would lead to infinitarian paralysis. However, you can still have acausal effects, so decision theories that take these effects into account can still work.

Suppose you are in a position to do something to help the creatures of the world, or of our section of the universe; for example, suppose you have the ability to create friendly AI. And you’re using my ethical system and considering whether to do it. If you decide to do it, that logically implies that any other agent sufficiently similar to you and in sufficiently similar circumstances would also do it. Thus, if you decide to make the friendly AI, then the expected life satisfaction of an agent in circumstances of the form, “in a world with someone very similar to <insert description of yourself> who has the ability to make friendly AI”, is higher. And the prior probability of ending up in such a world is non-zero. Thus, by deciding to make the friendly AI, you acausally increase the total moral value of the universe, and so my ethical system would recommend doing it.
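Here is a rough sketch of that acausal reasoning in Python, under toy assumptions; the circumstance descriptions, prior weights, and satisfaction values are made up for illustration.

```python
# Sketch of the acausal reasoning above. Deciding to build the friendly AI
# logically implies that all sufficiently similar agents in sufficiently
# similar circumstances do the same, which raises the expected satisfaction
# of agents in worlds matching that description. All numbers are illustrative.

prior = {
    "world with someone like you who can build friendly AI": 0.001,
    "other situations": 0.999,
}

satisfaction_if_you_refrain = {
    "world with someone like you who can build friendly AI": 0.3,
    "other situations": 0.5,
}

satisfaction_if_you_build = {
    # Agents in these worlds now live alongside friendly AI.
    "world with someone like you who can build friendly AI": 0.9,
    "other situations": 0.5,
}

def expected_satisfaction(prior, satisfaction):
    return sum(prior[s] * satisfaction[s] for s in prior)

# The decision acausally raises the metric, so the system recommends it.
print(expected_satisfaction(prior, satisfaction_if_you_refrain))  # ≈ 0.4998
print(expected_satisfaction(prior, satisfaction_if_you_build))    # ≈ 0.5004
```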

Similarly, the system also allows you to engage in acausal trades to improve parts of the universe quite unlike your own. For example, suppose there are some aliens who are indifferent to the suffering of other creatures and only care about stacking pebbles. And you are considering making an acausal trade with them in which they avoid causing needless suffering in their section of the universe if you stack some pebbles in your own section. By deciding to stack the pebbles, you acausally make other agents in circumstances sufficiently similar to yours also stack pebbles, and thus make it more likely that the pebble-stackers avoid causing needless suffering. Thus, the expected life satisfaction of a creature in the circumstances, “a creature vulnerable to suffering that is in a world of pebble-stackers who don’t terminally value avoiding suffering”, would increase. If the harm (if any) of stacking some pebbles is sufficiently small and the benefits to the creatures in that world are sufficiently large, then my ethical system could recommend making the acausal trade.
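The cost-benefit structure of such a trade can be sketched the same way; again, the classes of circumstances, their prior weights, and the satisfaction changes are purely illustrative assumptions.

```python
# Toy cost-benefit sketch of the pebble-stacking acausal trade.

# circumstance: (prior probability, change in expected satisfaction if you trade)
effects_of_trade = {
    "agent like you who spends some time stacking pebbles": (0.001, -0.01),
    "suffering-vulnerable creature in a pebble-stacker world": (0.002, +0.40),
}

net_change = sum(p * delta for p, delta in effects_of_trade.values())
print(net_change)  # ≈ +0.00079 > 0, so the system recommends the trade
```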

The system also values helping as many agents as possible. If you only help a few agents, the prior probability of an agent ending up in a situation just like theirs is low. But if you help a much broader class of agents, the effect on the prior expected life satisfaction is larger.
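As a toy illustration, suppose the satisfaction boost for an agent in the helped class is the same in both cases; the difference in impact then comes entirely from how much prior probability mass the helped class occupies. All numbers are hypothetical.

```python
# The effect on the metric is (prior mass of the helped class) x (boost).
boost = 0.3  # satisfaction gain for an agent in the helped class

few_agents_mass = 0.0001   # prior probability of being one of a few specific agents
broad_class_mass = 0.01    # prior probability of being in a much broader class

print(few_agents_mass * boost)   # ≈ 0.00003
print(broad_class_mass * boost)  # ≈ 0.003
```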

These all seem like reasonable moral recommendations.

I will now discuss how my system does on the desiderata.

Infinitarian paralysis

Some infinite ethical systems result in what is called “infinitarian paralysis”. This is the state of an ethical system being indifferent in its recommendations in worlds that already have infinitely large amounts of both good and bad. If there’s already an infinite amount of both good and bad, then our actions, using regular cardinal arithmetic, are unable to change the amount of good and bad in the universe.

My system does not have this problem. To see why, remember that my system says to maximize the expected value of your life satisfaction given that you are in this universe, without conditioning on anything else. And the measure of life satisfaction was stated to be bounded, say to the range [0, 1]. Since any agent can only have life satisfaction in [0, 1], the expected value of life satisfaction in an infinite universe must still be in [0, 1]. So, as long as a finite universe doesn’t have an expected life satisfaction of 0, an infinite universe can have at most finitely more moral value than it.
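As a toy illustration of why boundedness helps, consider two universes that each contain infinitely many happy and infinitely many unhappy agents, but differ in the probability that a randomly situated agent is happy; the probabilities and satisfaction values below are assumptions for the sake of the example.

```python
# Sketch of why boundedness avoids infinitarian paralysis. Both universes
# can be thought of as containing infinitely many happy and infinitely many
# unhappy agents; what differs is the probability that a randomly situated
# agent is happy. The numbers are illustrative.

def moral_value(p_happy, sat_happy=0.9, sat_unhappy=0.1):
    """Expected satisfaction of an agent with no idea which agent it is."""
    return p_happy * sat_happy + (1 - p_happy) * sat_unhappy

universe_a = moral_value(p_happy=0.6)  # ≈ 0.58
universe_b = moral_value(p_happy=0.7)  # ≈ 0.66

# Both values lie in [0, 1] and differ by a finite amount, so acting to
# bring about universe_b rather than universe_a is not a matter of indifference.
print(universe_b - universe_a)  # ≈ 0.08
```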

To say it another way, my ethical system provides a function mapping possible worlds to their moral value, and this function always produces outputs in the range [0, 1]. So, trivially, no universe can have infinitely more moral value than another universe with non-zero moral value; infinite moral value just isn’t in the range of my moral value function.

Fanaticism

Another problem in some proposals of infinite ethical systems is that they result in being “fanatical” in efforts to cause or prevent infinite good or bad.

For example, one proposed system of infinite ethics, the extended decision rule, has this problem. Let g represent the statement, “there is an infinite amount of good in the world and only a finite amount of bad”. Let b represent the statement, “there is an infinite amount of bad in the world and only a finite amount of good”. The extended decision rule says to do whatever maximizes P(g) - P(b); ties are broken by choosing whichever action results in the most moral value conditional on the world being finite.

This results in being willing to incur any finite cost to adjust the probability of infinite good and finite bad even very slightly. For example, suppose there is an action that, if done, would increase the probability of infinite good and finite bad by 0.000000000000001%. However, if it turns out that the world is actually finite, it will kill every creature in existence. Then the extended decision rule would recommend doing this. This is the fanaticism problem.
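A toy calculation shows the shape of the problem; the probabilities below are made up, and the small increment stands in for the minuscule probability shift described above.

```python
# Toy illustration of the fanaticism problem with the extended decision rule.
# Action "do_nothing": leaves things as they are.
# Action "risky_action": tiny increase in P(infinite good, finite bad), but
#   kills every creature in existence if the world turns out to be finite.

p_g = {"do_nothing": 0.1, "risky_action": 0.1 + 1e-9}
p_b = {"do_nothing": 0.05, "risky_action": 0.05}

# The extended decision rule ranks actions by P(g) - P(b) alone; the
# catastrophic finite-world consequences of the risky action never enter
# the comparison except as a tie-breaker.
ranking = max(p_g, key=lambda a: p_g[a] - p_b[a])
print(ranking)  # "risky_action"
```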

My system doesn’t place any especially high importance on adjusting the probabilities of infinite good or infinite bad. Thus, it doesn’t have this problem.

Preserving the spirit of aggregate consequentialism

Aggregate consequentialism is based on certain intuitions, like “morality is about making the world as good as it can be”, and, “don’t arbitrarily ignore possible futures and their values”. But finding a system of infinite ethics that preserves intuitions like these is difficult.

One infinite ethical system, infinity shades, says to simply ignore the possibility that the universe is infinite. However, this conflicts with our intuition about aggregate consequentialism. The big intuitive benefit of aggregate consequentialism is that it’s supposed to actually systematically help the world be a better place in whatever way you can. If we’re completely ignoring the consequences of our actions on anything infinity-related, this doesn’t seem to be respecting the spirit of aggregate consequentialism.

My system, however, does not ignore the possibility of infinite good or bad, and thus is not vulnerable to this problem.

I’ll provide another conflict with the spirit of aggregate consequentialism. Another infinite ethical system says to maximize the expected amount of goodness of the causal consequences of your actions minus the amount of badness. However, this, too, doesn’t properly respect the spirit of aggregate consequentialism. The appeal of aggregate consequentialism is that it defines some measure of the “goodness” of a universe, and then recommends you take actions to maximize it. But your causal impact is no measure of the goodness of the universe: the total amount of good and bad in the universe would be infinite no matter what finite impact you have. Without providing a metric of the goodness of the universe that your actions can actually affect, this approach also fails to satisfy the spirit of aggregate consequentialism.

My system avoids this problem by providing such a metric: the expected life satisfaction of an agent that has no idea what situation it will be born into.

Now I’ll discuss another form of conflict. One proposed infinite ethical system looks at the average life satisfaction within a finite sphere of the universe, takes the limit of this average as the sphere’s size approaches infinity, and considers this the moral value of the world. This has the problem that you can adjust the moral value of the world by just rearranging agents. In an infinite universe, it’s possible to come up with a method of rearranging agents so the unhappy agents are spread arbitrarily thinly. Thus, you can make moral value arbitrarily high by just rearranging agents in the right way.
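A one-dimensional toy model (illustrative, not taken from Bostrom’s paper) shows how this works: both arrangements below contain infinitely many happy and infinitely many unhappy agents, yet the limiting average depends entirely on how they are laid out.

```python
# Toy illustration of the rearrangement problem for the "limit of averages
# over growing spheres" proposal, using a 1-D line of agents. Both
# arrangements contain infinitely many happy (satisfaction 1) and infinitely
# many unhappy (satisfaction 0) agents; only their positions differ.

import math

def alternating(i):
    """Unhappy agents at every other position: limiting average 0.5."""
    return i % 2

def spread_thin(i):
    """Unhappy agents only at perfect squares: limiting average 1.0."""
    return 0 if math.isqrt(i) ** 2 == i else 1

def average_up_to(arrangement, n):
    return sum(arrangement(i) for i in range(1, n + 1)) / n

for n in (10**3, 10**6):
    print(average_up_to(alternating, n), average_up_to(spread_thin, n))
# The first column tends to 0.5, the second to 1.0, even though the same
# agents exist in both arrangements.
```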

I’m not sure my system entirely avoids this problem, but it does seem to have substantial defense against it.

Suppose you have the option of redistributing agents however you want in the universe, and you’re using my ethical system to decide whether to spread the unhappy agents thinly.

Your actions have an effect on agents in circumstances of the form, “an unhappy agent on an Earthlike world with someone matching <insert description of yourself> who is considering spreading the unhappy agents thinly throughout the universe”. If you decided to spread them thinly, that wouldn’t make the expected life satisfaction of any agent satisfying the above description any better. So I don’t think my ethical system recommends this.

Now, we don’t have a complete understanding of how to assign a probability distribution over the circumstances an agent could be in. It’s possible that there is some way to redistribute agents in certain circumstances to change the moral value of the world. However, I don’t know of any clear way to do this. Further, even if there is, my ethical system still doesn’t allow you to make the moral value of the world arbitrarily high just by rearranging agents. This is because there will always be some non-zero probability of having ended up as an unhappy agent in the world you’re in, and that agent’s life satisfaction would still be low after being moved elsewhere in the universe.
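In contrast to the sphere-average proposal, here is a minimal sketch of why mere rearrangement has little purchase on my metric, under the simplifying assumption that moving agents around changes neither the prior probability of being one of the unhappy agents nor how satisfied they are with their lives; the numbers are illustrative.

```python
# Toy sketch: under the proposed metric, rearranging where the unhappy
# agents live does not change the prior probability of being one of them,
# nor (by assumption) their satisfaction, so the metric is unchanged.

def moral_value(p_unhappy, sat_unhappy, sat_happy):
    return p_unhappy * sat_unhappy + (1 - p_unhappy) * sat_happy

before = moral_value(p_unhappy=0.1, sat_unhappy=0.2, sat_happy=0.9)
after = moral_value(p_unhappy=0.1, sat_unhappy=0.2, sat_happy=0.9)
print(before == after)  # True
```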

Distortions

It’s not entirely clear to me how Bostrom distinguished between distortions and violations of the spirit of aggregate consequentialism.

To the best of my knowledge, the only distortion pointed out in “Infinite Ethics” is stated as follows:

Your task is to allocate funding for basic research, and you have to choose between two applications from different groups of physicists. The Oxford Group wants to explore a theory that implies that the world is canonically infinite. The Cambridge Group wants to study a theory that implies that the world is finite. You believe that if you fund the exploration of a theory that turns out to be correct you will achieve more good than if you fund the exploration of a false theory. On the basis of all ordinary considerations, you judge the Oxford application to be slightly stronger. But you use infinity shades. You therefore set aside all possible worlds in which there are infinite values (the possibilities in which the Oxford Group tends to fare best), and decide to fund the Cambridge application. Is this right?

My approach doesn’t ignore infinity and thus doesn’t have this problem. I don’t know of any other distortions in my ethical system.