Our human utility functions are bounded whether we like to admit it or not. If they were unbounded, we'd all be broke from paying Pascal's muggers. Even eternal suffering has a large but finite negative utility. Consider all the Christians who have believed in eternal torment in hell with probability much higher than 10^-1200 but sinned anyway. You're already much better off than they are (for one thing, nothing you do will make much of a difference to the probability of a mega-torture AI, so even if the idea keeps you up at night, it shouldn't interfere with your everyday life).
Bounded utilities come from the fact that at a large enough scale, everyone is an average utilitarian, not a total utilitarian, and this average is taken across time as well as across individuals. Eternal suffering just means that average utility goes to nearly the lowest possible value. Which is extremely bad, don't get me wrong. But it's not infinitely bad.
There's a fun experiment you can try at home with infinite expected suffering called "the St. Peppersberg game" [1], and it goes like this. You'll need a coin and a selection of spicy peppers. You start out with 1 point. Start flipping the coin. If you get heads, double your points. The first time you flip tails, the game ends, and you must eat a pepper of spiciness n on the Scoville scale [2], where n is your final score in the game. If you don't have a pepper with such a rating, then you must eat several peppers whose collective Scoville rating sums to n. Buy more peppers if necessary. If the coin is fair, then a simple calculation shows that the expected value of n is infinity: the outcome of k heads followed by a tail has probability (1/2)^(k+1) and score 2^k, so every such outcome contributes exactly 1/2 to the expectation, and the sum diverges.
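To see where the infinity comes from, here's a quick Python sketch of the game and its (truncated) expected value. The function names are mine, not part of any library:

```python
import random

def st_peppersberg_round(rng=random):
    """Play one round: start at 1 point, double on each heads, stop at the first tails."""
    n = 1
    while rng.random() < 0.5:  # heads with probability 1/2
        n *= 2
    return n

def truncated_expectation(max_flips):
    """Expected score if the game were capped at max_flips coin flips.

    Each term P(k heads, then tails) * score = (1/2)**(k+1) * 2**k = 1/2,
    so the capped expectation is max_flips / 2 and grows without bound.
    """
    return sum((0.5) ** (k + 1) * 2 ** k for k in range(max_flips))
```

Running `truncated_expectation(10)` gives 5.0, `truncated_expectation(1000)` gives 500.0, and so on: let the cap go to infinity and so does your expected Scoville intake, even though any single round ends after finitely many flips with probability 1.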
[1] https://en.wikipedia.org/wiki/St._Petersburg_paradox
[2] https://en.wikipedia.org/wiki/Scoville_scale