Value Uncertainty and the Singleton Scenario

In January of last year, Nick Bostrom wrote a post on Overcoming Bias about his and Toby Ord's proposed method of handling moral uncertainty. To abstract away a bit from their specific proposal, the general approach was to convert a problem involving moral uncertainty into a game of negotiation, with each player's bargaining power determined by one's confidence in the moral philosophy represented by that player.

Robin Hanson suggested in his comments to Nick's post that moral uncertainty should be handled the same way we're supposed to handle ordinary uncertainty, by using standard decision theory (i.e., expected utility maximization). Nick's reply was that many ethical systems don't fit into the standard decision theory framework, so it's hard to see how to combine them that way.

In this post, I suggest we look into the seemingly easier problem of value uncertainty, in which we fix a consequentialist ethical system and just try to deal with uncertainty about values (i.e., about the utility function). Value uncertainty can be considered a special case of moral uncertainty in which there is no apparent obstacle to applying Robin's suggestion. I'll consider a specific example of a decision problem involving value uncertainty, and work out how Nick and Toby's negotiation approach differs in its treatment of the problem from standard decision theory. Besides showing the difference between the approaches, I think the specific problem is also quite important in its own right.

The problem I want to consider is this: suppose we believe that a singleton scenario is very unlikely, but may have very high utility if realized. Should we focus most of our attention and effort on trying to increase its probability and/or improve its outcome? The main issue here is (putting aside uncertainty about what will happen after a singleton scenario is realized) uncertainty about how much we value what is likely to happen.

Let's say there is a 1% chance that a singleton scenario does occur, and conditional on it, you will have expected utility that is equivalent to a 1 in 5 billion chance of controlling the entire universe. If a singleton scenario does not occur, you will have a 1/5,000,000,000th share of the resources of the solar system, and the rest of the universe will be taken over by beings like the ones described in Robin's The Rapacious Hardscrapple Frontier. There are two projects that you can work on. Project A increases the probability of a singleton scenario to 1.001%. Project B increases the wealth you will have in the non-singleton scenario by a factor of a million (so you'll have a 1/5,000th share of the solar system). The decision you have to make is which project to work on. (The numbers I picked are meant to be stacked in favor of project B.)

Unfortunately, you're not sure how much utility to assign to these scenarios. Let's say that you think there is a 99% probability that your utility (U1) scales logarithmically with the amount of negentropy you will have control over, and 1% probability that your utility (U2) scales as the square root of negentropy. (I assume that you're an ethical egoist and do not care much about what other people do with their resources. And these numbers are again deliberately stacked in favor of project B, since the better your utility function scales, the more attractive project A is.)

Let's compute the expected U1 and U2 of Project A and Project B. Let NU = 10^120 be the negentropy (in bits) of the universe, and NS = 10^77 be the negentropy of the solar system, with log denoting the base-10 logarithm; then:

  • EU1(status quo) = .01 * log(NU)/5e9 + .99 * log(NS/5e9)

  • EU1(A) = .01001 * log(NU)/5e9 + .98999 * log(NS/5e9) ≈ 66.6

  • EU1(B) = .01 * log(NU)/5e9 + .99 * log(NS/5e3) ≈ 72.6
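
As a sanity check on the arithmetic, here is a minimal Python sketch of the EU1 figures. The function name and parameterization are just my own illustration; the only substantive assumption is the base-10 logarithm noted above, which is what the ≈ 66.6 and ≈ 72.6 values imply.

    from math import log10

    NU = 1e120  # negentropy (in bits) of the universe
    NS = 1e77   # negentropy (in bits) of the solar system

    def EU1(p_singleton, share):
        """Expected log10-utility: a 1-in-5-billion shot at the universe if a
        singleton occurs, otherwise the given share of the solar system."""
        return (p_singleton * log10(NU) / 5e9
                + (1 - p_singleton) * log10(NS * share))

    print(EU1(0.01, 1 / 5e9))     # status quo ≈ 66.6
    print(EU1(0.01001, 1 / 5e9))  # project A  ≈ 66.6 (A barely changes EU1)
    print(EU1(0.01, 1 / 5e3))     # project B  ≈ 72.6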

EU2 is computed similarly, except with log replaced by sqrt:

  • EU2(A) = .01001 * sqrt(NU)/5e9 + .98999 * sqrt(NS/5e9) ≈ 2.002e48

  • EU2(B) = .01 * sqrt(NU)/5e9 + .99 * sqrt(NS/5e3) ≈ 2.000e48
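
The same sketch with sqrt in place of log10 reproduces the EU2 figures (again, purely an illustrative check):

    from math import sqrt

    NU = 1e120  # negentropy (in bits) of the universe
    NS = 1e77   # negentropy (in bits) of the solar system

    def EU2(p_singleton, share):
        """Expected sqrt-utility under the same two-scenario setup as EU1."""
        return (p_singleton * sqrt(NU) / 5e9
                + (1 - p_singleton) * sqrt(NS * share))

    print(EU2(0.01001, 1 / 5e9))  # project A ≈ 2.002e48
    print(EU2(0.01, 1 / 5e3))     # project B ≈ 2.000e48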

Under Robin's approach to value uncertainty, we would (I presume) combine these two utility functions into one linearly, weighting each by its probability, so we get EU(x) = 0.99 EU1(x) + 0.01 EU2(x):

  • EU(A) ≈ 0.99 * 66.6 + 0.01 * 2.002e48 ≈ 2.002e46

  • EU(B) ≈ 0.99 * 72.6 + 0.01 * 2.000e48 ≈ 2.000e46
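
A quick sketch of this linear combination, plugging in the rounded figures from the bullets above, makes it visible that the EU2 terms swamp the EU1 terms:

    # Combine the candidate utility functions linearly, weighted by credence.
    EU1 = {"A": 66.6, "B": 72.6}          # from the log calculation above
    EU2 = {"A": 2.002e48, "B": 2.000e48}  # from the sqrt calculation above

    EU = {x: 0.99 * EU1[x] + 0.01 * EU2[x] for x in ("A", "B")}
    print(EU)  # A ≈ 2.002e46, B ≈ 2.000e46, so project A comes out ahead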

This suggests that we should focus our attention and efforts on the singleton scenario. In fact, even if Project A had a much, much smaller probability of success, like 10^-13 instead of 0.00001, or if you had a much lower confidence that your utility scales as well as the square root of negentropy, it would still be the case that EU(A) > EU(B). (This is contrary to Robin's position that we pay too much attention to the singleton scenario, and I would be interested to know where his calculation differs from mine.)
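
This robustness is easy to check numerically. The sketch below (same base-10-log and square-root assumptions as before; delta is a name I'm introducing for the increase in singleton probability that Project A buys) finds that the break-even increase is only around 2e-14:

    from math import log10, sqrt

    NU, NS = 1e120, 1e77  # negentropy (in bits) of universe and solar system

    def advantage_of_A(delta):
        """EU(A) - EU(B), where delta is the increase in singleton probability
        bought by Project A (delta = 1e-5 in the setup above)."""
        p = 0.01
        eu1_a = (p + delta) * log10(NU) / 5e9 + (1 - p - delta) * log10(NS / 5e9)
        eu1_b = p * log10(NU) / 5e9 + (1 - p) * log10(NS / 5e3)
        eu2_a = (p + delta) * sqrt(NU) / 5e9 + (1 - p - delta) * sqrt(NS / 5e9)
        eu2_b = p * sqrt(NU) / 5e9 + (1 - p) * sqrt(NS / 5e3)
        return 0.99 * (eu1_a - eu1_b) + 0.01 * (eu2_a - eu2_b)

    print(advantage_of_A(1e-5) > 0)   # True: A wins with the original numbers
    print(advantage_of_A(1e-13) > 0)  # True: A still wins at 10^-13
    print(advantage_of_A(1e-14) > 0)  # False: break-even is around 2e-14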

What about Nick and Toby's approach? In their scheme, delegate 1, representing U1, would vote for project B, while delegate 2, representing U2, would vote for project A. Since delegate 1 has 99 votes to delegate 2's one vote, the obvious outcome is that we should work on project B. The details of the negotiation process don't seem to matter much, given the large advantage in bargaining power that delegate 1 has over delegate 2.
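
For contrast, here is a deliberately crude sketch of that outcome, ignoring the actual negotiation dynamics in Nick and Toby's proposal and simply letting credence-weighted delegates vote for their preferred project (the names and structure are purely illustrative):

    # Crude majority-vote caricature of the parliamentary approach.
    credences = {"U1": 0.99, "U2": 0.01}   # bargaining power of each delegate
    preferences = {"U1": "B",  # log utility ranks B higher (72.6 > 66.6)
                   "U2": "A"}  # sqrt utility ranks A higher (2.002e48 > 2.000e48)

    votes = {"A": 0.0, "B": 0.0}
    for delegate, weight in credences.items():
        votes[preferences[delegate]] += weight

    print(max(votes, key=votes.get))  # 'B': delegate 1's 99 votes carry the decision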

Each of these approaches to value uncertainty seems intuitively attractive on its own, but together they give conflicting advice on this important practical problem. Which is the right approach, or is there a better third choice? I think this is perhaps one of the most important open questions that an aspiring rationalist can work on.