Mitchell, I acknowledge the defensibility of the position that there are tiers of incommensurable utilities. But to me it seems that the dust speck is a very, very small amount of badness, yet badness nonetheless. And that by the time it’s multiplied to ~3^^^3 lifetimes of blinking, the badness should become incomprehensibly huge just like 3^^^3 is an incomprehensibly huge number.
I have two problems with assigning a hyperreal infinitesimal badness to the speck: (a) it doesn’t seem like a good description of the psychology, and (b) it leads to total loss of that preference in smarter minds.
(B) If the value I assign to the momentary irritation of a dust speck is less than 1/3^^^3 the value of 50 years’ torture, then I will never even bother to blink away the dust speck, because I could spend the thought, or the muscular movement of my eyelid, on something with a better than 1/3^^^3 chance of saving someone from torture.
(A) People often also think that money, a mundane value, is incommensurable with human life, a sacred value, even though they very definitely don’t attach infinitesimal value to money.
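To make point (b) concrete, here is a minimal expected-utility sketch in Python. The numbers are illustrative stand-ins: 3^^^3 is far too large to actually compute, so 10**100 substitutes for it.

```python
from fractions import Fraction

N = 10**100                    # stand-in for 3^^^3, which is far too large to compute
U_torture = Fraction(1)        # normalize the disutility of 50 years' torture to 1
U_speck = U_torture / N / 2    # stipulated: less than 1/3^^^3 of the torture
p_rescue = Fraction(2, N)      # any act with a better-than-1/3^^^3 chance of preventing torture

# The expected disutility averted by the rescue attempt always beats the
# speck, so the blink never gets prioritized:
assert p_rescue * U_torture > U_speck
```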
I think that what we’re dealing with here is more like the irrationality of trying to impose and rationalize comfortable moral absolutes in defiance of expected utility than anyone actually possessing a consistent utility function over hyperreal infinitesimal numbers.
The notion of sacred values seems to lead to irrationality in a lot of cases, some of it gross irrationality like scope neglect over human lives and “Can’t Say No” spending.
I’m not sure why surreal/hyperreal numbers would result in, essentially, monofocus.
Consider this scale on the surreals:
Omega^2: Utility of universal immortality; disutility of an existential risk. Omega utility for each of potentially Omega people.
Omega: Utility of a human life.
1: One traditional utilon.
Epsilon: Dust speck in your eye.
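A minimal sketch of how such a tiered scale behaves, assuming tiers compare lexicographically; plain tuples stand in for surreal coefficients, since Python’s built-in tuple comparison is already lexicographic:

```python
# A utility is a tuple of coefficients (omega^2, omega, units, epsilon).
# Tuple comparison checks the highest tier first, so no finite number of
# epsilons ever outweighs a single unit, and so on up the scale.

def utility(omega2=0, omega=0, units=0, epsilon=0):
    return (omega2, omega, units, epsilon)

assert utility(epsilon=10**100) < utility(units=1)   # specks never reach a utilon
assert utility(units=10**100) < utility(omega=1)     # utilons never reach a life
```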
Let’s say you’re a perfectly rational human (*cough cough*). You naturally start on the Omega^2 scale, with a certain finite amount of resources. Clearly, an Omega of human lives is worth more than your own, so you do not, repeat, do not promptly donate all your resources to MIRI.
At least, not until you first calculate the approximate probability that your independent existence will make it more likely that someone somewhere will finally defeat death. Even if you lack the intelligence to do it yourself, or the social skills to keep someone else stable while they attack it, there’s still the fact that you can give more to MIRI, over the long run, if you live on just enough to keep yourself psychologically and physiologically sound and donate the rest.
This is, essentially, the “sanity” term. Most of the calculation is done at this step, but because your life, across your lifespan, has some chance of solving death, you are not morally obligated to have yourself processed into Soylent Green.
This step interrupts for one of three reasons. One, you have reached a point where spending further resources, either on yourself or on some existential-risk organization, does not predictably affect an existential risk. Two, all existential risks are dealt with, and death itself has died. (Yay!) Three, part of ensuring your own psychological soundness requires it—really, this just represents the fact that sometimes a dollar (approx. one utilon) or a speck (epsilon utilons) can result in your death or significant misery; such concerns should still be resolved in order of decreasing utility.
At this point, we break to the Omega step, which works much the same way, balancing charity donations against your own life and QoL. Situations where spending money can save lives—say, a hospital or a charity—should be evaluated at this step.
Then we break to the unitary step, which is essentially all QoL, for yourself or others.
Hypothetically, we might then break to the epsilon step—in practice, since even in a post-scarcity society you will never finish optimizing your unitaries, this step is only evaluated when it or something in it is promoted by causal dependence to a higher step.
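Here is a rough sketch of the cascading evaluation described above (illustrative only; the diminishing-returns functions are hypothetical): spend at the highest tier while a marginal unit still predictably helps, then fall through to the next.

```python
def allocate(resources, tiers):
    """tiers: list of (name, marginal_value) pairs, highest tier first.
    Spends one unit at a time at the highest tier whose next unit still
    has positive marginal value, falling through to lower tiers otherwise."""
    spent = {name: 0 for name, _ in tiers}
    while resources > 0:
        for name, marginal_value in tiers:
            if marginal_value(spent[name]) > 0:   # still predictably helps?
                spent[name] += 1
                resources -= 1
                break
        else:
            break  # no tier left worth spending on
    return spent

# Hypothetical diminishing returns at each step; the epsilon step never
# even makes the list, per the point above.
tiers = [
    ("x-risk", lambda s: 10 - s),   # the Omega^2 step
    ("lives",  lambda s: 5 - s),    # the Omega step
    ("qol",    lambda s: 2 - s),    # the unitary step
]
print(allocate(20, tiers))  # {'x-risk': 10, 'lives': 5, 'qol': 2}
```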
So, returning to the original problem: barring all other considerations, 3^^^3 * epsilon is still an epsilon-scale quantity, while 50 years of torture is probably something like 3/4 Omega. With two tiers of difference, the result is obvious, and it agrees with intuition.
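In the tuple notation from the sketch above (10**100 again standing in for 3^^^3):

```python
specks  = (0, 0,    0, 10**100)   # (omega^2, omega, units, epsilon) of disutility
torture = (0, 0.75, 0, 0)         # "3/4 Omega"
assert specks < torture           # two tiers down, the multiplier never matters
```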
I’m going to conclude with something Hermione says in MoR that I think applies here.
“But the thing that people forget sometimes, is that even though appearances can be misleading, they’re usually not.”