“Give me five dollars, and I will use my outside-the-Matrix powers to make your wildest dreams come true, including living for 3^^^3 years of eudaimonic existence and, yes, even telling you about the true substrate of existence. Hey, I’ll top it off and let you out of the box, if only you decide to give me five of your simulated dollars.”
For your kind of argument to work, it seems there must be nothing the mugger could possibly promise or threaten that, if it came true, you would rate as making a difference of 3^^^3 utils (where declining the offer and continuing your normal life is 0 utils, and giving five dollars to a jokester is −5 utils). It seems like a minor variation on the arguments in Eliezer’s original post to say that if your utility function does assign utilities differing by 3^^^3 to some scenarios, then it is extremely unlikely that the probabilities of each of these coming true will balance out just so that the expected utility of paying the mugger is always less than zero, no matter what the mugger promises or threatens. If your utility function doesn’t assign utilities that great or that small to any outcome, then you have a bounded utility function, which is one of the standard answers to the Mugging.
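To see the shape of that argument in numbers, here is a toy Python calculation. The payoff, probability, and bound are stand-in magnitudes of my own choosing, since 3^^^3 utils has no representable value:

```python
from fractions import Fraction

# Toy sketch: paying the mugger under an unbounded vs. a bounded utility
# function. All magnitudes are stand-ins; 3^^^3 is not representable.
PAYOFF = 10**1000          # stand-in for the 3^^^3-util prize
COST = -5                  # utils lost by paying a jokester
p = Fraction(1, 10**100)   # the mugger's (tiny) probability of honesty

# Unbounded (linear) utility: the huge payoff swamps any sane probability.
ev_unbounded = p * PAYOFF + (1 - p) * COST
print(ev_unbounded > 0)    # True: pay the mugger, absurd as that seems

# Bounded utility: cap the payoff at some bound B. Paying only comes out
# positive if p > 5/B, so a small enough p rescues the intuitive answer.
B = 10**6
ev_bounded = p * min(PAYOFF, B) + (1 - p) * COST
print(ev_bounded > 0)      # False: keep your five dollars
```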
My own current position is that perhaps I really do have a bounded utility function. If it were only the Mugging, I would perhaps still hold out more hope for a satisfactory solution that doesn’t involve bounded utility. But there’s also the unpalatability of having to prefer the following gamble to the certainty of a mere 3^^^3 years of positive post-singularity humanity: a (googolplex-1)/googolplex chance of everyone being tortured for a thousand years, with all life in the multiverse ending after that, plus a 1/googolplex chance of 4^^^^4 years of positive post-singularity humanity. And 3^^^3 years is far more than enough time to cycle through every possible configuration of a present-day human brain. Yes, having more space to expand into post-singularity is always better than less, but is it really that much better?
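Here is the same kind of toy calculation for that comparison, again with stand-in magnitudes of my own choosing (including a made-up negative constant for the torture outcome), since neither 3^^^3 nor 4^^^^4 is representable:

```python
from fractions import Fraction

# Toy sketch: under unbounded linear utility, the near-certain-horror
# gamble beats the certainty of a huge good lifetime.
G = 10**100                 # stand-in for googolplex
U_3UP3 = 10**1000           # stand-in utility of 3^^^3 good years
U_4UP4 = 10**10000          # stand-in utility of 4^^^^4 good years
U_TORTURE = -(10**10)       # torture for a thousand years, then extinction

# Gamble: (G-1)/G chance of the torture outcome, 1/G chance of 4^^^^4 years.
ev_gamble = Fraction(G - 1, G) * U_TORTURE + Fraction(1, G) * U_4UP4
# Certainty: 3^^^3 good years for sure.
ev_certain = U_3UP3

print(ev_gamble > ev_certain)  # True: the linear agent takes the gamble
```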
(ObNote: In order not to make this seem one-sided, I should also mention the standard counter to that, especially since it was a real eye-opener the first time I read Eliezer explain it. Namely, with G := googolplex, I would then also have to accept that I’d prefer a 1/G chance of living (G + 1) years + a (G-1)/G chance of living 3^^^3 years to a 1/G chance of living G years + a (G-1)/G chance of living 4^^^^4 years. In other words, I’ll prefer a near-certainty of an unimaginably smaller existence, if I get for it a minuscule increase of existence in a scenario that only has a minuscule chance of happening in the first place. But I’ve started to think that perhaps the unimaginably large difference between these lifetimes really might be that unimportant, given that I can cycle through all of current-brain-size human mindspace many times in a mere 3^^^3 years, and given the also-unpalatable conclusions from the unbounded utility function.)
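And here is a sketch of how a bounded utility function delivers that counterintuitive verdict, using the same stand-in magnitudes as above and a saturating U(x) = x/(x + G) that I picked for illustration (nothing hinges on this particular choice):

```python
from fractions import Fraction

# Sketch: a bounded agent prefers one extra year at the G scale over the
# unimaginable gap between 3^^^3 and 4^^^^4 years, because the utility
# function has already saturated long before either lifetime.
G = 10**100                  # stand-in for googolplex
BIG = 10**1000               # stand-in for 3^^^3 years
BIGGER = 10**10000           # stand-in for 4^^^^4 years

def U(years):
    # Bounded utility: strictly increasing, saturating toward 1 past scale G.
    return Fraction(years, years + G)

# Lottery 1: 1/G chance of (G + 1) years, (G-1)/G chance of 3^^^3 years.
ev_1 = Fraction(1, G) * U(G + 1) + Fraction(G - 1, G) * U(BIG)
# Lottery 2: 1/G chance of G years, (G-1)/G chance of 4^^^^4 years.
ev_2 = Fraction(1, G) * U(G) + Fraction(G - 1, G) * U(BIGGER)

print(ev_1 > ev_2)  # True: the bounded agent takes the extra year
```

Any bound that is effectively reached well before 3^^^3 years gives the same ordering: past the bound’s scale, the difference between the two enormous lifetimes is nearly invisible, while one extra year at the G scale still registers.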