There exists an irrational number which is 100 minus delta, where delta is infinitesimally small. In my celestial language we call it “Bob”. I choose Bob. Also, I submit that the person who recognizes that the increase in utility between a 9 in the googolplexth decimal place and a 9 in the (googolplex+1)th decimal place is not worth the time it takes to consider, and who therefore goes off to spend his utility on blackjack and hookers, displays greater rationality than the person who does not.
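For concreteness (my own back-of-the-envelope, assuming utility simply equals the number named): naming $99.\underbrace{9\ldots9}_{n}$ yields utility $100 - 10^{-n}$, so the gain from one more nine is $(100 - 10^{-(n+1)}) - (100 - 10^{-n}) = 9 \times 10^{-(n+1)}$. At $n = 10^{100}$ that is $9 \times 10^{-(10^{100}+1)}$, which is not worth any positive amount of deliberation time.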
Seriously, though, isn’t this more of an infinity paradox than an indictment of perfect rationality? There are areas where the ability to calculate mathematically breaks down, e.g. naked singularities, the Uncertainty Principle, and infinity itself. Isn’t that the real issue here: that we can’t be perfectly rational where we can’t calculate precisely?
Maybe it’s just the particular links I have been following (acausal trade and blackmail, AI boxes you, the Magnum Innominandum), but I keep coming across the idea that the self should care about the well-being (it always seems to come back to torture) of one or of a googolplex of simulated selves. I can’t find a single argument or proof of why this should be so. I accept that perfectly simulated sentient beings can be seen as morally equal in value to meat sentient beings (or, if we accept Bostrom’s reasoning, that beings in a simulation other than our own can be seen as morally equal to us). But why value the simulated self over the simulated other? I accept that I can care in a blackmail situation where I might unknowingly be one of the simulations (à la Dr. Evil, or where the AI boxes me), but that’s not the same as inherently caring about (or having nightmares about) what may happen to a simulated version of me in the past, present, or future.
Any thoughts on why thou shalt love thy simulation as thyself?