You should bound your utility function (not just your probabilities) by how much information your brain can handle. Your utility function’s dynamic range should never outpace the dynamic range of your brain’s probabilities. Also, you shouldn’t claim to put googolplex utility on anything until you’re at least Ω(log(googolplex))[1] seconds old.
Utility functions come from your preferences over lotteries, and not every utility function corresponds to a reasonable preference over lotteries. You can claim “My utility function assigns a value of Chaitin’s constant to this outcome”, but that doesn’t mean you can build a finite agent that follows that utility function (it would be uncomputable). Similarly, you can claim “my agent follows a utility function that assigns to outcomes A, B, and C the values $0, $1, and $googolplex”, but you can’t build such a beast with real physics (you’re implicitly claiming your agent can distinguish between probabilities so fine that no computer with memory made from all the matter in the eventually observable universe could even represent them).
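To make that concrete, here’s a minimal back-of-the-envelope sketch (my illustration, not part of the original argument; the one-bit-per-atom assumption and the atom count are rough standard estimates) of why a $googolplex payoff forces the agent to act on probabilities too fine to physically store:

```python
import math

GOOGOL = 10 ** 100  # a googolplex is 10**GOOGOL

# Indifference point between a sure $1 and a lottery paying $googolplex
# with probability p: expected values match when p * 10**GOOGOL = 1,
# i.e. p = 10**(-GOOGOL). Just writing p down in binary takes about
# GOOGOL * log2(10) bits.
bits_to_write_p = GOOGOL * math.log2(10)  # ~3.3e100 bits

# Rough standard estimate: ~1e80 atoms in the observable universe, so
# even at one bit per atom you fall short by twenty orders of magnitude.
ATOMS_IN_OBSERVABLE_UNIVERSE = 1e80
print(bits_to_write_p / ATOMS_IN_OBSERVABLE_UNIVERSE)  # ~3.3e20
```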
And (I claim) almost any probability you talk about should be bounded below by Ω(2^(−number of bits you’ve ever seen)). That’s because (I claim) almost all your beliefs are quasi-empirical, even most of the a priori ones. For example, Descartes considered the proposition “The only thing I can be certain of is that I can’t be certain of anything” before quasi-empirically rejecting it in favor of “I think, therefore I am”. Descartes didn’t just know a priori that the proposition was false; he had to spend some time computing to gather some (mental) evidence. It’s easy to quickly get probabilities exponentially small by collecting evidence, but you shouldn’t get them more than exponentially small.
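Here’s a minimal sketch of that rule in code (my own gloss; the bits-per-second figure is a hypothetical placeholder, and real evidence accounting is much subtler). Working in log space avoids float underflow:

```python
def log2_probability_floor(bits_of_evidence_seen: float) -> float:
    """After n bits of evidence, your odds can have shifted by at most a
    factor of 2**n from any sane prior, so (up to the fudge constants)
    no probability should be reported below 2**(-n)."""
    return -bits_of_evidence_seen

def clamp_log2_probability(claimed_log2_p: float, bits_seen: float) -> float:
    """Refuse to quote a probability more extreme than your evidence licenses."""
    return max(claimed_log2_p, log2_probability_floor(bits_seen))

# Hypothetical numbers: ~1e7 bits/second of sensory input over ~1e9 seconds
# of life gives ~1e16 bits, so nothing below about 2**(-1e16).
print(clamp_log2_probability(-10**120, bits_seen=1e16))  # -1e16
```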
You know the joke about the ultrafinitist mathematician who says he doesn’t believe in the set of all integers? A skeptic asks “is 1 an integer?” and the ultrafinitist says “yes”. The skeptic asks “is 2 an integer?”, the ultrafinitist waits a bit, then says “yes”. The skeptic asks “is 100 an integer?”, the ultrafinitist waits a bit, waits a bit more, then says “yes”. This continues, with the ultrafinitist waiting more and more time before confirming the existence of bigger and bigger integers, so you can never catch him in a contradiction. I think you should do something like that for small probabilities.
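In code, the analogous move might look something like this (a toy sketch of my own; the “waiting” stands in for actually gathering or computing the evidence):

```python
def confirm_small_probability(log2_p: float, bits_gathered_so_far: float) -> str:
    """Ultrafinitist-style answer: confirm a probability of 2**log2_p only
    once you've gathered at least |log2_p| bits of evidence; otherwise keep
    'waiting' (i.e., keep collecting evidence) before committing to it."""
    bits_needed = abs(log2_p)
    if bits_gathered_so_far >= bits_needed:
        return "yes, I'll assign that probability"
    return f"ask me again after ~{bits_needed - bits_gathered_so_far:.3g} more bits"

print(confirm_small_probability(log2_p=-30, bits_gathered_so_far=1e16))       # confirms
print(confirm_small_probability(log2_p=-10**120, bits_gathered_so_far=1e16))  # waits
```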
Yep! With the addendum that I’m also limiting the utility function by the same sorts of bounds. Eliezer in Pascal’s Muggle (as I interpret him, though I’m putting words in his mouth) was willing to bound agents’ subjective probabilities, but was not willing to bound agents’ utility functions.
Why is seconds the relevant unit of measure here?
The real unit is “how many bits of evidence you have seen/computed in your life”. The number of seconds you’ve lived is just something proportional to that; the Big Omega notation fudges away the proportionality constant.
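Spelling out the arithmetic behind the Ω(log(googolplex)) figure (my own worked example; the bits-per-second rate is a hypothetical placeholder for whatever your brain’s actual evidence rate is):

```python
import math

def min_age_seconds(log10_utility: float, bits_per_second: float = 1e7) -> float:
    """Minimum age before claiming a utility of 10**log10_utility, if the
    probability floor is 2**(-bits seen) and you take in bits_per_second
    bits of evidence per second. This is the Omega(log(utility)) bound
    with the proportionality constant made explicit."""
    bits_needed = log10_utility * math.log2(10)
    return bits_needed / bits_per_second

# Googolplex utility is 10**(10**100), so log10(utility) = 10**100:
print(min_age_seconds(10**100))  # ~3.3e93 seconds; the universe is ~4e17 s old
```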
[1] Big Omega notation: “grows at least that fast, with fudge factor constants”.
Not sure I fully understand this comment, but I think it is similar to option 4 or 6?