Aren’t there programs that run fast and also return a number that grows much faster than |p|? Like up-arrow notation. Why don’t these grow faster than your speed prior penalizes them?
I think the upper bound here is set by a program “walking” along the tape as far as possible while setting each cell to 1 and then setting a last bit before halting (thus creating the binary number 1…1 consisting of n ones, where n ≤ BB(|p|)[1]). If we interpret that number as a utility, the utility is exponential in the number of steps taken, which is why we need to penalize by 2^(−steps(p)) instead of just 1/steps(p)[2]. If you want to write 3↑↑↑3 on the tape, you have to take at least log₂(3↑↑↑3) steps on a binary tape (and logₙ(3↑↑↑3) steps on an n-ary tape).
Technically the upper bound is Σ(|p|), the score function.
Thanks to GPT-5 for this point.
Makes sense, but in that case, why penalize by time? Why not just directly penalize by utility? Like the leverage prior.
Also, why not allow floating-point representations of utility to be output, rather than just binary integers?
Huh. I find the post confusingly presented, but if I understand correctly, 15 logical inductor points to Yudkowsky₂₀₁₃—I think I invented the same concept from second principles.
Let me summarize to check my understanding: my speed prior on both the hypotheses and the utility functions is trying to emulate discounting utility directly (because in the case of binary tapes and integer utilities, penalizing both exponentially in runtime gives exactly an upper bound on the utility), and a cleaner way is to set the prior to 2^(−|p|) ⋅ 1/U(eval(p)). That avoids the “how do we encode numbers” question that naturally raises itself.
Does that sound right?
(The fact that I reinvented this looks like a good thing, since that indicates it’s a natural way out of the dilemma.)
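Numerically, the claimed equivalence looks like this (a sketch under the assumptions above: binary tape, integer utilities, and steps(p) ≥ log₂ U(eval(p)); the program triples below are made-up examples, not outputs of any real machine):

```python
# Hypothetical programs: (length |p|, steps taken, utility written on the tape).
# The only constraint imposed is the binary-tape bound: utility <= 2**steps.
programs = [
    (5, 10, 1000),          # 1000 <= 2**10
    (8, 30, 2**30 - 1),     # the "walking" program from above
    (12, 50, 123456789),    # 123456789 <= 2**50
]

for length, steps, utility in programs:
    # Speed prior: penalize length and runtime exponentially.
    speed_mass = 2.0**(-length) * 2.0**(-steps) * utility
    # Direct utility discounting: the 1/U factor exactly cancels U.
    direct_mass = 2.0**(-length) * (1.0 / utility) * utility
    # Since utility <= 2**steps, the speed-penalized mass is bounded
    # by 2**(-length), which is exactly what the direct prior gives:
    assert speed_mass <= direct_mass
    print(length, speed_mass, direct_mass)
```

So the 2^(−steps(p)) penalty upper-bounds the utility's contribution by 2^(−|p|), while the 1/U(eval(p)) prior cancels it exactly, which is the sense in which the speed prior emulates direct utility discounting.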
Can’t give a confident yes because I’m pretty confused about this topic, and I’m pretty unhappy currently with the way the leverage prior mixes up action and epistemics. The issue about discounting theories of physics if they imply high leverage seems really bad? I don’t understand whether the UDASSA thing fixes this. But yes.
I’m not sure how natural the encoding question is, there’s probably an AIT answer to this kind of question that I don’t know.