Frigging awesome. (I haven’t read Permutation City, but have now bumped its to-read status from maybe to definitely.)
Of the characters not already identified, I’m afraid the only ones I recognized are Louis Wu and the Lensman.
I think I have the solution to the problem of how to weight the runtime of programs to produce coherent experiences. (I worked this out as a response to Hume’s problem of induction; at the time I was studying that problem, I hadn’t yet heard of the Solomonoff prior.)
My solution is this: in a nutshell, if an unknown program outputs a million 0 bits in a row, we want to believe the next bit is more likely to be 0 than 1. Can we do this even if the program is more than a megabit long?
Yes. Most long programs won’t output a million 0 bits in a row. Of those that do, most are not doing so because they contain such a string in a literal print statement; they are doing so because execution got hung up in a small loop. And in that case, the program will probably stay in that small loop, at least for the moment. So we don’t need a bias in favor of short programs; we can expect our experiences to be orderly even under a generic weighting.
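Here’s a toy simulation of that intuition (my own sketch, not part of the original argument): model “programs” as random deterministic finite-state machines with only a handful of states. Any such machine that emits a run of zeros much longer than its state count must, by pigeonhole, be cycling through states that all output 0, so its next bit is forced to be 0 too. The machine size, run length, and sample count below are arbitrary choices for illustration.

```python
import random

def random_fsm(num_states, rng):
    """A random 'program': each state has a fixed output bit and a
    fixed successor state (deterministic, no input)."""
    outputs = [rng.randint(0, 1) for _ in range(num_states)]
    nexts = [rng.randrange(num_states) for _ in range(num_states)]
    return outputs, nexts

def run(fsm, steps):
    """Run the machine from state 0 and collect its output bits."""
    outputs, nexts = fsm
    state, bits = 0, []
    for _ in range(steps):
        bits.append(outputs[state])
        state = nexts[state]
    return bits

rng = random.Random(0)          # fixed seed for reproducibility
NUM_STATES, RUN_LEN, SAMPLES = 5, 30, 50_000

survivors = 0      # machines whose first RUN_LEN bits are all 0
next_bit_zero = 0  # of those, how many also emit 0 next
for _ in range(SAMPLES):
    fsm = random_fsm(NUM_STATES, rng)
    bits = run(fsm, RUN_LEN + 1)
    if all(b == 0 for b in bits[:RUN_LEN]):
        survivors += 1
        if bits[RUN_LEN] == 0:
            next_bit_zero += 1

print(f"machines emitting {RUN_LEN} zeros in a row: {survivors}")
print(f"of those, next bit also 0: {next_bit_zero}")
```

With 5 states and a 30-bit run, every surviving machine has already entered its loop, so the next bit is 0 in 100% of cases; no bias toward short machines was needed, only conditioning on the observed output.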
I missed the Louis Wu reference? Darn.