Humans are (can be represented by) Turing machines. All halting Turing machines are incorporated in AIXI. Therefore, anything that humans can do to predict something more effectively than a “mere machine” is already incorporated into AIXI.
More generally, anything you can represent symbolically can be represented using binary strings. That’s how the string you wrote got to me in the first place: you converted the Turing-machine operations in your head into a string of symbols, a computer turned that into a string of digits, my computer turned it back into symbols, and my brain used computable algorithms to make sense of them. What makes you think that any of this is impossible for AIXI?
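This point is usually made formal via Solomonoff's dominance inequality. Sketching (with $U$ the reference universal machine): if the human's predictive method is a computable distribution $\mu$ implemented by some program of length $K(\mu)$, then the universal mixture $M$ that AIXI predicts with satisfies

```latex
M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-|p|} \;\ge\; 2^{-K(\mu)}\,\mu(x),
```

so $M$'s cumulative log-loss exceeds $\mu$'s by at most $K(\mu)$ bits, a one-time constant independent of the data. Whatever predictive edge the human has, the mixture inherits it up to that constant.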
Am I going crazy, or did you just basically repeat what Eliezer, Cyan, and Nesov said without addressing my point?
Do you guys think that you understand my argument and that it’s wrong, or that it’s too confusing and I need to formulate it better, or what? Everyone just seems to be ignoring it and repeating the standard party line....
ETA: Now reading the second part of your comment, which was added after my response.
ETA2: Clearly I underestimated the inferential distance here, but I thought at least Eliezer and Nesov would get it, since they appear to understand the other part of my argument about the universal prior being wrong for decision making, and this seems to be a short step. I’ll try to figure out how to explain it better.
If 4 people all think you’re wrong for the same reason, either you’re wrong or you’re not explaining yourself. You seem to disbelieve the first, so try harder with the explaining.
Didn’t stop 23+ people from voting up his article … (21 now; I and someone else voted it down)
Well, people expect him to be making good points, even when they don’t understand him (i.e., I don’t understand UDT fully, but it seems to be important). Also, he’s advocating further thinking, which is popular around here.
And I really, really wish people would stop doing that, whether it’s for Wei_Dai or anyone else you deem to be smart.
Folks, you may think you’re doing us all a favor by voting someone up because they’re smart, but that policy has the effect of creating an information cascade, because it makes an inference bounce back, accumulating arbitrarily high support irrespective of its relationship to reality.
The content of a post or comment should screen off any other information about its value [1], including who made it.
[1] except in obvious cases like when someone is confirming that something is true about that person specifically
Seconded. Please only vote up posts you both understand and approve of.
I agree, but would like to point out that I don’t see any evidence that people aren’t already doing this. As far as I can tell, Lucas was only speculating that people voted up my post based on the author. Several other of my recent posts have fairly low scores, for example. (All of them advocated further thinking as well, so I don’t think that’s it either.)
The fact that AIXI can predict that a human would predict certain things, does not mean that AIXI can agree with those predictions.
In the limit, even if that one human is the only thing in all of the hypotheses that AIXI has under consideration, AIXI will be predicting precisely as that human does.
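The “in the limit” claim is just Bayesian posterior concentration, which a toy simulation can illustrate. This is a minimal sketch, not AIXI itself (AIXI mixes over all programs, which is incomputable); the two-hypothesis class and the 0.9 bias are hypothetical stand-ins.

```python
import random

def human(history):
    """Stand-in for the 'human' predictor: P(next bit = 1).
    Hypothetically, the human has figured out the source is biased."""
    return 0.9

def coin(history):
    """A rival hypothesis: fair coin."""
    return 0.5

hypotheses = [human, coin]
weights = [0.5, 0.5]  # prior weights over the hypothesis class

random.seed(0)
history = []
for t in range(200):
    # The environment really does emit 1 with probability 0.9,
    # i.e. the 'human' hypothesis is the correct one.
    bit = 1 if random.random() < 0.9 else 0
    # Bayesian update: multiply each weight by the probability
    # that hypothesis assigned to the observed bit, then renormalize.
    likelihoods = [h(history) if bit == 1 else 1 - h(history)
                   for h in hypotheses]
    weights = [w * l for w, l in zip(weights, likelihoods)]
    total = sum(weights)
    weights = [w / total for w in weights]
    history.append(bit)

# Posterior weight on the correct predictor after 200 bits.
print(weights[0])
```

As the posterior concentrates on the surviving hypothesis, the mixture's predictions converge to that hypothesis's predictions, which is the sense in which AIXI ends up predicting precisely as the human does, rather than merely predicting what the human would say.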