AIXI includes only prediction models that are 100% accurate. I don’t think the human is capable of coming up with 100% accurate predictions.
Thought: The human can’t make predictions at all about the black box, but he can use it to predict the outcomes of various computable processes. AIXI can already predict the outcomes of computable processes, and doesn’t need the black box.
I know this post is long, long dead but:
Isn’t this a logical impossibility? To have knowledge of a program is to contain it in your source code, so A is contained in B, and B is contained in A; each would have to be strictly larger than the other...
Alternatively, I’m considering all the strategies I could use, based on looking at my opponent’s strategy, and one of them is “Cooperate only if the opponent, when playing against himself, would defect.”
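That strategy can be made concrete with a minimal sketch in Python, assuming (purely for illustration; all names here are hypothetical) that strategies are functions which take the opponent's strategy as input and return "C" or "D":

```python
def anti_self_defector(opponent):
    """Cooperate only if the opponent, when playing against himself,
    would defect; otherwise defect."""
    # Simulate the opponent playing against a copy of itself.
    if opponent(opponent) == "D":
        return "C"  # opponent defects against itself, so cooperate
    return "D"      # opponent cooperates with itself, so defect

# Hypothetical example opponents:
defect_bot = lambda opp: "D"     # always defects
cooperate_bot = lambda opp: "C"  # always cooperates
```

Note that if this strategy is pitted against a copy of itself, the call `opponent(opponent)` recurses without terminating, which is exactly the self-reference problem raised above: simulating an opponent who simulates you never bottoms out.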
“Common knowledge of each other’s rationality” doesn’t seem to help. Knowing I use TDT doesn’t give someone the ability to run the same computation I do, and so engage with my TDT reasoning. They would have to actually look into my brain, which means they need a bigger brain, which means I can’t look into their brain. If I meet one of your perfectly rational agents who cooperates in the true Prisoner’s Dilemma, I’m going to defect. And win. Rationalists should win.