And to show this isn’t JUST a quirk of human mind design, one can envision Omega setting up an isomorphic problem for any kind of AI.
An AI can presumably self-modify. For a sufficient reward from Omega, it is worth degrading the accuracy of one’s beliefs, especially if the reward will immediately allow one to make up for the degradation by acquiring new information/engaging in additional processing.
(A hypothetical: Omega offers me 1000 doses of modafinil if I will lie on one PredictionBook.com entry and state a probability 10 percentage points below what I truly believe. I take the deal and chuckle every few minutes the first night, as I register a few hundred predictions to make up for the falsified one.)
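To put a rough number on that dilution intuition, here is a minimal sketch in Python. The Brier scoring rule and all figures are my own illustrative assumptions, not part of the hypothetical: the extra loss from one falsified entry is a fixed quantity, so its effect on the overall track record shrinks as honest predictions pile up.

```python
# Minimal sketch (assumptions: Brier scoring, illustrative numbers only).
# One falsified prediction shifts a stated probability by 0.10; the damage
# to the overall track record is diluted as honest predictions accumulate.

def brier(stated_p: float, outcome: int) -> float:
    """Brier score for a single prediction: (p - outcome)^2, lower is better."""
    return (stated_p - outcome) ** 2

true_p = 0.80          # what I actually believe
falsified_p = 0.70     # the 10-percentage-point lie Omega demands
outcome = 1            # suppose the event occurs

penalty = brier(falsified_p, outcome) - brier(true_p, outcome)
print(f"Extra Brier loss from the one lie: {penalty:.3f}")

# Dilution: mean extra loss across the whole track record after
# registering n additional honest predictions alongside the lie.
for n in (0, 10, 100, 1000):
    print(f"after {n:4d} honest entries: {penalty / (n + 1):.5f}")
```

On these made-up numbers, the one-time penalty of 0.05 falls below 0.0001 per entry once a thousand honest predictions are registered.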
This entirely misses the point. Yes, you could self-modify, but it's a self-modification away from rationality, and that gives rise to all sorts of trouble, as has been elaborated many times in the Sequences. For example: http://lesswrong.com/lw/je/doublethink_choosing_to_be_biased/
Also, LYING about what you believe has nothing to do with this. Omega can read your mind.
I was trying to apply the principle of charity and interpret your post as anything but begging the question: ‘assume rational agents are penalized. How do they do better than irrational agents explicitly favored by the rules/Omega?’
Question begging is boring, and if that's really what you were asking - 'assume rational agents lose. How do they not lose?' - then this thread deserves only downvotes.
And Eliezer was talking about humans, not the finer points of AI design in a hugely arbitrary setup. It may be a bad idea for LWers to choose to be biased, but a perfectly good idea for AIXI stuck in a particularly annoying computable universe.
Also, LYING about what you believe has nothing to do with this. Omega can read your mind.
Since I'm not an AI with direct access to my beliefs as stored on a substrate, I was using an analogy that comes as close as I can get.
Sorry, I was hoping that there was some kind of difference between "penalize this specific belief in this specific way" and "penalize rationality as such in general", some kind of trick to work around the problem that I hadn't noticed and which resolved the dilemma.
And your analogy didn’t work for me, is all I’m saying.