So it seems an accurate GLUT (giant lookup table) is too big for us to be able to show it to the subject. Maybe this is one answer to the question.
Very likely; but I was only going to show him the bits predicting his response to his immediate situation + being shown those bits. We can save a bit of paper by not printing out his predicted reaction to chocolate elephants falling from the sky. :)
So maybe we can indeed connect to the halting problem?
It seems to me that your construction relies on the subject having access to the executable. If we don’t give him that access, he cannot use this method of attempted disproof, no matter how contrarian he is.
OK, but again this simply may not be possible even if you have an accurate GLUT. If you give him anything that lets him compute your true prediction in finite time, he can compute it and then do the opposite. Even with a complete and accurate GLUT, we can never supply him with a true prediction if the accurate GLUT contains no entries of the form “Subject is told he is predicted to do X” → subject does X.
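The diagonalization move here can be made concrete. A minimal sketch (the function name and the two actions are hypothetical, just for illustration): if the subject can read the prediction and act on it, a contrarian policy falsifies whatever prediction is shown, so no GLUT entry of the form “told he will do X” → “does X” can be accurate.

```python
def contrarian_subject(shown_prediction: str) -> str:
    """A subject who reads the prediction he is shown and does the opposite."""
    return "stay" if shown_prediction == "leave" else "leave"

# Whatever the GLUT predicts, showing that prediction to the subject makes it wrong:
for predicted in ("leave", "stay"):
    actual = contrarian_subject(predicted)
    assert actual != predicted  # the shown prediction is always falsified
```

This is the same structure as the classic halting-problem diagonal argument: the predictor's output is fed back into the thing being predicted, and the subject inverts it.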
Well, that’s precisely my point. But see prase’s comment below, with the very interesting point that every sufficiently-nice function f(x) has some x for which f(x)=x. The question is whether the human brain is sufficiently nice.
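For contrast with the contrarian case above: the fixed-point claim holds for, e.g., any continuous map of a closed interval into itself (Brouwer's fixed-point theorem in one dimension), and the contrarian subject escapes it precisely by being discontinuous. A small illustration, using cos as a stand-in for a "sufficiently nice" response function (iterating converges to the unique x with cos(x) = x, roughly 0.739):

```python
import math

# cos maps [0, 1] into itself and is continuous, so it has a fixed point.
# Repeated application converges to it (cos is a contraction near the fixed point).
x = 0.5
for _ in range(100):
    x = math.cos(x)

assert abs(math.cos(x) - x) < 1e-9  # x is (numerically) a fixed point
```

If the subject's response to being shown a prediction were a "nice" function like this, there would be some prediction that remains true even after being shown; the contrarian subject is exactly the case where the function is a discontinuous flip and no such fixed point exists.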