That is quite a lot of knowledge this Martha is supposed to have. If a human, or whatever species this hypothetical being Martha is, had so much knowledge about its own inner workings, would it really still be surprised? That Martha feels like she learned something new is by no means a given. We postulate that it would be so, based on the fact that we would think we had learned something new if we were in the same situation. But if Martha really knows all this on a quantitative level, who are we to assume that she would still feel like she learned something?
That assumption is based on something we all have in common (our inability to understand ourselves in detail), but that would not be shared by Martha.
There is no way for us to know whether such an intelligent and introspective being would actually learn something in this situation. This makes the question pointless if we assume that Martha is omniscient regarding her own psyche: the entire line of reasoning would depend on something which we cannot actually know.
For this reason I was working under the assumption that Martha was merely extremely smart compared to human scientists, but not able to analyze herself in ways whose implications we don't even begin to understand.
There’s a limit to how much knowledge one can have about one’s own inner workings since that knowledge would itself have inner workings. I think you can argue that nothing below the limit is sufficient to answer that problem.
You might be able to end this recursion problem (“I know about knowing things, but I don’t yet know about knowing about knowing about things, and when I do know, then I’ll have to go and learn to know about knowing about knowing about knowing...”) by eventually reaching a point where each level is so similar to its predecessor that it and all its successors can be described with a quine.
The problem is that unpacking the quine requires computation, and you can experience the feeling of knowing from the results of a computation, e.g. by doing math.