Yet you might not know the question. “28” only certifies that the question makes a true statement.
You believe that “the computer contains a copy of Understand”, not “the computer contains a book with the following text: [text of Understand]”.
Exactly. You don’t know [text of Understand], yet you can reason about it, and use it in your designs. You can copy it elsewhere, and you’ll know that it’s the same thing somewhere else, all without having an explicit or any definition of the text, only diverse intuitions describing its various aspects and tools for performing operations on it. You can get an md5 sum of the text, for example, and make a decision depending on its value, and you can rely on the fact that this is an md5 sum of exactly the text of “Understand” and nothing else, even though you don’t know what the text of “Understand” is.
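The md5 point can be made concrete. A minimal sketch (the byte string here is a hypothetical placeholder standing in for the novella's text, which we deliberately never inspect):

```python
import hashlib

# Hypothetical stand-in for the text of "Understand" -- treated as an
# opaque blob of bytes that we operate on without ever reading it.
text = b"...full text of the novella, never inspected directly..."

def fingerprint(blob: bytes) -> str:
    """Return an md5 digest that identifies the blob without 'knowing' its contents."""
    return hashlib.md5(blob).hexdigest()

original = fingerprint(text)
copy = fingerprint(text)  # a copy made somewhere else

# We can decide "it's the same thing somewhere else" purely by comparing
# digests -- no explicit definition of the text is ever needed.
assert original == copy
```

The decision ("is this the same text?") depends only on the digest, which is exactly the kind of operation-on-an-unknown-referent the comment describes.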
But still, it remains: when we say that I know a fact, the statement of that fact is encoded in my brain. Not the referent, not an argument for the statement, just: a statement.
This sort of deep wisdom needs to be treated as the enemy (it strikes me often enough). It acts as a curiosity-stopper, papering over the difficulty of understanding things more accurately. (What is "just a statement"?)
In certain AI designs, this problem is trivial; in humans, it is not.
The complexities of the human version of this problem are not relevant to anything in this overarching discussion (that I am aware of).