The phrase that once came into my mind to describe this requirement is that a mind must be created already in motion. There is no argument so compelling that it will give dynamics to a static thing. There is no computer program so persuasive that you can run it on a rock.
A non-universal Turing machine can’t simulate a universal Turing machine. (If it could, it would be universal after all—a contradiction.) In other words, there are computers that can self-program and those that can’t, and no amount of programming can change the latter into the former.
I think this just begs the question:
Dynamic 1: When the belief pool contains “X is fuzzle”, send X to the action system.
Dynamic 2: When the belief pool contains “X is fuzzle”, and there is a dynamic saying “When the belief pool contains ‘X is fuzzle’, send X to the action system”, then send X to the action system.
Or, to put it another way:
Dynamic 2: When the belief pool contains “X is fuzzle”, run Dynamic 1.
Of course, then one needs Dynamic 3 to tell you to run Dynamic 2, ad infinitum—and we’re back to the original problem.
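The regress can be made concrete in a few lines of code. This is only an illustrative sketch (the names `beliefs`, `actions`, and `dynamic_1` are mine, not from the dialogue): adding the rule to the belief pool as *data* accomplishes nothing, because nothing in the pool executes itself; what actually moves things is a dynamic hardwired into the interpreter as *code*.

```python
# Hedged sketch of the fuzzle regress. All names here are hypothetical
# stand-ins for the dialogue's "belief pool" and "action system".

beliefs = {"X is fuzzle"}
actions = []  # the "action system", modeled as a list of things sent to it

# Adding the rule AS A BELIEF (a datum in the pool) is inert:
beliefs.add('when "X is fuzzle" is believed, send X to the action system')
# ...nothing happens, because no process consults or executes that string.

# A dynamic is code, not data: part of the machine itself.
def dynamic_1(beliefs, actions):
    """If the belief pool contains 'X is fuzzle', send X to the action system."""
    if "X is fuzzle" in beliefs:
        actions.append("X")

dynamic_1(beliefs, actions)
print(actions)  # only the hardwired dynamic produced any motion
```

The point of the sketch: no matter how many rule-describing strings we add to `beliefs`, the list `actions` stays empty until some code, sitting outside the pool, actually runs. Dynamic 2, Dynamic 3, and so on would just be more strings unless they too were hardwired.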
I think the real point of the dialogue is that you can’t use rules of inference to derive rules of inference—even if you add them as axioms! In some sense, then, rules of inference are even more fundamental than axioms: they’re the machines that you feed the axioms into. Then one naturally starts to ask questions about how you can “program” the machines by feeding in certain kinds of axioms, and what happens if you try to feed a program’s description to itself, various paradoxes of self-reference, etc. This is where the connection to Gödel and Turing comes in—and probably why Hofstadter included this fable.
The question “Is this object a blegg?” may stand in for different queries on different occasions. If it weren’t standing in for some query, you’d have no reason to care.
Basically, this is pragmatism in a nutshell—right?