The following is a meaningless stream of consciousness.
This issue has often sounded to me a bit like the problem of building recursive inductive types/propositions in type theory and logic. You can’t construct so much as a tree with child nodes without some notation for “this structure contains copies of itself, or possibly even links back to itself as a cyclic structure.” It continually sounds as if AIXI has no symbol in its hypothesis space that means “me”, and even if it did, it would consider hypotheses about “me” more complex than hypotheses without “me”.
In type theory, we solve this with a \mu type constructor, so that lists, for instance, can be written as follows:
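List A = \mu X. 1 + (A \times X)

Here the 1 is the empty list, A \times X is an element paired with the rest of the list, and X is the self-reference that \mu binds.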
Common sense tells us that the phenomenon of lists is thus described much more simply by one recursive type than by countably infinitely many separate similar types. If we include the “recursive concept operator” \mu in the “language” of hypotheses, my intuitions (i.e., blind speculation) say Kolmogorov complexity would correctly weight the hypotheses.
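A minimal sketch of the same point in Haskell (the names Mu, ListF, and List here are my own illustrative choices):

    newtype Mu f = In (f (Mu f))     -- the \mu operator: Mu f contains an f of itself

    data ListF a x = Nil | Cons a x  -- one non-recursive "layer" of a list

    type List a = Mu (ListF a)       -- List a = \mu X. 1 + (A \times X)

    -- Without Mu, we would need countably many similar types, one per depth:
    --   data List0 a = Nil0
    --   data List1 a = Nil1 | Cons1 a (List0 a)
    --   data List2 a = Nil2 | Cons2 a (List1 a)
    --   ...

    -- The list [1, 2], built by wrapping each layer in In:
    example :: List Int
    example = In (Cons 1 (In (Cons 2 (In Nil))))

One Mu definition replaces the whole infinite family, which is exactly the compression the \mu operator would buy in the hypothesis language.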
Are there concept- or formal-language-learning algorithms capable of inducing this kind of recursive concept description? I would expect such an algorithm to be able to learn some hypothesis along the lines of “the output tape corresponds with this physical process by way of the recursive concept me, which can be unfolded into its bridge hypotheses and physical description, or folded into the concept of my calculations”.
Once again, everything above was a meaningless stream of consciousness, but I’m told one of the ways to make progress is to try to write down your best guesses and look at what’s wrong with them.
The diagonal lemma and the existence of quines already show that you don’t need specific support for self-reference in your language, because any sufficiently powerful language can formulate self-referential statements. In fact, UDT uses a quined description of itself, like in your proposal.
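For concreteness, here is one classic quine, written here in Haskell (any number of variants would do); the single line prints its own source text exactly:

    main = putStrLn (s ++ show s) where s = "main = putStrLn (s ++ show s) where s = "

The string s plays the role of the diagonal construction: the program carries a quoted copy of part of itself and reconstructs the rest, with no special self-reference primitive in the language.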
> The diagonal lemma and the existence of quines already show that you don’t need specific support for self-reference in your language, because any sufficiently powerful language can formulate self-referential statements.
In formal-language terms, it would be more accurate to say that any sufficiently powerful (i.e., recursively enumerable, Turing-recognizable, etc.) language must contain some means of producing direct self-references. The existence of the \mu node in the syntax tree isn’t necessarily intuitive, but it is a solid fact of formal-language theory. Without it, you can only express pushdown automata, not Turing machines.
But self-referencing data structures within a single Turing machine tape are not formally equivalent to self-referencing Turing machines, nor to being able to learn how to detect and locate a self-reference in a universe being modelled as a computation.
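To make the first of those concrete: a self-referencing data structure on the tape is trivial to build, as in this Haskell sketch:

    -- A circular value: the list refers back to itself.
    ones :: [Int]
    ones = 1 : ones

But nothing about ones helps the program that computes it detect that it is itself embedded somewhere in the universe it is modelling, which is the naturalized-induction problem.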
> In fact, UDT uses a quined description of itself, like in your proposal.
I did see someone proposing a UDT attack on naturalized induction on this page.