This sweeps some of the essential problems under the rug; if you formalize it a bit more, you’ll see them.
It’s not an artificial restriction, for instance, that a Solomonoff Induction oracle machine doesn’t include things like itself in its own hypothesis class, since the question of “whether a given oracle machine matches the observed data” is a question that sometimes cannot be answered by an oracle machine of equivalent power. (There are bounded versions of this obstacle as well.)
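The core of that obstacle is a diagonalization argument, which can be illustrated with a toy sketch (this is my illustration, not part of the original argument; the names `predictor` and `make_adversarial_world` are invented for the example). The "world" below is defined to consult the predictor and output the opposite bit, so no predictor of the same power can have a faithful model of this world in its hypothesis class:

```python
# Toy diagonalization sketch: a "world" that consults the predictor
# and outputs the opposite of whatever it predicts. A hypothesis class
# that contained this world would let the predictor predict it -- but
# by construction the predictor is wrong at every step.

def make_adversarial_world(predictor):
    """Return a world that always does the opposite of the predictor."""
    def world(history):
        return 1 - predictor(history)
    return world

def predictor(history):
    # Any fixed prediction rule works; the details are irrelevant
    # to the argument. Here: repeat the last bit, defaulting to 0.
    return history[-1] if history else 0

world = make_adversarial_world(predictor)

history = []
for _ in range(5):
    guess = world_bit = None
    guess = predictor(history)
    world_bit = world(history)
    assert guess != world_bit  # wrong every single step, by construction
    history.append(world_bit)
```

This is only an analogy for the unbounded case: a real Solomonoff inductor and the oracle machines it ranges over are far richer objects, but the self-reference obstacle has the same diagonal shape.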
Now, there are some ways around this problem (all of them, so far as I know, found by MIRI): modal agents, reflective oracle machines and logical inductors manage to reason about hypothesis classes that include objects like themselves. Outside of MIRI, people working on multiagent systems make do with agents that each assume the other is smaller/simpler/less meta than itself (so at least one of those agents is going to be wrong).
But this entire problem is hidden in your assertion that the agent, which is a Turing machine, “models the entire world, including the agent itself, as one unknown, output-only Turing machine”. The only way to find the other problems swept under the rug here is to formalize or otherwise unpack your proposal.