The “tiling agents” issue mostly wasn’t relevant to this article, but getting around Löb’s Theorem is the whole point there. You can construct a reasoning system more powerful than yourself and set it to work on your own task if, and only if, you can get around Löb’s Theorem and prove things about formal systems as complex as yourself.
Otherwise, you can’t prove that an agent with a higher-order logic than yours will act according to your goals or, put another way, retain your beliefs (i.e., I don’t want to replace myself with an agent who will reason that the sky is green when I’m quite sure it’s blue).
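For reference, here’s the standard formulation of Löb’s Theorem this argument leans on (my paraphrase, not from the article):

```latex
% Löb's Theorem, for a theory T containing enough arithmetic (e.g., PA),
% where \Box P abbreviates Prov_T(\ulcorner P \urcorner), i.e. "T proves P":
\text{If } T \vdash \Box P \rightarrow P, \text{ then } T \vdash P.
```

So an agent reasoning in T can only prove “if T proves P, then P” for statements P it can already prove outright; it cannot establish the general soundness of T, which is exactly what blocks it from fully trusting a successor that reasons in T or anything stronger.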