Can you give an example of the kind of argument that you are trying to refute here?
The argument:
“a (real-world) agent such as a human or AI cannot self improve because the K-complexity of a closed system is constant”
seems obviously silly. A human or AI is not a closed system: it receives sensory inputs, and no one would argue that self-improvement must happen without any sensory inputs.
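To spell out why the premise is true but the conclusion does not transfer, here is a minimal sketch (my own formalization; the symbols $M$, $s_t$, and $x_{1:t}$ are introduced purely for illustration) of the standard Kolmogorov-complexity bounds for closed versus open systems:

```latex
% K(.) denotes Kolmogorov complexity; s_t is the system's state after t steps.
% Closed system: the state is computable from the machine description M and
% the step count t alone, so its complexity is bounded by
\[
  K(s_t) \le K(M) + K(t) + O(1), \qquad K(t) = O(\log t),
\]
% i.e. it grows at most logarithmically in t. This is the (true) premise.
% Open system: with an input stream x_{1:t}, the state may simply record the
% inputs, so the only general bound is
\[
  K(s_t) \le K(M) + K(x_{1:t}) + O(1),
\]
% and K(x_{1:t}) can grow linearly in t for incompressible inputs, so nothing
% caps the complexity of an agent that observes its environment.
```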
Aside: I would caution, echoing Eliezer, that theoretical results about what is and is not computable, and about how many 1s a Turing machine with infinite memory can output before it halts, are hard to apply to the real world without making subtle errors that falsely pump your intuition. The real-world problems we face involve finite machines operating in finite time.