Logical induction doesn’t have interesting guarantees in reinforcement learning, and doesn’t reproduce UDT in any non-trivial way. It just doesn’t solve the problems infra-Bayesianism sets out to solve.
Logical induction will treat a sufficiently good pseudorandom sequence as if it were genuinely random.
A pseudorandom sequence is, by definition, indistinguishable from random by any computationally cheap algorithm, not only by logical induction but also by a bounded infra-Bayesian agent.
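The indistinguishability claim can be illustrated with a toy sketch (my own hypothetical illustration, not a construction from the discussion): a bit sequence generated by iterating a hash function, tested against a cheap next-bit predictor, which can do no better than chance.

```python
import hashlib

def prg_bits(seed: bytes, n: int) -> list[int]:
    """Toy pseudorandom generator: iterate SHA-256 over a seed and
    emit the low bit of each digest. (Illustrative only, not a claim
    about any particular construction.)"""
    bits = []
    state = seed
    for _ in range(n):
        state = hashlib.sha256(state).digest()
        bits.append(state[0] & 1)
    return bits

def predictor_accuracy(bits: list[int]) -> float:
    """A 'cheap' predictor: guess the next bit repeats the previous one.
    Against a good generator its accuracy hovers near chance (~0.5),
    which is the sense in which the sequence looks random to it."""
    hits = sum(1 for a, b in zip(bits, bits[1:]) if a == b)
    return hits / (len(bits) - 1)

bits = prg_bits(b"seed", 10_000)
acc = predictor_accuracy(bits)
print(f"cheap predictor accuracy: {acc:.3f}")  # typically close to 0.5
```

Any predictor with a similarly bounded budget (a logical inductor's traders, or a bounded infra-Bayesian's hypotheses) faces the same barrier; distinguishing the sequence from random would require breaking the generator.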
If it understands most of reality, but not some fundamental particle, it will assume that the particle is behaving in an adversarial manner.
No. Infra-Bayesian agents have priors over infra-hypotheses. They don’t start with complete Knightian uncertainty over everything and gradually reduce it. The Knightian uncertainty might “grow” or “shrink” as a result of the updates.