I kinda get your point here, but it would work better with specific examples, including some non-trivial toy examples of failures for superintelligent agents.
It seems like the Solomonoff induction example is illustrative; what do you think it doesn’t cover?