I now suspect that there is a pretty real and non-vacuous sense in which deep learning is approximate Solomonoff induction.
Even granting that, do you think the same applies to the cognition of an AI created using deep learning—is it approximating Solomonoff induction when presented with a new problem at inference time?
I think it’s not, for reasons like the ones in aysja’s comment.
Yes. I think this may apply to basically all somewhat general minds.