This is probably more contentious, but I believe the concept of “intelligence” is unhelpful and causes confusion. Notably, Legg-Hutter intelligence does not seem to require any “embodied intelligence”.
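For readers who haven’t seen it, Legg and Hutter’s universal intelligence measure scores a policy $\pi$ by its expected reward across all computable environments, weighted by simplicity (this is roughly their published definition, stated here for context):

```latex
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^{\pi}
```

where $E$ is the class of computable, reward-summable environments, $K(\mu)$ is the Kolmogorov complexity of $\mu$, and $V_\mu^{\pi}$ is the expected total reward $\pi$ obtains in $\mu$. Nothing in the definition refers to a body or to physical interaction, and since $K$ is uncomputable, so is $\Upsilon$ itself.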
I would rather stress two key properties of an algorithm: the quality of its world model and its (long-term) planning capabilities. It seems to me (though maybe I’m wrong) that “embodied intelligence” is not very relevant to either world-model inference or planning.
Don’t make the mistake of basing your notions of AI on uncomputable formalisms. That mistake has destroyed more minds on LW than probably anything else.