1) Actually, the current version of UDT that I write down as an equation involves maximizing over maps from sensory sequences to actions. If there’s a version of UDT that maximizes over something else, let me know.

My version of UDT (http://lesswrong.com/r/discussion/lw/jub/updateless_intelligence_metrics_in_the_multiverse/) maximizes over programs written for a given abstract “robot” (a universal Turing machine plus input channels).

2) We could say that it ought to be obvious to the math intuition module that choosing a map R := S->A ought to logically imply R^ = S^->A for simple isomorphisms over sensory experience for isomorphic reductive hypotheses, thereby eliminating a possible degree of freedom in the bridging laws. I agree in principle. We don’t actually have that math intuition module. Yes, this is a problem with all logical decision theories, but it is still a problem.

Regarding an abstract solution to logical uncertainty, I think the solution given in http://lesswrong.com/lw/imz/notes_on_logical_priors_from_the_miri_workshop/ (which I use in my own post) is not bad. It still runs into the Loebian obstacle; I think I have a solution for that as well, and I’m going to write about it soon. Regarding something that can be implemented within reasonable computing-resource constraints, see below...

3) Aspects of the problem like “What prior space of universes?” aren’t solved by saying “UDT”. Nor “How exactly do you identify processes computationally isomorphic to yourself inside that universe?” Nor “How do you manipulate a map which is smaller than the territory, where you don’t reason about objects by simulating out the actual atoms?” Nor very much of “How do I modify myself, given that I’m made of parts?”

The prior space of universes is covered: unsurprisingly, it’s the Solomonoff prior (over abstract sequences of bits representing the universe, not over sensory data). Regarding the other questions, my formalism doesn’t give an explicit solution (since I can’t explicitly write down the optimal program of a given length). However, the function I suggest maximizing already takes everything into account, including restricted computing resources.
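To make “the Solomonoff prior over abstract bit sequences” concrete, here is a minimal sketch, assuming a toy three-opcode machine in place of a real universal Turing machine (the opcodes and `run_toy_machine` are my own illustrative inventions, not part of the formalism). It approximates M(x) = sum over programs p outputting x of 2^(-|p|) by brute-force enumeration of short programs:

```python
from itertools import product
from collections import defaultdict

def run_toy_machine(program, n_out):
    """Toy stand-in for a universal machine: the first two bits pick an
    'opcode', the rest is data. Not a real UTM, just enough structure to
    make the prior computation concrete."""
    if len(program) < 3:
        return None
    op, data = program[:2], program[2:]
    if op == (0, 0):  # output the data verbatim, then halt
        return data[:n_out] if len(data) >= n_out else None
    if op == (0, 1):  # repeat the data forever
        out = []
        while len(out) < n_out:
            out.extend(data)
        return tuple(out[:n_out])
    if op == (1, 0):  # output the bitwise complement of the data
        if len(data) < n_out:
            return None
        return tuple(1 - b for b in data[:n_out])
    return None       # opcode (1, 1): treated as diverging

def toy_solomonoff_prior(n_out=3, max_len=8):
    """Weight each output string by 2^-len(p) summed over the programs p
    that produce it, truncated to programs of length <= max_len."""
    measure = defaultdict(float)
    for length in range(1, max_len + 1):
        for program in product((0, 1), repeat=length):
            x = run_toy_machine(program, n_out)
            if x is not None:
                measure[x] += 2.0 ** -length
    return dict(measure)

prior = toy_solomonoff_prior()
# Compressible (repetitive) strings get more weight than irregular ones:
assert prior[(0, 0, 0)] > prior[(0, 1, 1)]
```

With a genuine universal machine the same enumeration works in principle, but the full sum is only lower-semicomputable; the truncation to `max_len` is what makes this sketch runnable.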
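Going back to point 1: the objects being maximized over there, maps from sensory sequences to actions, can be enumerated explicitly in a toy setting. A minimal sketch, assuming one-bit percepts, one-bit actions, and an invented three-world prior (all names and numbers are illustrative, not from either formalism):

```python
from itertools import product

# Toy hypotheses: each world fixes the percept the agent receives and the
# utility of each action in that world.
WORLDS = {
    "heaven_if_0": {"percept": 0, "utility": {0: 1.0, 1: 0.0}},
    "heaven_if_1": {"percept": 0, "utility": {0: 0.0, 1: 0.7}},
    "inverted":    {"percept": 1, "utility": {0: 0.0, 1: 1.0}},
}
PRIOR = {"heaven_if_0": 0.4, "heaven_if_1": 0.4, "inverted": 0.2}

def expected_utility(policy):
    """Expected utility of a percept->action map, scored from the prior
    (updatelessly), not from a posterior after observing the percept."""
    return sum(PRIOR[w] * WORLDS[w]["utility"][policy[WORLDS[w]["percept"]]]
               for w in WORLDS)

def best_policy():
    """Maximize over all maps from percepts {0, 1} to actions {0, 1}."""
    policies = [{0: a0, 1: a1} for a0, a1 in product((0, 1), repeat=2)]
    return max(policies, key=expected_utility)

policy = best_policy()
```

In my formalism the maximization runs over programs for the abstract robot rather than over bare input-output maps, but the structure of scoring each candidate by its expected utility under the prior is the same.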