Even more than an explanation here, I would appreciate one on the LessWrong Wiki, because there currently isn’t one!
What kind of explanation are you looking for, though? The best explanation of UDT I can currently give, without some sort of additional information about where you find it confusing or how it should be improved, is in my first post about it, Towards a New Decision Theory.
Only as an intuition pump; when it’s time to get down to brass tacks I’m much happier to talk about a well-specified program than a poorly-specified human.
Ah, ok. Some people (such as Ilya Shpitser) do seem to be thinking mostly in terms of human application, so it seems a good idea to make the distinction explicit.