Even more than an explanation here, I would appreciate one on the LessWrong Wiki, because there currently isn’t one! I’ve just reread the LW posts I could find about UDT, and I guess I should let them stew for a while. I might also ask people at the current MIRI workshop for their thoughts in person.
Another thing to keep in mind is that UDT is currently formulated mainly for AI rather than human use (whereas you seem to be thinking mostly in human terms).
Only as an intuition pump; when it’s time to get down to brass tacks I’m much happier to talk about a well-specified program than a poorly-specified human.
I wrote a brief mathematical write-up of “bare bones” UDT1 and UDT1.1, describing the versions that Wei Dai gave in his original posts. It doesn’t get into more advanced versions that invoke proof-length limits, try to “play chicken with the universe”, or otherwise develop how the “mathematical intuition module” (MIM) is supposed to work.
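To make the bare-bones version concrete, here is a minimal sketch of the UDT1 decision rule in Python. All of the names here are hypothetical, and the “mathematical intuition module” is replaced by a stipulated table: a function from each candidate output to a probability distribution over execution histories of the world-programs. The agent simply picks the output maximizing expected utility under that stipulated distribution.

```python
# A minimal sketch of "bare bones" UDT1; names and payoffs are illustrative.
# Instead of a real "mathematical intuition module" (MIM), we stipulate
# `intuition`, which maps each candidate output Y to a probability
# distribution over possible execution histories of the world-programs.

def udt1_decide(output_candidates, intuition, utility):
    """Return the output Y maximizing sum_E P(E | S outputs Y) * U(E)."""
    def expected_utility(y):
        return sum(p * utility(history)
                   for history, p in intuition(y).items())
    return max(output_candidates, key=expected_utility)

# Toy Newcomb-style problem: the stipulated MIM says Omega predicts the
# agent's output perfectly, so each output gets probability 1 on the
# matching history.
def intuition(y):
    if y == "one-box":
        return {("one-box", "box-full"): 1.0}
    return {("two-box", "box-empty"): 1.0}

def utility(history):
    payoffs = {("one-box", "box-full"): 1_000_000,
               ("two-box", "box-empty"): 1_000}
    return payoffs[history]

print(udt1_decide(["one-box", "two-box"], intuition, utility))  # one-box
```

Since the MIM is stipulated rather than computed, all the real difficulty is hidden inside `intuition`; the decision rule itself is just an argmax.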
Without trying to make too much of the analogy, I think that I would describe TDT as “non-naive” CDT, and UDT as “non-naive” EDT.
This is not much of an exaggeration. Still, UDT basically solves many toy problems where we get to declare what the output of the MIM is (“Omega tells you that …”).
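UDT1.1’s refinement is that the agent optimizes over whole input-to-output mappings (policies) rather than choosing an output separately for each input. A hedged sketch, on a toy coordination problem in the spirit of the one that motivated UDT1.1 (the payoffs here are made up for illustration):

```python
# A minimal sketch of UDT1.1's policy selection; all names and payoffs
# are illustrative. Every instance of the agent runs the same code, so
# selecting one global policy coordinates all instances at once.
from itertools import product

def udt1_1_decide(inputs, outputs, world_utility):
    """Return the input->output mapping maximizing the stipulated utility."""
    policies = [dict(zip(inputs, choice))
                for choice in product(outputs, repeat=len(inputs))]
    return max(policies, key=world_utility)

# Toy problem: two copies of the agent see inputs 1 and 2; the world pays
# off only if the copy seeing 1 outputs "A" and the copy seeing 2 outputs
# "B". Per-input reasoning (UDT1) can fail to coordinate here; optimizing
# over the global policy picks out the right mapping directly.
def world_utility(policy):
    return 10 if (policy[1] == "A" and policy[2] == "B") else 0

print(udt1_1_decide([1, 2], ["A", "B"], world_utility))  # {1: 'A', 2: 'B'}
```

As with the UDT1 sketch, the hard part — computing `world_utility` from a prior over world-programs — is exactly the MIM’s job, and is stipulated here.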
What kind of explanation are you looking for, though? The best explanation of UDT I can currently give, without some sort of additional information about where you find it confusing or how it should be improved, is in my first post about it, Towards a New Decision Theory.
Ah, ok. Some people (such as Ilya Shpitser) do seem to be thinking mostly in terms of human application, so it seems a good idea to make the distinction explicit.
In this write-up, it really seems like all of the content is in how the mathematical intuition module works.