A Short Note on UDT

In my last post, I stumbled across some ideas which I thought were original, but which were already contained in UDT. I suspect that was because these ideas haven't been given much emphasis in any of the articles I've read about UDT, so I wanted to highlight them here.

We begin with some definitions. Some inputs in an Input-Output map will be possible for some agents to experience, but not for others. We will describe such inputs and the situations they represent as conditionally consistent. Given a particular agent, we will call an input/situation compatible if the agent is consistent with the corresponding situation and incompatible otherwise. Similarly, we will call agents that are consistent with a conditionally consistent input/situation compatible and those that aren't incompatible.
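These definitions can be illustrated with a toy sketch (all names here are hypothetical, chosen only for illustration): an input that only some agents can experience, making it conditionally consistent, with agents split into compatible and incompatible accordingly.

```python
# Toy illustration (hypothetical setup): the input "predictor announces:
# you will two-box" is conditionally consistent -- an accurate predictor
# only delivers it to agents who actually two-box upon hearing it, so
# two-boxers are compatible with it and one-boxers are incompatible.

def one_boxer(observation):
    return "one-box"

def two_boxer(observation):
    return "two-box"

def compatible(agent, observation):
    """Is this agent consistent with the situation the input represents?"""
    if observation == "predictor announces: you will two-box":
        return agent(observation) == "two-box"
    return True

print(compatible(two_boxer, "predictor announces: you will two-box"))  # True
print(compatible(one_boxer, "predictor announces: you will two-box"))  # False
```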

We note the following points:

  • UDT uses an Input-Output map instead of a Situation-Output map. It is easy to miss how important this choice is. Suppose we have an input representing a situation that is conditionally consistent. Trying to ask what an incompatible agent does in such a situation is problematic, or at least difficult, as the Principle of Explosion means that all such situations are equivalent. On the other hand, it is much easier to ask how the agent responds to a sequence of inputs representing an incompatible situation. The agent must respond somehow to such an input, even if it is by doing nothing or crashing. Situations are also modelled (via the Mathematical Intuition Function), but the point is that UDT models inputs and situations separately.

  • Given the previous point, it is convenient to define an agent's counterfactual action in an incompatible situation as its response to the input representing that situation. For all compatible situations, this produces the same action as if we'd simply asked what the agent would do in such a situation. For conditionally consistent situations the agent is incompatible with, it explains the incompatibility: any agent that would respond a certain way to particular inputs won't be put in such a situation. (There might be conditionally consistent situations where compatibility isn't dependent on responses to inputs, i.e. only agents running particular source code are placed in a particular position, but UDT isn't designed to optimise for this.)

  • Similarly, UDT predictors don't actually predict what an agent does in a situation, but what an agent does when given an input representing a situation. This is a broader concept that allows them to predict behaviours in situations that are incompatible with the agent. For a more formal explanation of these ideas, see Logical Counterfactuals for Perfect Predictors.
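The points above can be sketched in code. The sketch below is a toy model, not UDT itself, and every name in it is hypothetical: an agent is a total map from inputs to outputs, so a predictor can ask how it would respond to the input representing a situation, and this query is well-defined even when the agent is incompatible with that situation.

```python
# Toy sketch (hypothetical names, not UDT itself): the agent is a total
# Input-Output map, so querying its response to any input is just a
# function evaluation -- no Principle of Explosion problems.

def paying_agent(observation: str) -> str:
    """Total map from inputs to actions, defined on every input,
    including ones this agent may never actually receive."""
    if observation == "you are at the ATM in town":
        return "pay the driver"
    return "do nothing"  # the agent must respond somehow

def predictor(agent_fn) -> bool:
    # Predicts the response to an *input*, not behaviour "in a situation".
    return agent_fn("you are at the ATM in town") == "pay the driver"

# Parfit's-hitchhiker-style selection: only agents whose map answers
# "pay the driver" are ever placed in the in-town situation -- which is
# exactly why that situation is incompatible with non-paying agents.
print(predictor(paying_agent))  # True
```

This also illustrates the second point: the agent's counterfactual action in a situation it never faces is simply its map's output on the corresponding input.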
