gives no guidelines on the design of decision-making algorithms.
Nowhere am I purporting to give guidelines for the design of a decision-making algorithm. As I said, I am not suggesting any alteration of the UDT formalism. I was also explicit in the OP that there is no problem understanding at an intuitive level what the agent’s builders were thinking when they decided to use UDT.
If all you care about is designing an agent that you can set loose to harvest utility for you, then my post is not meant to be interesting to you.
Beliefs should pay rent, not fly in the ether, unattached to what they are supposed to be about.
The whole Eliezer quote is that beliefs should “pay rent in future anticipations”. Beliefs about which once-possible world is actual do this.
The beliefs in question are yours, and anticipation is about the agent’s design or behavior.
The quote applies to humans; I use it as appropriately ported to more formal decision-making, where “anticipated experience” doesn’t generally make sense.