“choose your action so that the fact of your choosing it in your current situation logically implies the highest expected utility (weighted over all apriori possible worlds before you learned your current situation) compared to all other actions you could take in your current situation”.
That sounds awkward. Would you say it’s equivalent to “choose globally winning strategies, not just locally winning actions?”
As far as I know, you understand UDT and can answer that question yourself :-) But to me your formulation sounds a little vague. If a newbie tries to use it to solve Counterfactual Mugging, I think he/she may get confused about the intended meaning of “global”.
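For a newbie, the quoted rule can be made concrete on Counterfactual Mugging with a small sketch. The payoffs are the standard ones from the thought experiment ($100 demanded on tails, $10000 rewarded on heads iff Omega predicts you'd pay); the function names and structure are purely illustrative, not any canonical formalization of UDT:

```python
# Illustrative sketch of the quoted decision rule, applied to
# Counterfactual Mugging. A-priori possible worlds, weighted before
# you learn which branch you're in.
WORLDS = [("heads", 0.5), ("tails", 0.5)]

def payoff(world, policy):
    """Payoff of committing to `policy` ("pay" or "refuse") in `world`."""
    if world == "tails":
        # Omega demands $100; paying costs you, refusing costs nothing.
        return -100 if policy == "pay" else 0
    # Heads: Omega pays $10000 iff it predicts you'd pay on tails.
    return 10000 if policy == "pay" else 0

def udt_choice():
    # Pick the policy with the highest expected utility over all
    # a-priori possible worlds, per the quoted rule.
    def eu(policy):
        return sum(p * payoff(w, policy) for w, p in WORLDS)
    return max(["pay", "refuse"], key=eu)

print(udt_choice())  # "pay": EU = 0.5*(-100) + 0.5*10000 = 4950 > 0
```

Note that conditioning on the "tails" branch after learning it (the "local" view) would recommend refusing; weighting over both branches a priori is what makes "pay" come out on top.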
I still don’t know if I understand UDT :D
And yeah, “globally winning” probably should have been replaced with “optimal,” since “local” means something specific about payoff matrices and I don’t want to imply a corresponding “global.”