What I was trying to get at is not the choice of A or B, but the choice to use TDT.
The same as any other choice: assuming that you are calculating whether you prefer to adopt TDT, and that preferring to adopt TDT results in you adopting TDT, you have the choice to adopt TDT.
If you’re a human, why would you adopt TDT? The reason must be one that is not an answer to “why would you cooperate?”, nor to “why would you tell people you have adopted TDT?”
You might be able to prove that utility functions and decision theories are equivalent;
I certainly don’t believe that. (I’m making the simplifying assumption of consequentialism, rather than some other value system tortured into being represented as a utility function.) The utility function is what assigns utilities to the various possible states of the world (in the widest possible sense); decision theories differ in how they link the possible choices to those possible states, not in the utilities of the states themselves.
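To make that division of labor concrete, here is a minimal Python sketch of a one-shot Prisoner’s Dilemma against a copy of yourself. Everything in it is an illustrative assumption, not anyone’s canonical formulation: both “decision theories” share the very same utility function over world states, and differ only in how they map a choice to a predicted state (one treats the opponent’s move as causally fixed, the other assumes the opponent runs the same algorithm and so chooses likewise).

```python
def utility(state):
    """The utility function: assigns a utility to each possible world state."""
    return {"both_cooperate": 3, "exploited": 0,
            "exploiter": 5, "both_defect": 1}[state]

def cdt_like_outcome(choice):
    """One decision theory: treats the opponent's move as causally fixed,
    so each choice is linked to a state independently of our own policy."""
    opponent = "defect"  # taken as given, not influenced by our choice
    if choice == "cooperate":
        return "exploited" if opponent == "defect" else "both_cooperate"
    return "both_defect" if opponent == "defect" else "exploiter"

def tdt_like_outcome(choice):
    """Another decision theory: assumes the opponent runs the same algorithm,
    so each choice is linked to the state where the opponent chooses likewise."""
    return {"cooperate": "both_cooperate", "defect": "both_defect"}[choice]

def decide(outcome_model):
    # The utility function is identical in both cases; only the mapping
    # from choices to world states differs.
    return max(["cooperate", "defect"], key=lambda c: utility(outcome_model(c)))

print(decide(cdt_like_outcome))  # -> "defect"
print(decide(tdt_like_outcome))  # -> "cooperate"
```

Swapping `cdt_like_outcome` for `tdt_like_outcome` changes the decision without touching `utility` at all, which is the sense in which the two are not equivalent.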
An agent chooses to change decision theories if their preference, calculated according to their current utility function and decision theory, is to change their decision theory, and that preference results in them actually changing it. I’m not sure to what extent that applies to humans. For them it may be more like realizing that TDT is a closer approximation of how their decision-making process actually functions given the correct input, and that insofar as their decision making was previously approximated by another decision theory, it was distorted by an oversimplified understanding of the world.
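The self-modification criterion in the first sentence can be written out as a short sketch. This assumes the expected utilities of running each theory from here on are already computed, by the agent’s current theory; the function names and numbers are hypothetical.

```python
def maybe_switch(current_theory, candidate_theory, expected_utility):
    """Switch only if the agent's *current* theory and utility function
    rate the switch as preferable; the preference produces the change."""
    if expected_utility(candidate_theory) > expected_utility(current_theory):
        return candidate_theory
    return current_theory

# Toy usage: an agent whose current theory expects TDT to do better
# on the problems it anticipates facing would switch.
theories = {"CDT": 1.0, "TDT": 3.0}  # hypothetical expected utilities
print(maybe_switch("CDT", "TDT", theories.get))  # -> "TDT"
```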