This is not the correct level for thinking about decision theory; we don’t think about any of our decisions that way. Decision theory is about determining the output of the specific choice-making procedure “consider all available options and pick the best one in the moment”.
Act only according to that maxim whereby you can at the same time will that it should become a universal law.
I don’t think this is incompatible with making the best decision in the moment. You just decide in the moment to go with the more sophisticated version of the categorical imperative, because that seems best?
If I didn’t reason like this, I would not vote, and I would have a harder time sticking to commitments.
I agree thinking about decisions in a way that is not purely greedy is complicated.
The categorical imperative has been popular for a long while.
I think Rationalists have stumbled into reasonable beliefs about good strategies for iterated games/situations where reputation matters and people learn about your actions. But you don’t need exotic decision theories for that.
I address this in the post:
...makes sense under two conditions:
Their cooperative actions directly cause desirable outcomes by making observers think they are trustworthy/cooperative.
Being deceptive is too costly, either because it’s literally difficult (requires too much planning/thought), or because it makes future deception impossible (e.g. because of reputation and repeated interactions).
Of course, whether or not we have some free will, we are not entirely free: some actions are outside our capabilities, and being sufficiently good at deception may be one of them. This is why one might rationally decide to always be honest and cooperative: successfully pretending to be cooperative only when others are watching might be literally impossible, and messing up even once might be very costly.
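To make that second condition concrete, here is a toy expected-value comparison (a minimal sketch in Python; the payoffs, the per-round catch probability, and the reputation penalty are all my illustrative assumptions, not numbers from the post). It compares always-honest play against cheating every round, where each cheat risks being caught and losing your reputation for all remaining rounds:

```python
def ev_honest(rounds: int, coop_payoff: float = 3.0) -> float:
    """Genuine cooperation: a steady payoff every round."""
    return rounds * coop_payoff

def ev_deceptive(rounds: int,
                 coop_payoff: float = 3.0,
                 cheat_bonus: float = 2.0,
                 p_caught: float = 0.1,
                 ruined_payoff: float = 1.0) -> float:
    """Cheat every round until caught; once caught, reputation is gone
    and every remaining round pays only ruined_payoff."""
    ev = 0.0
    p_trusted = 1.0  # probability you haven't been caught yet
    for _ in range(rounds):
        ev += p_trusted * (coop_payoff + cheat_bonus)
        ev += (1.0 - p_trusted) * ruined_payoff
        p_trusted *= 1.0 - p_caught
    return ev

for n in (5, 20, 100):
    print(n, ev_honest(n), round(ev_deceptive(n), 1))
# 5   15.0  21.4   -> deception wins when interactions are few
# 20  60.0  55.1   -> reputation starts to dominate
# 100 300.0 140.0  -> honesty wins decisively in long games
```

With these made-up numbers, deception wins over five rounds but honesty wins decisively over a hundred: the longer the shadow of the future, the harder the second condition binds.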
How does your purely causal framing escape backward induction? Pure CDT agents defect in the iterated version of the prisoner’s dilemma too: at the last time step you wouldn’t care about your reputation, so you defect, and knowing that, the same reasoning applies to the second-to-last step and unravels all the way back to the first.
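For concreteness, the unraveling argument can be mechanized in a few lines (my own sketch; the payoff matrix is the textbook prisoner’s dilemma, not anything specified in this thread):

```python
# Stage-game payoffs for (my_move, their_move); 'C' = cooperate, 'D' = defect.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def dominant_move() -> str:
    """The strictly dominant stage-game move (here 'D'): better against
    either opponent move."""
    for me, other in (('C', 'D'), ('D', 'C')):
        if all(PAYOFF[(me, opp)] > PAYOFF[(other, opp)] for opp in 'CD'):
            return me
    raise ValueError("no strictly dominant move")

def backward_induct(rounds: int) -> list[str]:
    """Solve the finitely repeated PD from the last round backwards, as a
    pure causal best-responder would: in the final round there is no
    reputation left to protect, so play the dominant move; each earlier
    round then faces a fixed continuation, so the same reasoning applies."""
    plan: list[str] = []
    for _ in range(rounds):
        plan.insert(0, dominant_move())
    return plan

print(backward_induct(10))  # ['D', 'D', ..., 'D']: cooperation never starts
```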
In conclusion, if you find yourself freely choosing between options, it is rational to take a dominating strategy, like two-boxing in Newcomb’s problem or defecting in the one-shot prisoner’s dilemma. However, given the opportunity to actually pre-commit to decisions that get you better outcomes conditional on that pre-commitment, you should do so.
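A small worked comparison of the two framings (the payoffs are the standard Newcomb numbers, and the 99% predictor accuracy is my assumption; neither is taken from the post):

```python
ACCURACY = 0.99  # assumed predictor accuracy

def in_the_moment(box_b_filled: bool) -> dict[str, int]:
    """Once the boxes are set, two-boxing dominates: it adds $1,000
    regardless of what is already in box B."""
    b = 1_000_000 if box_b_filled else 0
    return {"one-box": b, "two-box": b + 1_000}

def precommitted_ev(strategy: str) -> float:
    """Expected payoff if the predictor reads your committed strategy
    before filling the boxes."""
    if strategy == "one-box":
        return ACCURACY * 1_000_000
    return (1 - ACCURACY) * 1_000_000 + 1_000

print(in_the_moment(True))         # two-box wins by $1,000...
print(in_the_moment(False))        # ...however the boxes were set,
print(precommitted_ev("one-box"))  # 990000.0: yet committing to one-box
print(precommitted_ev("two-box"))  # 11000.0   beats committing to two-box
```

Dominance holds inside each row of `in_the_moment`, while pre-commitment changes which row you end up in; that is exactly the distinction the conclusion draws.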
How do you tell whether you are in a “pre-commitment” situation or a “defecting” one?