I previously wrote a comment that seems relevant here:
How to translate identity-based decision making into values and/or beliefs seems non-trivial, and can perhaps be compared to the problem of translating anticipated-reward-type decision making into preferences over states of the world or over math.
An agent that lets identity influence its decisions probably deviates from ideal rationality, but how do we fix that? If we just excise the identity-based parts of its decision procedure without any compensation, that could easily make the agent worse off, for example if its CEV depends on its identity.