It suggests that your entire utility derives from your personal suffering from the temperature, so these suffering entities cannot be empathetic humans.
I think this particular assumption (axiom?) isn’t required. The agents’ utilities can be linear in the temperature for any reason. If an agent is concerned only with their own suffering, then U = -T. If an agent is equally concerned with the suffering of all agents, then U = -100T. The same strategy profiles are Nash equilibria in both cases.
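To spell that out (notation assumed here: $s_i$ is agent $i$'s strategy, $s_{-i}$ everyone else's, and $T(s_i, s_{-i})$ the resulting temperature), multiplying a utility function by a positive constant doesn't change which strategies maximize it:

$$\arg\max_{s_i} \bigl(-T(s_i, s_{-i})\bigr) = \arg\max_{s_i} \bigl(-100\,T(s_i, s_{-i})\bigr).$$

So each agent's best responses, and therefore the set of Nash equilibria, are identical under $U = -T$ and $U = -100T$.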
Each robot is supposed to assume that, if it changes its strategy, none of the others does the same.
This seems right.
Also, humans aren’t rational.
Also, humans don’t live forever.
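To make the quoted assumption concrete, here is a minimal sketch of that unilateral-deviation check, done by brute force over a toy game. The payoff rule (temperature equals the average of the dials) and the player count are assumed stand-ins, not the rules from the post, and the function names are hypothetical; it also shows the point above, that scaling utilities by 100 doesn't change the verdict.

```python
def is_nash_equilibrium(utility, profile, strategy_sets):
    """Return True if no player gains by a *unilateral* deviation, i.e. by
    changing their own strategy while everyone else's is held fixed."""
    for i in range(len(profile)):
        current = utility(i, profile)
        for alt in strategy_sets[i]:
            deviated = profile[:i] + (alt,) + profile[i + 1:]
            if utility(i, deviated) > current:
                return False  # player i would profit by deviating alone
    return True


# Hypothetical stand-in payoffs: temperature is the average of the dials
# (an assumed toy rule, not the one from the post) and utility is -scale * T.
def make_utility(scale):
    def utility(i, profile):
        temperature = sum(profile) / len(profile)
        return -scale * temperature
    return utility


strategies = [range(101)] * 3   # 3 players with dials 0..100 (toy size)
profile = (0, 0, 0)             # everyone sets their dial to 0

# Multiplying utilities by 100 does not change the verdict:
print(is_nash_equilibrium(make_utility(1), profile, strategies))    # True
print(is_nash_equilibrium(make_utility(100), profile, strategies))  # True
```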