How much should we read into the fact that LLMs’ choices across a range of scenarios can be organized into a consistent utility function? Non-human animals are often more rational than humans in the axiomatic sense, so axiomatic consistency on its own may not tell us much. (The following paper has an interesting discussion of this and related topics: https://globalprioritiesinstitute.org/wp-content/uploads/Adam-Bales-Will-AI-Avoid-Exploitation.pdf.)