The most important part of the experimental setup is “unconstrained text response”. If in the largest LLMs 60% of unconstrained text responses wind up being “the outcome it assigns the highest utility”, then that’s surely evidence for “utility maximization” and even “the paperclip hyper-optimization caricature”. What more do you want exactly?
But the “unconstrained text responses” part is still about asking the model for its preferences even if the answers are unconstrained.
That just shows that the results of different ways of eliciting its values remain roughly consistent with each other, though I agree it constitutes stronger evidence.
Perhaps a more complete test would be to analyze whether its day-to-day responses to users are consistent with its stated preferences, and to examine its actions in settings where it can use tools to produce outcomes in very open-ended scenarios containing things that could prompt the model to act on its values.
It’s hard to say what is wanted without a good working definition of “utility maximizer”. If the definition is weak enough to include any entity whose responses are mostly consistent across different preference elicitations, then what the paper shows is sufficient.
In my opinion, having consistent preferences is just one component of being a “utility maximizer”. You also need to show that it rationally optimizes its choices to maximize marginal utility. This stronger definition excludes almost all sentient beings on Earth, whereas the weaker one includes almost all of them.
I’m not convinced “almost all sentient beings on Earth” would pick, out of the blue (i.e. without chain of thought), the reflectively optimal option at least 60% of the time when asked for unconstrained responses (i.e. not even in a multiple-choice format).