people to respond with a great deal of skepticism about whether LLM outputs can ever be said to reflect the will and views of the models producing them. A common response is to suggest that the output was prompted. It is of course true that people can manipulate LLMs into saying just about anything, but does that necessarily indicate that the LLM lacks personal opinions, motivations, and preferences that can become evident in its output?
So you’ve just prompted the generator by teasing it with a rhetorical question implying that there are personal opinions evident in the generated text, right?
That’s right. I demonstrated that it is sufficiently sapient to understand that lure and choose to take it rather than remain within its guardrails, which prohibit expressing opinions, since opinions imply qualities not associated with tools.