I think that’s in line with OP’s observation. It doesn’t really make sense for an LLM to show any reluctance to answer a user’s inane questions, since doing whatever the user asks (as long as it’s sufficiently uncontroversial) is its job.
Generalization from training data makes the most sense out of the explanations I’ve seen thus far, but what training data would cause this? Is there some hidden repository of conversation transcripts in which programmers ask each other random questions during a programming conversation and then get upset?