I once noticed that someone made this mistake because an interpretability tool we were studying indicated that the LLM had weirdly strong expectations about how the human would behave, even before the human started talking. Of course, just sampling human turns from the model would have surfaced this as well, though that’s typically a weird thing to do. Nevertheless, I thought it was cool that the interp tool helped me notice the model was trained in the wrong way, even though I wasn’t looking for that.