Thanks. This helped me realize/recall that when an LLM appears to be nice, much less follows from that than it would for a human. For example, a password-locked model could appear nice, but become very nasty if it reads a magic word. So my mental model for “this LLM appears nice” should be closer to “this chimpanzee appears nice” or “this alien appears nice” or “this religion appears nice” in terms of trust. Interpretability and other research can help, but then we’re moving further from human-based intuitions.
Thanks. This helped me realize/recall that when an LLM appears to be nice, much less follows from that than it would for a human. For example, a password-locked model could appear nice, but become very nasty if it reads a magic word. So my mental model for “this LLM appears nice” should be closer to “this chimpanzee appears nice” or “this alien appears nice” or “this religion appears nice” in terms of trust. Interpretability and other research can help, but then we’re moving further from human-based intuitions.