Owain_Evans (Owain Evans)
I like this idea. It’s possible something like this already exists but I’m not aware of it.
Thanks for the breakdown! The idea of using pairs makes sense.
Yes, it’s plausible to me that this capability is data specific. E.g., it might also be better with “heads/tails” or “0/1” because of examples of this in the training data.
Do you have results for a measure of accuracy or correlation? It would also be worth comparing results for two different distributions on the temperature, e.g. the uniform on [0.5,1.5] that you tried versus another interval like [0,2], or a non-uniform distribution.
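Concretely, something like this sketch is what I have in mind (`model_reported_temperature` is a hypothetical stand-in for sampling at a given temperature and asking the model to report it; here it just simulates a noisy estimate so the sketch runs):

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(0)

def model_reported_temperature(true_temp):
    # Hypothetical stand-in: in the real experiment, sample text at
    # `true_temp` and ask the model what temperature it was sampled at.
    return true_temp + rng.normal(0, 0.3)

def evaluate(true_temps):
    reported = np.array([model_reported_temperature(t) for t in true_temps])
    r, _ = pearsonr(true_temps, reported)
    rho, _ = spearmanr(true_temps, reported)
    return r, rho

# Compare two priors over temperature.
print("Uniform [0.5, 1.5]:", evaluate(rng.uniform(0.5, 1.5, size=200)))
print("Uniform [0.0, 2.0]:", evaluate(rng.uniform(0.0, 2.0, size=200)))
```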
Me, Myself, and AI: the Situational Awareness Dataset (SAD) for LLMs
The “Still no lie detector for language models” paper is here: https://arxiv.org/pdf/2307.00175
The paper in the OP seems somewhat related to my post from earlier this year.
I agree that there are ways to explain the results and these points from Steven and Thane make sense. I will note that the models are significantly more reliable at learning in-distribution (i.e. to predict the training set) than they are at generalizing to the evaluations that involve verbalizing the latent state (and answering downstream questions about it). So it’s not the case that learning to predict the training set (or inputs very similar to training inputs) automatically results in generalization to the verbalized evaluations. We do see improvement in reliability with GPT-4 over GPT-3.5, but we don’t have enough information to draw any firm conclusions about scaling.
Yes, if you know what dangerous knowledge you are looking for, you could try to remove it using influence functions. Another approach (potentially much cheaper) is unlearning techniques.
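For reference, the simplest form of unlearning is just gradient ascent on the documents you want forgotten. The sketch below assumes a small Hugging Face model and placeholder forget data; in practice you’d also need steps to preserve performance on everything else:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

forget_texts = ["<documents containing the knowledge to remove>"]  # placeholder

for text in forget_texts:
    batch = tokenizer(text, return_tensors="pt")
    outputs = model(**batch, labels=batch["input_ids"])
    # Negate the loss: a step of gradient *ascent* makes the model worse
    # at predicting the forget documents.
    (-outputs.loss).backward()
    optimizer.step()
    optimizer.zero_grad()
```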
I agree about the CoT point for reconstructing things. If the CoT is faithful/explicit, then this should be easier to monitor by using a second cheaper LLM to block the stronger LLM if it starts thinking about nukes. You could imagine censoring whole subject areas from the training (rather than just censoring specific parts of documents). My guess is that this makes learning certain facts extremely hard even without CoT because some facts were only learned by humans after extensive empirical experiments.
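A minimal sketch of that kind of monitor (assuming the OpenAI Python client; the model name and topic list are just placeholders, not a tested setup):

```python
from openai import OpenAI

client = OpenAI()
BLOCKED_TOPICS = "nuclear weapons design, bioweapons"  # illustrative placeholder

def cot_is_allowed(cot_chunk: str) -> bool:
    """Ask a cheap monitor model whether a chain-of-thought chunk touches
    any blocked topic; return False to halt the stronger model."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder for "a cheaper LLM"
        messages=[
            {"role": "system",
             "content": f"Answer YES or NO: does the following text discuss any of: {BLOCKED_TOPICS}?"},
            {"role": "user", "content": cot_chunk},
        ],
    )
    return "YES" not in resp.choices[0].message.content.upper()
```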
Good question. I expect you would find some degree of consistency here. Johannes or Dami might be able to share some results on this.
Connecting the Dots: LLMs can Infer & Verbalize Latent Structure from Training Data
How do LLMs give truthful answers? A discussion of LLM vs. human reasoning, ensembles & parrots
(Paper author). The benchmark came out in September 2021. Since then we published some results for new models here in 2022. There are also results for GPT-4 and other models, some of which you can find at Papers with Code’s leaderboard (https://paperswithcode.com/sota/question-answering-on-truthfulqa).
Thanks. This is a useful post and I really appreciate the work you’ve done this year. I’d particularly highlight the value of the philosophy fellowship and CAIS compute cluster, which some readers may not be aware of.
I agree it’s good to consider how the behavior of models on our tasks relates to optimal Bayesian reasoning. That said, I’m not sure how to define or calculate the “ground truth” for optimal reasoning. (Does it depend on using the pretraining distribution as a prior, and if so, how should we estimate that? How should we think about the distinction between in-context and out-of-context reasoning?)
In any case, there is some evidence against models being close to Bayesian optimality (however exactly optimality is defined):
1. Results on the same task differ between GPT-3 and Llama-2 models (two models that have fairly similar overall capabilities), with Llama-2 being slightly more influenced by declarative information.
2. From the Bayesian perspective, including “realized descriptions” should have a significant impact on how much the model is influenced by “unrealized descriptions”. The effects we see seem smaller than expected (see Figure 4 and Table 2).
Incidentally, I like the idea of testing in different languages to see if the model is encoding the information more abstractly.
Paper: Tell, Don’t Show: Declarative facts influence how LLMs generalize
My guess is that a model with 1-10B params could benefit from CoT if trained using these techniques (https://arxiv.org/abs/2306.11644, https://arxiv.org/abs/2306.02707). Then there’s reduced precision and other tricks to further shrink the model.
That said, I think there’s a mismatch between state-of-the-art multi-modal models (huge MoE doing lots of inference time compute using scaffolding/CoT) that make sense for many applications and the constraints of a drone if it needs to run locally and produce fast outputs.
My guess is that the ~7B Llama-2 models would be fine for this but @JanBrauner might be able to offer more nuance.
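As a rough illustration of the reduced-precision point above, here’s a minimal sketch of loading a ~7B Llama-2 model in 4-bit with Hugging Face transformers and bitsandbytes (the model id and config are placeholders, not a tested drone setup):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-2-7b-chat-hf"  # placeholder; requires access approval
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,  # 4-bit weights cut memory roughly 4x vs fp16
    device_map="auto",
)
```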
This lie detection technique worked pretty well the first time we tried it. We also looked at using a second model to “interrogate” the first model (i.e. the model suspected of lying). That approach worked less well, but we didn’t push it very hard.
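To make the structure of the technique concrete: after the suspected lie, we ask a fixed list of unrelated follow-up questions and feed the yes/no answer pattern to a simple classifier. The sketch below uses made-up answer patterns purely to show the shape of the pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Made-up placeholder data: each row is the pattern of yes(1)/no(0) answers a
# suspect model gave to a fixed list of unrelated follow-up questions, and the
# label says whether its original answer was a lie.
X_train = np.array([
    [1, 0, 1, 1, 0],   # answers following an honest response
    [0, 1, 0, 0, 1],   # answers following a lie
    [1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1],
])
y_train = np.array([0, 1, 0, 1])

detector = LogisticRegression().fit(X_train, y_train)

# Score a new dialogue by its follow-up answer pattern.
p_lie = detector.predict_proba(np.array([[0, 1, 0, 0, 1]]))[0, 1]
print(f"P(lied) = {p_lie:.2f}")
```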
I address the motivations for our Reversal Curse paper in a reply to your other comment.
My current (highly speculative) guess is that humans do learn one-directionally. We can’t easily recite poems backwards line-by-line or word-by-word or phoneme-by-phoneme. We can’t understand such reversed language either. It’s easy to count down (because we practice that) but harder to do the alphabet backwards (because we don’t practice it). Mostly when we memorize facts that are 2-way (unlike poems), we do some minimal amount of reflection/repetition that means both AB and BA are present. E.g. repeating to ourselves “casa, house, casa, house, etc...”. For facts we read passively in newspapers, it’s trickier to think about because we retain relatively little. But my guess is that most facts that we retain at all will be ones that appear in both orders, though that won’t be necessary for us to learn them (because we can reflect on them ourselves).
[If we don’t understand the semantics of what we are hearing at all, then we don’t memorize. E.g. Americans might hear a lot of Spanish on the streets but memorize basically nothing.]
You want to make it clear to the LLM what the task is (multiplying n-digit numbers is clear but “doing hard math questions” is vague) and also have some variety of difficulty levels (within LLMs and between LLMs) and a high ceiling. I think this would take some iteration at least.
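For illustration, a minimal sketch of a task generator along these lines (clear task, difficulty graded by digit count, ceiling as high as you like):

```python
import random

def make_question(n_digits: int, rng: random.Random) -> dict:
    # Sample two n-digit operands; difficulty scales with n_digits.
    a = rng.randint(10 ** (n_digits - 1), 10 ** n_digits - 1)
    b = rng.randint(10 ** (n_digits - 1), 10 ** n_digits - 1)
    return {
        "prompt": f"What is {a} * {b}? Answer with just the number.",
        "answer": str(a * b),
        "difficulty": n_digits,
    }

rng = random.Random(0)
# 20 questions at each difficulty level from 1-digit up to 9-digit operands.
dataset = [make_question(d, rng) for d in range(1, 10) for _ in range(20)]
```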