
# Investigating the Ability of LLMs to Recognize Their Own Writing

30 Jul 2024 15:41 UTC
20 points
• It’s interesting how Llama 2 is the most linear: it keeps track of a wider range of lengths. GPT-4, by contrast, transitions immediately from long to short around 5-8 characters, presumably because humans will consider any word above ~8 characters “long.”

• Have you tried this procedure starting with a steering vector found using a supervised method?

It could be that there are only a few “true” feature directions (like what you would find with a supervised method), and the MELBO vectors are vectors that happen to have a component in the “true” direction. As long as none of the vectors in the set you are staying orthogonal to is exactly the true vector(s), you can keep finding different orthogonal vectors that each carry some sufficient amount of the feature you actually want.

This would predict:

• Summing/averaging your vectors produces a reasonable steering vector for the behavior (provided it is rescaled to an effective norm)

• Starting with a supervised steering vector enables you to generate fewer orthogonal vectors with the same effect

• (Maybe) The sum of your successful MELBO vectors is similar to the supervised steering vector (e.g. the mean difference in activations on code/prose contrast pairs)
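To make the first prediction concrete, here is a toy numpy sketch (all names and numbers hypothetical, not anyone’s actual setup): vectors that each share a component along one “true” direction average to something much closer to that direction, because the orthogonal remainders cancel.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # toy residual-stream width

# Hypothetical "true" feature direction (what a supervised method would find).
true_dir = rng.normal(size=d)
true_dir /= np.linalg.norm(true_dir)

def make_melbo_like_vector(signal=0.6):
    """A unit vector with a fixed component along true_dir plus an
    orthogonal remainder (a stand-in for one MELBO vector)."""
    noise = rng.normal(size=d)
    noise -= (noise @ true_dir) * true_dir   # orthogonalize the remainder
    noise /= np.linalg.norm(noise)
    return signal * true_dir + np.sqrt(1 - signal**2) * noise

vectors = np.stack([make_melbo_like_vector() for _ in range(20)])

# Averaging cancels the orthogonal remainders, so the mean should align
# with the true direction much better than any single vector does.
mean_vec = vectors.mean(axis=0)
mean_vec /= np.linalg.norm(mean_vec)

single_sims = vectors @ true_dir   # each is exactly 0.6 by construction
mean_sim = mean_vec @ true_dir
print(f"single-vector cosine sim:   {single_sims.mean():.2f}")
print(f"averaged-vector cosine sim: {mean_sim:.2f}")
```

In a real experiment the averaged vector would of course still need rescaling to an effective norm before being used for steering.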

# Jailbreak steering generalization

20 Jun 2024 17:25 UTC
38 points
(arxiv.org)

Contra Chollet, I think that current LLMs are well described as doing at least some useful learning when doing in-context learning.

I agree that Chollet appears to imply that in-context learning doesn’t count as learning when he states:

Most of the time when you’re using an LLM, it’s just doing static inference. The model is frozen. You’re just prompting it and getting an answer. The model is not actually learning anything on the fly. Its state is not adapting to the task at hand.

(This seems misguided as we have evidence of models tracking and updating state in activation space.)

However, later in the Dwarkesh interview, he says:

Discrete program search is very deep recombination with a very small set of primitive programs. The LLM approach is the same but on the complete opposite end of that spectrum. You scale up the memorization by a massive factor and you’re doing very shallow search. They are the same thing, just different ends of the spectrum.

My steelman of Chollet’s position is that he thinks the depth of search you can perform via ICL in current LLMs is too shallow, which means they rely much more on learned mechanisms that require comparatively less runtime search/computation but inherently limit generalization.

I think the directional claim “you can easily overestimate LLMs’ generalization abilities by observing their performance on common tasks” is correct—LLMs are able to learn very many shallow heuristics and memorize much more information than humans, which allows them to get away with doing less in-context learning. However, it is also true that this may not limit their ability to automate many tasks, especially with the correct scaffolding, or stop them from being dangerous in various ways.

• Husák is walking around Prague, picking up rocks and collecting them in his pockets while making strange beeping sounds. His assistant gets worried about his mental health. He calls Moscow and explains the situation. Brezhnev says: “Oh shit! We must have mixed up the channel to the Lunokhod again!”

very funny

# Soviet comedy film recommendations

9 Jun 2024 23:40 UTC
42 points
(open.substack.com)
• The direction extracted using the same method will vary per layer, but this doesn’t mean the correct feature direction itself varies that much; rather, it cannot be extracted using a linear function of the activations at layers that are too early or too late.
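A toy illustration of this point, with entirely synthetic activations (the per-layer encoding strengths below are made up): applying the same mean-difference extraction at every layer recovers the one fixed feature direction well only at layers that encode it linearly.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, n_layers = 32, 500, 6

labels = rng.integers(0, 2, size=n)       # contrast labels for the pairs
feature = rng.normal(size=d)
feature /= np.linalg.norm(feature)        # the single fixed "true" feature

# Hypothetical per-layer encoding strengths: the feature is only
# linearly present in the middle layers.
strength = np.array([0.0, 0.2, 1.0, 1.0, 0.5, 0.1])
acts = np.stack([
    rng.normal(size=(n, d)) + s * labels[:, None] * feature
    for s in strength
])                                        # shape: (layer, example, d)

def mean_diff_direction(layer_acts):
    """The same extraction method at every layer: difference of class means."""
    v = layer_acts[labels == 1].mean(axis=0) - layer_acts[labels == 0].mean(axis=0)
    return v / np.linalg.norm(v)

dirs = np.stack([mean_diff_direction(acts[l]) for l in range(n_layers)])
sims = dirs @ feature
for l, s in enumerate(sims):
    print(f"layer {l}: cosine(extracted, true feature) = {s:+.2f}")
```

The extracted direction drifts across layers even though `feature` never changes; only the linear accessibility of the feature varies.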

• 28 Apr 2024 11:21 UTC
LW: 17 AF: 10
in reply to: Dan H’s comment

We do weight editing in the RepE paper (that’s why it’s called RepE instead of ActE)

I looked at the paper again and couldn’t find anywhere that you do the type of weight editing this post describes (extracting a representation and then, without optimization, changing the weights such that they cannot write to that direction).

The LoRRA approach mentioned in RepE finetunes the model to change its representations, which is different.
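For contrast, the optimization-free kind of edit described above can be sketched in a few lines (toy matrices, not anyone’s actual implementation): project the extracted direction out of an output weight matrix, so that component can no longer be written into the residual stream.

```python
import numpy as np

rng = np.random.default_rng(2)
d_model, d_mlp = 64, 256   # toy dimensions

# Hypothetical extracted direction r (unit norm) that we want to ablate.
r = rng.normal(size=d_model)
r /= np.linalg.norm(r)

# Toy "output" weight matrix that writes into the residual stream.
W_out = rng.normal(size=(d_mlp, d_model))

# Optimization-free edit: subtract each row's component along r,
# so the layer can no longer write to the r direction.
W_edited = W_out - np.outer(W_out @ r, r)

# Any activation pushed through the edited matrix has zero r-component.
h = rng.normal(size=d_mlp)
residual_write = h @ W_edited
print("component along r after edit:", residual_write @ r)
```

No finetuning is involved: the edit is a single closed-form projection of the weights, which is what distinguishes it from LoRRA-style approaches.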

• 28 Apr 2024 7:25 UTC
LW: 17 AF: 12
in reply to: Dan H’s comment

I agree you investigate a bunch of the stuff I mentioned generally somewhere in the paper, but did you do this for refusal-removal in particular? I spent some time on this problem before and noticed that full refusal ablation is hard unless you get the technique/vector right, even though it’s easy to reduce refusal or add in a bunch of extra refusal. That’s why investigating all the technique parameters in the context of refusal in particular is valuable.

• 27 Apr 2024 23:05 UTC
LW: 21 AF: 11
in reply to: Dan H’s comment

FWIW, I published this Alignment Forum post on activation steering to bypass refusal (albeit an early variant that reduces coherence too much to be useful), which from what I can tell is the earliest work on linear residual-stream perturbations to modulate refusal in RLHF LLMs.

I think this post is novel compared to both my work and RepE because they:

• Demonstrate full ablation of the refusal behavior with much less effect on coherence/other capabilities compared to normal steering

• Investigate projection thoroughly as an alternative to sweeping over vector magnitudes (rather than just stating that this is possible)

• Find that using harmful/harmless instructions (rather than harmful vs. harmless/refusal responses) to generate a contrast vector is the most effective choice (whereas other works try one or the other), and also investigate which token position to extract the representation from

• Find that projecting away the (same, linear) feature at all layers improves upon steering at a single layer, which is different from standard activation steering

• Test on many different models

• Describe a way of turning this into a weight-edit
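The projection-at-all-layers idea in the list above can be sketched as follows (a toy stand-in for a forward pass, not the post’s actual code): the same unit direction is projected out of the residual stream after every block, rather than a steering vector being added at a single layer.

```python
import numpy as np

rng = np.random.default_rng(3)
d, n_layers = 64, 8   # toy residual width and block count

# Hypothetical extracted refusal direction (unit norm).
refusal_dir = rng.normal(size=d)
refusal_dir /= np.linalg.norm(refusal_dir)

def project_out(x, direction):
    """Remove the component of x along a unit direction."""
    return x - (x @ direction) * direction

# Toy forward pass: each "block" just adds some vector to the residual
# stream; the same projection is applied after every block, mirroring
# all-layers ablation rather than steering at one layer.
resid = rng.normal(size=d)
for _ in range(n_layers):
    resid = resid + rng.normal(size=d)   # stand-in for a block's output
    resid = project_out(resid, refusal_dir)

print("refusal component at the end:", resid @ refusal_dir)
```

Because the projection is idempotent and applied everywhere, no magnitude sweep is needed, unlike additive steering where the vector scale must be tuned.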

Edit:

(Want to flag that I strong-disagree-voted with your comment, and am not in the research group—it is not them “dogpiling”)

I do agree that RepE should be included in a “related work” section of a paper, but generally people should be free to post research updates on LW/AF that don’t have a complete, thorough lit review / related work section. There are really very many activation-steering-esque papers/blogposts now, including refusal-bypassing-related ones, that all came out around the same time.

• I am contrasting generating an output by:

1. Modeling how a human would respond (“human modeling in output generation”)

2. Modeling what the ground-truth answer is

E.g. for common misconceptions: maybe most humans would hold a certain misconception (like that South America is west of Florida), but we want the LLM to realize that we want it to say how things actually are (given that it likely does represent this fact somewhere).

• I expect that if you average over more contrast pairs, as in CAA (https://arxiv.org/abs/2312.06681), more of the spurious features in the steering vectors are cancelled out, leading to higher-quality vectors and greater sparsity in the dictionary-feature domain. Did you find this?

• This is really cool work!!

In other experiments we’ve run (not presented here), the MSP is not well represented in the final layer but is instead spread out amongst earlier layers. We think this occurs because, in general, there are groups of belief states that are degenerate in the sense that they have the same next-token distribution. In that case, the formalism presented in this post says that even though the distinction between those states must be represented in the transformer’s internals, the transformer is able to lose those distinctions for the purpose of (locally) predicting the next token, which occurs most directly right before the unembedding.

Would be interested to see analyses where you show how an MSP is spread out amongst earlier layers.

Presumably, if the model does not discard intermediate results, something like concatenating residual-stream vectors from different layers and then linearly regressing onto the ground-truth belief-state-over-HMM-states vector would extract the same kind of structure you see when looking at the final layer. Maybe even with the same model you analyze, the structure would be crisper if you projected the full concatenated-over-layers residual stream, if there is noise in the final layer and the same features are represented more cleanly in earlier layers?

In cases where redundant information is discarded at some point, this is a harder problem of course.
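One way to picture the concatenation idea (a fully synthetic toy, not the authors’ setup): if each layer linearly encodes only part of the belief state, a linear readout from the concatenated-over-layers residual stream recovers the full state much better than a readout from the final layer alone.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, k = 400, 16, 3   # examples, per-layer width, belief-state dims

# Toy ground-truth belief states over k HMM states (rows sum to 1).
belief = rng.dirichlet(np.ones(k), size=n)

# Toy residual streams for k layers: each layer linearly encodes only
# ONE belief coordinate (plus noise), so no single layer carries the
# whole MSP.
layers = []
for i in range(k):
    enc = rng.normal(size=d)
    layers.append(belief[:, [i]] * enc[None, :] + 0.1 * rng.normal(size=(n, d)))

def r2(X, y):
    """R^2 of a least-squares linear readout from X to y."""
    X1 = np.hstack([X, np.ones((n, 1))])
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ coef
    return 1 - resid.var() / y.var()

final_only = r2(layers[-1], belief)
concat = r2(np.hstack(layers), belief)
print(f"R^2 from final layer only:    {final_only:.2f}")
print(f"R^2 from concatenated layers: {concat:.2f}")
```

As the comment notes, this only works when the intermediate results are not discarded; if later layers overwrite earlier information, concatenation cannot recover it either.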

• Profound!

• It’s possible that once my iron reserves were replenished through supplementation, the amount of iron needed to maintain adequate levels was lower, allowing me to maintain my iron status through diet alone. Iron is stored in the body in various forms, primarily in ferritin, and when levels are low, the body draws upon these reserves.

I’ll never know for sure, but the initial depletion of my iron reserves could have been due to a chest infection I had around that time (as infections can lead to decreased iron absorption and increased iron loss) or a period of unusually poor diet.

Once my iron reserves were replenished, my regular diet seemed to be sufficient to prevent a recurrence of iron deficiency, as the daily iron requirement for maintenance is lower than the amount needed to correct a deficiency.