Research Engineer at DeepMind, focused on mechanistic interpretability and large language models. Opinions are my own.
Tom Lieberum
I disagree with your intuition that we should not expect networks at irreducible loss to be in superposition.
The reason I brought this up is that there are, IMO, strong first-principles reasons why SPH should be correct. Say there are two features, each of which has an independent probability of 0.05 of being present in a given data point; then it would be wasteful to allocate a full neuron to each of these features. The probability of both features being present at the same time is a mere 0.0025. If the superposition is implemented well you get basically two features for the price of one, with an error rate of 0.25%. So if there is even a slight pressure towards compression, e.g. by having fewer available neurons than features, then superposition should be favored by the network.
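A quick sanity check of the arithmetic (a toy calculation, nothing model-specific — the 0.05 is just the sparsity from the example above):

```python
p = 0.05        # each feature independently present with probability 0.05
both = p * p    # chance both features co-occur in the same data point
print(both)     # ~0.0025, i.e. a 0.25% interference rate

# two features for the price of one neuron, paying only this error rate
features_per_neuron = 2
print(features_per_neuron, "features, interference rate", both)
```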
Now does this toy scenario map to reality? I think it does, and in some sense it is even more favorable to SPH since often the presence of features will be anti-correlated.
Interesting idea!
What do you think about the Superposition Hypothesis? If that were true, then at a sufficient sparsity of features in the input there is no basis the network is "thinking in", meaning it will be impossible to find a rotation matrix that allows for a bijective mapping between neurons and features.
I would assume that the rotation matrix that enables local changes via the sparse Jacobian coincides with one which maximizes some notion of “neuron-feature-bijectiveness”. But as noted above that seems impossible if the SPH holds.
K-composition as a concept was introduced by Anthropic in their work on Transformer Circuits in the initial post. In general, the output of an attention head in an earlier layer can influence the query, key, or value computation of an attention head in a later layer.
K-composition refers to the case in which the key-computation is influenced. In a model without nonlinearities or layernorms you can do this simply by looking at how strongly the output matrix of head 1 and the key matrix of head 2 compose (or more precisely, by looking at the Frobenius norm of the product relative to the product of the individual norms). I also tried to write a bit about it here.
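A rough sketch of that norm-ratio measure (the matrix names and shapes here are illustrative stand-ins, not taken from any particular model — in a real transformer you would plug in head 1's output matrix and head 2's key matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_head = 64, 16

# W_O: writes head 1's output back into the residual stream
# W_K: reads the residual stream into head 2's key computation
W_O = rng.standard_normal((d_model, d_head))
W_K = rng.standard_normal((d_head, d_model))

def k_composition(W_K, W_O):
    # Frobenius norm of the composed map, relative to the product of the
    # individual Frobenius norms (so the score is scale-invariant)
    return np.linalg.norm(W_K @ W_O) / (np.linalg.norm(W_K) * np.linalg.norm(W_O))

score = k_composition(W_K, W_O)
print("K-composition score:", score)
```

Since the Frobenius norm is submultiplicative, the score lies in (0, 1]; random matrices give a small baseline value, and heads that genuinely compose should stand out above it.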
Thanks for verifying! I retract my comment.
I think historically "reinforcement" has been used more in that particular constellation (see e.g. the deep RL from human preferences paper), but as I noted I find "reward learning" more apt, as it points to the hard thing being the reward learning, i.e. distilling human feedback into an objective, rather than the optimization of any given reward function (which technically need not involve reinforcement learning).
Well, I thought about that, but I wasn't sure whether reinforcement learning from human feedback wouldn't just be a strict subset of reward learning from human feedback. If "reinforcement" is indeed the strict definition then I concede, but I don't think it makes sense.
Reward Learning from Human Feedback
Thanks for your reply! I think I basically agree with all of your points. I feel a lot of frustration around the fact that we don’t seem to have adequate infohazard policies to address this. It seems like a fundamental trade-off between security and openness/earnestness of discussion does exist though.
It could be the case that this community is not the correct place to enforce these rules, as there does still exist a substantial gap between "this thing could work" and "we have a working system". This is doubly true in DL, where implementation details matter a great deal.
I’d like to propose not talking publicly about ways to “fix” this issue. Insofar as these results spell trouble for scaling up LLMs, this is a good thing!
Infohazard (meta-)discussions are thorny by their very nature and I don’t want to discourage discussions around these results in general, e.g. how to interpret them or whether the analysis has merits.
If the subset of interpretable models is also “nice” in the differential-geometric sense (say, also a smooth submanifold), then the intersection is also similarly “nice.”
Do you have any intuition for why we should expect this set to be “nice”? I’m not super familiar with differential geometry, but I don’t really see why this should be the case.
This assumes a fixed scaling law. One possible way of improving oneself could be to design a better architecture with a better scaling exponent.
Thanks for elaborating! Insofar as your assessment is based on in-person interactions, I can’t really comment, since I haven’t spoken much with people from Anthropic.
I think there are degrees to believing this meme you refer to, in the sense of “we need an AI of capability level X to learn meaningful things”. And I would guess that many people at Anthropic do believe this weaker version—it’s their stated purpose after all. And for some values of X this statement is clearly true, e.g. learned filters of shallow CNNs trained on MNIST are not interpretable, whereas the filters of deep Inception-style CNNs trained on ImageNet are (mostly) interpretable.
One could argue that parts of interpretability do need to happen in a serial manner, e.g. finding out the best way to interpret transformers at all, the recent SoLU finding, or just generally building up knowledge on how to best formalize or go about this whole interpretability business. If that is true, and furthermore interpretability turns out to be an important component in promising alignment proposals, then the question is mostly about what level of X gives you the most information to advance the serial interpretability research in terms of how much other serial budget you burn.
I don’t know whether people at Anthropic believe the above steps or have thought about it in these ways at all but if they did this could possibly explain the difference in policies between you and them?
I’d also be interested in hearing which parts of Anthropic’s research output you think burns our serial time budget. If I understood the post correctly, then OP thinks that efforts like transformer circuits are mostly about accelerating parallelizable research.
Maybe OP thinks that
mechanistic interpretability has little value in terms of serial research
RLHF does not give us alignment (because it doesn’t generalize beyond the “sharp left turn” which OP thinks is likely to happen)
therefore, since most of Anthropic’s alignment-focused output has little value in terms of serial research, and it does somewhat enhance present-day LLM capabilities/usability, it is net negative?
But I’m very much unsure whether OP really believes this—would love to hear him elaborate.
ETA: It could also be the case that OP was exclusively referring to the part of Anthropic that is about training LLMs efficiently as a pre-requisite to study those models?
Yep all good points. I think I didn’t emphasize enough that you should not take it every day (maybe not even every other day).
The gums are less addictive than cigs because they taste bad and because the feedback/reinforcement is slower. Lozenges sound like a good alternative too, to be extra sure.
I wouldn’t recommend regular caffeine at all unless you know from experience that you won’t develop a physical dependency. In my experience you get a short-term gain until your body adapts and then requires coffee just to function normally.
If you do want to try caffeine, I recommend pairing it with L-theanine (either in pills or green tea), which is supposed to smooth the experience and make for a cleaner high (YMMV).
If you’re looking for a stimulant that you don’t take regularly and that has a shorter half-life, consider nicotine gums. Again YMMV; I think gwern has tried it with little effect. Beware the addictive potential (although it is lower than with cigarettes or vapes).
On priors, I wouldn’t worry too much about c), since I would expect a ‘super stimulus’ for head A to not be a super stimulus for head B.
I think one of the problems is the discrete input space, i.e. how do you parameterize the sequence that is being optimized?
One idea I just had was trying to fine-tune an LLM with a reward signal given by, for example, the magnitude of the residual delta coming from a particular head (we probably want something else here, maybe net logit change?). The LLM then already encodes a prior over “sensible” sequences and will try to find one of those which activates the head strongly (however we want to operationalize that).
Very cool to see new people joining the interpretability field!
Some resource suggestions:
If you didn’t know already, there is a TF2 port of Lucid, called Luna.
There is also Lucent, which is Lucid for PyTorch: (Some docs written by me for a slightly different version)
For transformer interpretability you might want to check out Anthropic’s work on transformer circuits, Redwood Research’s interpretability tool, or (shameless plug) Unseal.
I can’t speak to the option for remote work but as a counterpoint, it seems very straightforward to get a UK visa for you and your spouse/children (at least straightforward relative to the US). The relevant visa to google is the Skilled Worker / Tier 2 visa if you want to know more.
ETA: Of course, there are still legitimate reasons for not wanting to move. Just wanted to point out that the legal barrier is lower than you might think.
There is definitely something out there, just can’t recall the name. A keyword you might want to look for is “disentangled representations”.
One start would be the beta-VAE paper https://openreview.net/forum?id=Sy2fzU9gl
I don’t think there is anything on that front other than the paragraphs in the SoLU paper. I alluded to a possible experiment for this on Twitter in response to that paper but haven’t had the time to try it out myself: You could train a tiny autoencoder to reconstruct some artificially generated data where you vary attributes such as sparsity, ratio of input dimensions vs. bottleneck dimensions, etc. You could then look at the weight matrices of the autoencoder to figure out how it’s embedding the features in the bottleneck and which settings lead to superposition, if any.
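A minimal sketch of that experiment, using a linear autoencoder with hand-written gradients (all sizes, the sparsity level, and hyperparameters are placeholder choices; probing the SoLU-style questions properly would need a nonlinearity and a sweep over these settings):

```python
import numpy as np

rng = np.random.default_rng(0)

n_features, d_hidden = 6, 3        # more features than bottleneck dims
batch, steps, lr = 256, 2000, 0.05
p_active = 0.05                    # sparsity: each feature independently present

W_enc = 0.1 * rng.standard_normal((d_hidden, n_features))
W_dec = 0.1 * rng.standard_normal((n_features, d_hidden))

def sample_batch():
    # sparse synthetic data: feature i active w.p. p_active, amplitude ~ U(0, 1)
    mask = rng.random((batch, n_features)) < p_active
    return mask * rng.random((batch, n_features))

losses = []
for _ in range(steps):
    x = sample_batch()             # (batch, n_features)
    h = x @ W_enc.T                # bottleneck activations, (batch, d_hidden)
    x_hat = h @ W_dec.T            # reconstruction
    err = x_hat - x
    losses.append(float((err ** 2).mean()))
    # gradients of the batch-averaged squared error
    g_dec = 2 * err.T @ h / batch
    g_enc = 2 * (err @ W_dec).T @ x / batch
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

# feature i is embedded along column i of W_enc; large off-diagonal cosine
# similarities between columns indicate features sharing bottleneck directions,
# i.e. superposition
cols = W_enc / np.linalg.norm(W_enc, axis=0, keepdims=True)
interference = cols.T @ cols - np.eye(n_features)
print("loss start/end:", losses[0], losses[-1])
print("max |cos sim| between feature directions:", np.abs(interference).max())
```

Varying `p_active` and the `n_features`/`d_hidden` ratio, and reading off the interference matrix, is the "which settings lead to superposition" part of the experiment.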