Can somebody ELI5 how much I should update on the recent SAE = dead salmon news?
On priors I would expect the SAE bear news to be overblown. 50% of mechinterp is SAEs—a priori, it seems unlikely to me that so many talented people went astray. But I’m an outsider and curious about alternate views.
I have not updated much on these results so far, though I haven't looked at them in detail yet. My guess is that if you already had a view of SAE-style interpretability somewhat similar to mine [1,2], these papers shouldn't be much of an additional update for you.
(Context: https://x.com/davidad/status/1885812088880148905, i.e. some papers were just published that strongly question whether SAEs learn anything meaningful, just as the dead salmon study questioned the value of much of fMRI research.)
(About half a year ago I had a thought along the lines of “gosh, it would be good for interp research if people doing interp were at least somewhat familiar with philosophy of mind … not that it would necessarily teach them anything object-level useful for the kind of research they’re doing, but at least it would show them which chains of thought are blind alleys, because they seem to be repeating some of the same mistakes as 20th-century philosophers” (I don’t remember what mistakes exactly, but I think something to do with representations). Well, perhaps not just philosophy of mind.)
I agree with Leo Gao here:
https://x.com/nabla_theta/status/1885846403785912769

always good to get skeptical takes on SAEs, though imo this result is because of problems with SAE evaluation methodology. I would strongly bet that well trained SAE features on random nets are qualitatively much worse than ones on real LMs.
Well, maybe we did go astray, but it’s not for any reasons mentioned in this paper!
SAEs have been trained on random weights since Anthropic’s first SAE paper in 2023:

To assess the effect of dataset correlations on the interpretability of feature activations, we run dictionary learning on a version of our one-layer model with random weights. The resulting features are here, and contain many single-token features (such as “span”, “file”, ”.”, and “nature”) and some other features firing on seemingly arbitrary subsets of different broadly recognizable contexts (such as LaTeX or code). However, we are unable to construct interpretations for the non-single-token features that make much sense and invite the reader to examine feature visualizations from the model with randomized weights to confirm this for themselves. We conclude that the learning process for the model creates a richer structure in its activations than the distribution of tokens in the dataset alone.
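For concreteness, here’s a minimal sketch (my own, not Anthropic’s actual setup) of that kind of random-weights control: train the same SAE recipe on activations from the trained model and from a reinitialized copy, then compare the resulting features. `get_activations`, `trained_model`, and `reinit_model` are hypothetical placeholders for however you extract residual-stream activations.

```python
# Minimal SAE-on-random-weights control sketch (illustrative, not Anthropic's code).
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_dict: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_dict)
        self.decoder = nn.Linear(d_dict, d_model)

    def forward(self, x):
        f = torch.relu(self.encoder(x))  # sparse feature activations
        return self.decoder(f), f

def train_sae(acts: torch.Tensor, d_dict: int, l1_coeff: float = 1e-3,
              steps: int = 10_000, batch: int = 4096) -> SparseAutoencoder:
    """Reconstruction loss plus an L1 sparsity penalty on a buffer of activations."""
    sae = SparseAutoencoder(acts.shape[-1], d_dict)
    opt = torch.optim.Adam(sae.parameters(), lr=1e-4)
    for _ in range(steps):
        x = acts[torch.randint(0, len(acts), (batch,))]
        recon, f = sae(x)
        loss = (recon - x).pow(2).mean() + l1_coeff * f.abs().sum(-1).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return sae

# Hypothetical usage -- the control is just running the identical recipe twice:
# acts_trained = get_activations(trained_model, tokens)
# acts_random  = get_activations(reinit_model, tokens)
# sae_trained  = train_sae(acts_trained, d_dict=16_384)
# sae_random   = train_sae(acts_random, d_dict=16_384)
```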
In my first SAE feature post, I show a clearly positional feature (see the visualization in that post), which is not a feature you’ll find in an SAE trained on a randomly initialized transformer.
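If you want a rough way to check that kind of claim yourself, here is my own simplification (not the method from that post): pool one feature’s activations across sequences and look at how strongly they depend on token position. `feature_acts` is a hypothetical [n_sequences, seq_len] array of a single SAE feature’s activations.

```python
# Rough positionality check for a single SAE feature (illustrative sketch).
import numpy as np

def position_profile(feature_acts: np.ndarray) -> np.ndarray:
    """Mean activation at each token position, pooled over sequences."""
    return feature_acts.mean(axis=0)

def positionality_score(feature_acts: np.ndarray) -> float:
    """Fraction of the activation variance explained by position alone."""
    per_position = position_profile(feature_acts)  # shape [seq_len]
    total_var = feature_acts.var()
    if total_var == 0:
        return 0.0
    return float(per_position.var() / total_var)

# A clearly positional feature has a spiky position_profile and a high
# positionality_score; a token-identity feature's profile is roughly flat.
```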
The auto-interp metric is likely similar because SAEs on random weights still have single-token features (i.e. features that activate on one token). Single-token features are the easiest to auto-interp, since the hypothesis is just “activates on this token”, which is easy for an LLM to predict.
When you look at the sampled random-weights features in their appendix, all three are single-token features.
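As a toy illustration of why single-token features are close to free points for auto-interp (a simplified stand-in for a real auto-interp scorer, not the paper’s pipeline): the explanation “activates on token T” can be simulated by exact string matching, so it lines up with the true activations almost by construction, regardless of whether the underlying model was trained.

```python
# Simplified auto-interp-style scoring of a "fires on this token" explanation.
import numpy as np

def score_single_token_explanation(tokens: list[str], true_acts: np.ndarray,
                                   trigger: str) -> float:
    """Correlation between the simulated explanation and the real activations."""
    simulated = np.array([1.0 if t == trigger else 0.0 for t in tokens])
    if simulated.std() == 0 or true_acts.std() == 0:
        return 0.0
    return float(np.corrcoef(simulated, true_acts)[0, 1])

# Hypothetical example: a feature that only fires on "nature" scores ~1.0.
tokens = ["the", "nature", "of", "code", "nature", "span"]
acts = np.array([0.0, 3.1, 0.0, 0.0, 2.8, 0.0])
print(score_single_token_explanation(tokens, acts, "nature"))
```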
However, I do want to clarify that their paper is still novel (they ran random-weight and control comparisons across many layers of Pythia 410M) and includes many other experiments: it’s a valid contribution to the field, imo.
Also, to be clear: SAEs aren’t perfect, and there’s a recent paper on their problems (which I don’t think captures all of them). I’m really glad Apollo has diversified away from SAEs by pursuing their weight-based interp approach (which I think is currently underrated karma-wise by 3x).
Specifically re: “SAEs can interpret random transformers”
Based on replies from Adam Karvonen, Sam Marks, and other interp people on Twitter: the results are valid, but can be partially explained by the auto-interp pipeline used. See Karvonen’s reply here: https://x.com/a_karvonen/status/1886209658026676560?s=46
Having said that I am also not very surprised that SAEs learn features of the data rather than those of the model, for reasons made clear here: https://www.lesswrong.com/posts/gYfpPbww3wQRaxAFD/activation-space-interpretability-may-be-doomed
I haven’t thought deeply about this specific case, but I think you should consider this like any other ablation study—like, what happens if you replace the SAE with a linear probe?
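For example, one version of that ablation (my own sketch, using a scikit-learn logistic-regression probe; the activation and label arrays are hypothetical placeholders): fit a linear probe for some labeled token property on the same activations you would have fed to the SAE, and run it on both the trained and the reinitialized model as above.

```python
# Linear-probe baseline on the same activations an SAE would be trained on.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def probe_accuracy(acts: np.ndarray, labels: np.ndarray) -> float:
    """Held-out accuracy of a linear probe for some labeled token property."""
    X_tr, X_te, y_tr, y_te = train_test_split(acts, labels, test_size=0.2,
                                              random_state=0)
    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return probe.score(X_te, y_te)

# Hypothetical comparison, mirroring the random-weights SAE control:
# probe_accuracy(acts_trained, labels) vs probe_accuracy(acts_random, labels)
```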