Thanks for copying this over!
For what it’s worth, my current view on SAEs is that they remain a pretty neat unsupervised technique for making (partial) sense of activations, but they fit more into the general category of unsupervised learning techniques, e.g. clustering algorithms, than as a method that’s going to discover the “true representational directions” used by the language model. And, as such, they share many of the pros and cons of unsupervised techniques in general:[1]
(Pros) They may be useful / efficient for getting a first-pass understanding of what’s going on in a model / with some data (indeed many of their success stories have this flavour).
(Cons) They are hit and miss—often not carving up the data in the way you’d prefer, with weird omissions or gerrymandered boundaries you need to manually correct for. Once you have a hypothesis, a supervised method will likely give you better results.
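To make the “unsupervised decomposition, much like clustering” framing concrete, here is a minimal sketch of the kind of sparse autoencoder being discussed, trained on a stand-in batch of activations. The dimensions, the plain L1 penalty, and the random data are all illustrative assumptions rather than the setup of any particular paper.

```python
import torch
import torch.nn as nn

# Toy stand-in for residual-stream activations; in practice these would be
# collected by running a language model over a text corpus.
d_model, d_sae, n_samples = 512, 4096, 10_000
acts = torch.randn(n_samples, d_model)

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_sae: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_sae)
        self.decoder = nn.Linear(d_sae, d_model)

    def forward(self, x):
        f = torch.relu(self.encoder(x))  # sparse feature activations
        x_hat = self.decoder(f)          # reconstruction of the input
        return x_hat, f

sae = SparseAutoencoder(d_model, d_sae)
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)
l1_coeff = 1e-3  # sparsity penalty strength (arbitrary toy value)

for step in range(200):
    batch = acts[torch.randint(0, n_samples, (256,))]
    x_hat, f = sae(batch)
    # Reconstruction loss plus an L1 penalty that encourages sparse codes.
    loss = ((x_hat - batch) ** 2).mean() + l1_coeff * f.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Nothing in this objective forces the learned decoder directions to line up with whatever basis the model “really” uses; it just finds a sparse code that reconstructs the activations well, which is the sense in which it is closer to clustering than to recovering the “true representational directions”.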
I think this means SAEs could still be useful for generating hypotheses when trying to understand model behaviour, and I really like the CLT papers in this regard.[2] However, it’s still unclear whether they are better for hypothesis generation than alternative techniques, particularly techniques that have other advantages, like the ability to be used with limited model access (i.e. black-box techniques) or techniques that don’t require paying a large up-front cost before they can be used on a model.
I largely agree with your updates 1 and 2 above, although on 2 I still think it’s plausible that while many “why is the model doing X?” type questions can be answered with black-box techniques today, this may not continue to hold into the future, which is why I still view interp as a worthwhile research direction. This does make it important though to always try strong baselines on any new project and only get excited when interp sheds light on problems that genuinely seem hard to solve using these baselines.[3]
When I say unsupervised learning, I’m using this term in its conventional sense, e.g. clustering algorithms, manifold learning, etc.; not in the sense of tasks like language model pre-training, which I sometimes see referred to as unsupervised.
Particularly their emphasis on techniques for pruning massive attribution graphs, on improving tooling for making sense of the results, and on accepting that some manual adjustment of the decompositions produced by CLTs may be necessary because we’re giving up on the idea that CLTs / SAEs are uncovering a “true basis”.
And it does seem that black box methods often suffice (in the sense of giving “good enough explanations” for whatever we need these explanations for) when we try to do this. Though this could just be—as you say—because of bad judgement. I’d definitely appreciate suggestions for better downstream tasks we should try!
I agree with most of this, especially:

SAEs [...] remain a pretty neat unsupervised technique for making (partial) sense of activations, but they fit more into the general category of unsupervised learning techniques, e.g. clustering algorithms, than as a method that’s going to discover the “true representational directions” used by the language model.
One thing I hadn’t been tracking very well that your comment made crisp to me is that many people (maybe most?) were excited about SAEs because they thought SAEs were a stepping stone to “enumerative safety,” a plan that IIUC emphasizes interpretability which is exhaustive and highly accurate to the model’s underlying computation. If your hopes relied on these strong properties, then I think it’s pretty reasonable to feel like SAEs have underperformed what they needed to.
Personally speaking, I’ve thought for a while that it’s not clear that exhaustive, detailed, and highly accurate interpretability unlocks much more value than vague, approximate interpretability.[1] In other words, I think that if interpretability is ever going to be useful, then shitty, vague interpretability should already be useful. Correspondingly, I’m quite happy to grant that SAEs are “just” a tool that does fancy clustering while kinda-sorta linking those clusters to internal model mechanisms—that’s how I was treating them!
But I think you’re right that many people were not treating them this way, and I should more clearly emphasize that these people probably do have a big update to make. Good point.
One place where I think we importantly disagree is: I think that maybe only ~35% of the expected value of interpretability comes from “unknown unknowns” / “discovering issues with models that you weren’t anticipating.” (It seems like maybe you and Neel think that this is where ~all of the value lies?)
Rather, I think that most of the value lies in something more like “enabling oversight of cognition, despite not having data that isolates that cognition.” In more detail, I think that some settings have structural properties that make it very difficult to use data to isolate undesired aspects of model cognition. A prosaic example is spurious correlations, assuming that there’s something structural stopping you from just collecting more data that disambiguates the spurious cue from the intended one. Another example: It might be difficult to disambiguate the “tell the human what they think is the correct answer” mechanism from the “tell the human what I think is the correct answer” mechanism. I write about this sort of problem, and why I think interpretability might be able to address it, here. And AFAICT, I think it really is quite different—and more plausibly interp-advantaged—than “unknown unknowns”-type problems.
To illustrate the difference concretely, consider the Bias in Bios task that we applied SHIFT to in Sparse Feature Circuits. Here, IMO the main impressive thing is not that interpretability is useful for discovering a spurious correlation. (I’m not sure that it is.) Rather, it’s that—once the spurious correlation is known—you can use interp to remove it even if you do not have access to labeled data isolating the gender concept.[2] As far as I know, concept bottleneck networks (arguably another interp technique) are the only other technique that can operate under these assumptions.
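For readers who haven’t seen the paper, a very rough sketch of the shape of this recipe follows; it is not the actual Sparse Feature Circuits implementation, and every name, number, and scoring rule in it is a hypothetical placeholder. The load-bearing step is the third one: human judgement about what a feature represents substitutes for labeled data isolating the unwanted concept.

```python
import numpy as np

# Hypothetical setup: SAE feature activations for each example and labels for
# the task we *can* supervise (profession), but no labels for the unintended
# concept (gender). All names and numbers here are illustrative placeholders.
rng = np.random.default_rng(0)
n_examples, n_features = 1000, 64
sae_feats = rng.normal(size=(n_examples, n_features))
profession_labels = rng.integers(0, 2, size=n_examples)

def train_linear_probe(X, y):
    """Least-squares stand-in for the downstream classifier."""
    w, *_ = np.linalg.lstsq(X, y.astype(float), rcond=None)
    return w

# 1. Train a classifier on the available (spuriously correlated) data.
w = train_linear_probe(sae_feats, profession_labels)

# 2. Attribute the classifier's behaviour to individual SAE features.
#    A crude importance score here; the paper uses attribution methods.
importance = np.abs(w) * sae_feats.std(axis=0)
top_features = np.argsort(-importance)[:10]

# 3. A human inspects interpretations of the most important features and flags
#    the ones that track the unintended concept. No gender labels required.
def looks_gender_related(feature_idx: int) -> bool:
    return int(feature_idx) in {3, 7}  # placeholder for human judgement

flagged = [int(i) for i in top_features if looks_gender_related(i)]

# 4. Ablate the flagged features and retrain, so the classifier has to rely on
#    the intended (profession) signal instead.
sae_feats_ablated = sae_feats.copy()
sae_feats_ablated[:, flagged] = 0.0
w_debiased = train_linear_probe(sae_feats_ablated, profession_labels)
```

The point of the sketch is that nothing in it consumes gender labels; all of the concept-specific information enters through the human inspection step.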
Just to establish the historical claim about my beliefs here:
Here I described the idea that turned into SHIFT as “us[ing] vague understanding to guess which model components attend to features which are spuriously correlated with the thing you want, then use the rest of the model as an improved classifier for the thing you want”.
After Sparse Feature Circuits came out, I wrote in private communications to Neel: “a key move I did when picking this project was ‘trying to figure out what cool applications were possible even with small amounts of mechanistic insight.’ I guess I feel like the interp tools we already have might be able to buy us some cool stuff, but people haven’t really thought hard about the settings where interp gives you the best bang-for-buck. So, in a sense, doing something cool despite our circuits not being super-informative was the goal”.
In April 2024, I described a core thesis of my research as being “maybe shitty understanding of model cognition is already enough to milk safety applications out of.”
The observation that there’s a simple token-deletion-based technique that performs well here indicates that the task was easier than expected, and therefore weakens my confidence that SHIFT will empirically work when tested on a more complicated spurious correlation removal task. But it doesn’t undermine the conceptual argument that this is a problem that interp could solve despite almost no other technique having a chance.
Rather, I think that most of the value lies in something more like “enabling oversight of cognition, despite not having data that isolates that cognition.”

Is this a problem you expect to arise in practice? I don’t really expect it to arise, if you’re allowing for a significant amount of effort in creating that data (since I assume you’d also be putting a significant amount of effort into interpretability).