Dictionary/SAE learning on model activations is a poor fit for anomaly detection because you need to train the dictionary on a dataset, which means the anomaly would have needed to be in the training set.
How could you do dictionary learning without a dataset? One possibility is to use uncertainty-estimation-like techniques to detect when the model “thinks it’s on-distribution” for randomly sampled activations.
You may be able to notice data points where the SAE performs unusually badly at reconstruction? (Which is what you’d see if a crucial feature were missing.)
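The reconstruction-error idea above could be sketched roughly as follows (assuming some already-trained SAE exposes `encode`/`decode` functions; the function names and the quantile threshold are illustrative, not from any particular codebase):

```python
import numpy as np

def reconstruction_errors(acts, encode, decode):
    """Per-example L2 reconstruction error under a (hypothetical) trained SAE."""
    recon = decode(encode(acts))
    return np.linalg.norm(acts - recon, axis=-1)

def flag_anomalies(errors, reference_errors, quantile=0.999):
    """Flag points whose error exceeds a high quantile of errors on known-normal data."""
    threshold = np.quantile(reference_errors, quantile)
    return errors > threshold
```

The threshold has to come from somewhere, which is exactly the problem the thread is pointing at: “unusually badly” is relative to errors on data you already trusted.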
Would you expect this to outperform doing the same thing with a non-sparse autoencoder (that has a lower latent dimension than the NN’s hidden dimension)? I’m not sure why it would, given that we aren’t using the sparse representations except to map them back (so any type of capacity constraint on the latent space seems fine). If dense autoencoders work just as well for this, they’d probably be more straightforward to train? (unless we already have an SAE lying around from interp anyway, I suppose)
A regular AE’s job is to throw away the information outside some low-dimensional manifold; a sparse ~linear AE’s job is to throw away the information not represented by sparse dictionary codes. (Also a low-dimensional manifold, I guess, just made from a different prior.)
If an AE is reconstructing poorly, that means it was throwing away a lot of information. How important that information is seems like a question about which manifold the underlying network “really” generalizes according to. And also what counts as an anomaly / what kinds of outliers you’re even trying to detect.
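To make the contrast above concrete, here is a minimal numpy sketch of the sparse-dictionary objective (ReLU encoder, linear decoder, L1 penalty on the codes); replacing the L1 term with a low latent dimension recovers the “regular AE” prior. Shapes and the coefficient are illustrative:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def sae_forward(x, W_enc, b_enc, W_dec):
    """Sparse codes via a ReLU encoder; reconstruction as a linear
    combination of dictionary rows (the decoder weights)."""
    codes = relu(x @ W_enc + b_enc)
    recon = codes @ W_dec
    return codes, recon

def sae_loss(x, W_enc, b_enc, W_dec, l1_coeff=1e-3):
    """Reconstruction error plus an L1 penalty that pushes the codes sparse."""
    codes, recon = sae_forward(x, W_enc, b_enc, W_dec)
    recon_term = ((x - recon) ** 2).sum(axis=-1).mean()
    sparsity_term = np.abs(codes).sum(axis=-1).mean()
    return recon_term + l1_coeff * sparsity_term
```

Either way, anomaly scores from `recon_term` only tell you the point left the learned manifold, not whether the discarded information mattered to the network.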
I think this is an important point, but IMO there are at least two types of candidates for using SAEs for anomaly detection (in addition to techniques that make sense for normal, non-sparse autoencoders):
Sometimes, you may have a bunch of “untrusted” data, some of which contains anomalies; you just don’t know which of the untrusted data points are anomalous. (In addition, you have some “trusted” data that is guaranteed not to have anomalies.) Then you could train an SAE on all data (including untrusted) and figure out what “normal” SAE features look like based on the trusted data.
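One crude instantiation of this scheme, sketched in numpy (it assumes SAE feature activations have already been computed for both splits; the thresholds are illustrative): score each untrusted point by how much activation mass it puts on features that essentially never fire on trusted data.

```python
import numpy as np

def unfamiliar_mass(feats_trusted, feats_untrusted, fire_eps=1e-6, max_rate=0.0):
    """For each untrusted point, the total activation on SAE features that
    fire on at most `max_rate` of trusted examples."""
    fire_rate = (feats_trusted > fire_eps).mean(axis=0)   # per-feature firing rate on trusted data
    unfamiliar = fire_rate <= max_rate                    # features "normal" data never uses
    return feats_untrusted[:, unfamiliar].sum(axis=-1)
```

Since the SAE was trained on the untrusted data too, anomaly-specific features can exist in the dictionary; the trusted split is only used to decide which features count as normal.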
Even for an SAE that’s been trained only on normal data, it seems plausible that some correlations between features would be different for anomalous data, and that this might work better than looking for correlations in the dense basis. As an extreme version of this, you could look for circuits in the SAE basis and use those for anomaly detection.
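A simple version of the correlation idea (second-order statistics only, nothing circuit-level): fit a Gaussian to SAE feature activations on normal data and score new points by squared Mahalanobis distance, which flags points that break the usual feature correlations even when each feature individually looks typical. The ridge term is an illustrative regularizer for near-singular covariance:

```python
import numpy as np

def fit_feature_gaussian(feats_normal, ridge=1e-3):
    """Mean and inverse covariance of SAE feature activations on normal data."""
    mu = feats_normal.mean(axis=0)
    cov = np.cov(feats_normal, rowvar=False)
    cov_inv = np.linalg.inv(cov + ridge * np.eye(cov.shape[0]))
    return mu, cov_inv

def mahalanobis_scores(feats, mu, cov_inv):
    """Squared Mahalanobis distance of each point from the normal-data fit."""
    d = feats - mu
    return np.einsum('ij,jk,ik->i', d, cov_inv, d)
```

The same computation works in the dense basis; the hypothesis above is that SAE features make the relevant correlations more detectable, not that the detector itself changes.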
Overall, I think that if SAEs end up being very useful for mech interp, there’s a decent chance they’ll also be useful for (mechanistic) anomaly detection (a lot of my uncertainty about SAEs applies to both possible applications). Definitely uncertain though, e.g. I could imagine SAEs that are useful for discovering interesting stuff about a network manually, but whose features aren’t the right computational units for actually detecting anomalies. I think that would make SAEs less than maximally useful for mech interp too, but probably non-zero useful.
Even for an SAE that’s been trained only on normal data [...] you could look for circuits in the SAE basis and use those for anomaly detection.
Yeah, this seems somewhat plausible. If automated circuit-finding works it would certainly detect some anomalies, though I’m uncertain if it’s going to be weak against adversarial anomalies relative to regular ol’ random anomalies.
Ah, yeah, that makes sense.