Removing High Frequency Latents from JumpReLU SAEs
On a first read, this doesn’t seem principled to me? How do we know those high-frequency latents aren’t, for example, basis directions for dense subspaces or common multi-dimensional features? In that case, we’d expect them to activate frequently and maybe appear pretty uninterpretable at a glance. Modifying the sparsity penalty to split them into lower frequency latents could then be pathological, moving us further away from capturing the features of the model even though interpretability scores might improve.
That’s just one illustrative example. More centrally, I don’t understand how this new penalty term relates to any mathematical definition that isn’t ad-hoc. Why would the spread of the distribution matter to us, rather than simply the mean? If it does matter to us, why does it matter in roughly the way captured by this penalty term?
The standard SAE sparsity loss relates to minimising the description length of the activations. I suspect that isn’t the right metric to optimise for understanding models, but it is at least a coherent, non-ad-hoc mathematical object.
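To make the mean-vs-spread distinction concrete (this is my own illustrative sketch, not the penalty from the post): the expected L0 under the standard sparsity term is just the mean of the per-latent firing-frequency distribution, so a penalty that specifically targets high-frequency latents must instead be acting on the upper tail, i.e. the spread, of that distribution. Something like:

```python
import torch

def sparsity_terms(acts: torch.Tensor, lam: float, mu: float):
    """Illustration only. `acts` is [batch, n_latents] of post-JumpReLU
    activations; the spread term here is hypothetical, not the post's
    actual loss. (A real JumpReLU training loss also needs
    straight-through estimators, since the firing indicator below has
    zero gradient almost everywhere.)"""
    # Per-latent firing frequency over the batch.
    freq = (acts > 0).float().mean(dim=0)

    # Standard sparsity term: expected L0 per example, which is just
    # the *mean* of the frequency distribution (times n_latents).
    l0_term = lam * freq.sum()

    # A spread-sensitive term: penalise only latents firing well above
    # the average rate, i.e. the distribution's upper tail.
    excess = torch.relu(freq - freq.mean())
    spread_term = mu * (excess ** 2).sum()

    return l0_term, spread_term
```

Under that framing, the question stands: two SAEs with identical expected L0 (hence identical description-length-style cost) can get very different spread penalties, and it's not obvious which one better captures the model's features.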
EDIT: Oops, you address all that in the conclusion, I just can’t read.
I think it’s pretty plausible that something pathological like that is happening. We’re releasing this as an interesting idea that others might find useful for their use case, not as something we’re confident is a superior method. If we were continuing with SAE work, we would likely sanity-check it more, but we thought it better to release it than not.
Sure, I agree that, as we point out in the post, this penalty may not be targeting the right thing, or could be targeting it in the wrong way. We shared this more as a proof of concept that others may like to build on and don’t claim it’s a superior solution to standard JumpReLU training.
A minor quibble on the ad-hoc point: while I completely agree about the pitfalls of ad-hoc definitions, I don’t think the same arguments apply to ad-hoc training procedures. As long as your evaluation metrics measure the thing you actually care about, ML has a long history of ad-hoc approaches to optimising those metrics performing surprisingly well. Having said that, I agree it would be great to see more research into what’s really going on with these dense features, leading to a more principled approach to dealing with them! (Whether that turns out to be better understanding how to interpret them, or improving SAE training to fix them.)
Yes, sorry I missed that. The section is titled ‘Conclusions’ and comes at the end of the post, so I guess I skipped over it, assuming it was the conclusion of the whole post rather than the conclusions specifically about the high-frequency latents.
As long as your evaluation metrics measure the thing you actually care about...
I agree with this. I just don’t think those autointerp metrics robustly capture what we care about.