Really liked this post!
Just for my understanding:
You mention trans/cross-coders as possible solutions to the listed problems, but they also fall prey to issues 1 & 3, right?
Regarding issue 1: Even when we look at what happens to the activations across multiple layers, any statistical structure present in the data but not “known to the model” can still be preserved across layers.
For example: Consider a complicated curve in 2D space. If we have an MLP that simply rotates this 2D space, without any knowledge that the data falls on a curve, a Crosscoder trained on the pre-MLP and post-MLP residual stream would still decompose the curve into distinct features. Similarly, a Transcoder trained to predict the post-MLP residual stream from the pre-MLP residual stream would also use these distinct features, predicting the rotated features from the non-rotated ones.
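A minimal sketch of that setup, purely illustrative and not taken from the post: the data lies on a curve in 2D, the "MLP" is a fixed rotation, and a toy crosscoder (one shared encoder, one decoder per site) is trained with an L1 penalty. All names, dimensions, and hyperparameters here are arbitrary choices for the toy example.

```python
import math
import torch
import torch.nn as nn

torch.manual_seed(0)

# Data: points on a 1D curve embedded in 2D. The curve is structure in the
# data distribution, not something the "model" below computes with.
t = torch.rand(4096, 1) * 2 * math.pi
pre_mlp = torch.cat([torch.cos(t), 0.5 * torch.sin(2 * t)], dim=1)

# The "MLP" is a pure rotation by 30 degrees; it has no knowledge of the curve.
theta = math.pi / 6
R = torch.tensor([[math.cos(theta), -math.sin(theta)],
                  [math.sin(theta),  math.cos(theta)]])
post_mlp = pre_mlp @ R.T

# Toy crosscoder: one shared sparse code, one decoder per site (pre and post).
n_feats = 16
encoder = nn.Sequential(nn.Linear(4, n_feats), nn.ReLU())
dec_pre, dec_post = nn.Linear(n_feats, 2), nn.Linear(n_feats, 2)
params = [*encoder.parameters(), *dec_pre.parameters(), *dec_post.parameters()]
opt = torch.optim.Adam(params, lr=1e-3)

for _ in range(2000):
    z = encoder(torch.cat([pre_mlp, post_mlp], dim=1))
    recon = ((dec_pre(z) - pre_mlp) ** 2).mean() + ((dec_post(z) - post_mlp) ** 2).mean()
    loss = recon + 3e-3 * z.abs().mean()  # L1 sparsity penalty
    opt.zero_grad()
    loss.backward()
    opt.step()

# The sparse code carves the curve into several locally active "features",
# even though the model's computation (a rotation) never uses the curve.
z = encoder(torch.cat([pre_mlp, post_mlp], dim=1))
print("mean active features per point:", (z > 1e-3).float().sum(dim=1).mean().item())
```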
Regarding issue 3: I also don’t see how trans/cross-coders help here. If we have multiple layers where the {blue, red} ⊗ {square, circle} decomposition would be possible, I don’t see why they would be more likely than classic SAEs to find this product structure rather than the composed representation.
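To make the issue-3 worry concrete, here is a toy construction of my own (not from the post): four stimuli forming {blue, red} ⊗ {square, circle}, each represented as a colour vector plus a shape vector. Both a "composed" dictionary (one feature per colour–shape pair) and a "product" dictionary (separate colour and shape features) reconstruct the activations exactly, but a sparsity objective prefers the composed one (one active feature per stimulus instead of two), and that preference looks the same at every layer, so a trans/cross-coder would seem to inherit it.

```python
import torch

d = 8
g = torch.Generator().manual_seed(0)
blue, red, square, circle = [torch.randn(d, generator=g) for _ in range(4)]

# Activations for the four composed stimuli (colour vector + shape vector).
acts = torch.stack([blue + square, blue + circle, red + square, red + circle])

# Dictionary A: "composed" features, one per colour-shape pair.
D_composed = acts.clone()
codes_composed = torch.eye(4)                      # exactly 1 active feature per stimulus

# Dictionary B: "product" features, colours and shapes kept separate.
D_product = torch.stack([blue, red, square, circle])
codes_product = torch.tensor([[1., 0., 1., 0.],    # blue square
                              [1., 0., 0., 1.],    # blue circle
                              [0., 1., 1., 0.],    # red square
                              [0., 1., 0., 1.]])   # red circle

for name, codes, D in [("composed", codes_composed, D_composed),
                       ("product ", codes_product, D_product)]:
    err = (codes @ D - acts).norm().item()
    l0 = (codes != 0).float().sum(dim=1).mean().item()
    print(f"{name}: reconstruction error {err:.1e}, mean L0 {l0:.1f}")
```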
Yes, that’s right (see footnote 10). We think that Transcoders and Crosscoders are directionally correct, in the sense that they leverage more of the model’s functional structure via activations from several sites, but we agree that their vanilla versions suffer from similar problems to regular SAEs.