You know I was thinking about this—say there are two children that are orthogonal to the parent and each has probability 0.4 given the parent. If you imagine the space, it looks like three clusters: two with probability 0.4 and norm ≈1.4 (parent plus child, i.e. √2), and one with probability 0.2 and norm 1 (parent alone). They all have high cosine similarity with each other. From this frame, having the parent ‘include’ the child directions a bit doesn’t seem that inappropriate. One SAE latent setup that seems pretty reasonable is to have one parent latent that’s like “one of these three clusters is active” and three child latents, one pointing at each of the three clusters. The parent latent’s decoder in that setup would also include a bit of the child feature directions.
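Quick sanity check on those numbers (just a throwaway sketch; I’m assuming a unit-norm parent and two unit-norm children, all mutually orthogonal):

```python
# Throwaway sanity check of the geometry above. Assumes a unit-norm parent and
# two unit-norm children, all mutually orthogonal (my assumption, nothing settled).
import numpy as np

d = 16                                        # ambient dimension, arbitrary
rng = np.random.default_rng(0)
q, _ = np.linalg.qr(rng.normal(size=(d, 3)))  # three orthonormal directions
parent, child_a, child_b = q.T

clusters = {
    "parent only  (p=0.2)": parent,
    "parent + A   (p=0.4)": parent + child_a,
    "parent + B   (p=0.4)": parent + child_b,
}

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

for name, v in clusters.items():
    print(f"{name}: norm = {np.linalg.norm(v):.2f}")
print("cos(parent, parent+A)   =", round(cos(parent, parent + child_a), 2))            # ~0.71
print("cos(parent+A, parent+B) =", round(cos(parent + child_a, parent + child_b), 2))  # ~0.5
```

With those assumptions the two parent+child clusters sit at norm √2 ≈ 1.41, with cosine ≈0.71 to the parent-only cluster and ≈0.5 to each other.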
This is all sketchy though. It doesn’t feel like we have a good answer to the question “How exactly do we want the SAEs to behave in various scenarios?”
Yeah, I think that’s right—the problem is that the SAE sees three very non-orthogonal inputs and settles on something sort of in between them (but skewed towards the parent). I don’t know how to get the SAE to learn exactly the parent direction in these scenarios—I think if we can solve that then we should be in pretty good shape.
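For concreteness, something like this is the minimal repro I’m picturing (only a sketch: it assumes the children are mutually exclusive, the parent fires half the time, and the SAE is a vanilla ReLU autoencoder with an L1 penalty and unit-norm decoder columns; none of those choices are load-bearing):

```python
# Rough repro sketch, not a pinned-down experiment. Assumptions: children are
# mutually exclusive, parent fires half the time, SAE is a plain ReLU
# autoencoder with an L1 penalty and unit-norm decoder columns.
import torch

d, n_latents, n_samples = 16, 4, 50_000
g = torch.Generator().manual_seed(0)

# Three orthonormal true directions: parent, child A, child B.
q, _ = torch.linalg.qr(torch.randn(d, 3, generator=g))
parent, child_a, child_b = q.T

# Sample activations: parent w.p. 0.5; given parent, child A or B each w.p. 0.4.
parent_on = torch.rand(n_samples, generator=g) < 0.5
u = torch.rand(n_samples, generator=g)
a_on = parent_on & (u < 0.4)
b_on = parent_on & (u >= 0.4) & (u < 0.8)
X = (parent_on.float()[:, None] * parent
     + a_on.float()[:, None] * child_a
     + b_on.float()[:, None] * child_b)

# Vanilla SAE: x_hat = relu(x @ W_enc.T + b_enc) @ W_dec.T + b_dec
W_enc = torch.nn.Parameter(0.1 * torch.randn(n_latents, d, generator=g))
b_enc = torch.nn.Parameter(torch.zeros(n_latents))
W_dec = torch.nn.Parameter(0.1 * torch.randn(d, n_latents, generator=g))
b_dec = torch.nn.Parameter(torch.zeros(d))
opt = torch.optim.Adam([W_enc, b_enc, W_dec, b_dec], lr=1e-3)

for step in range(5_000):
    batch = X[torch.randint(n_samples, (256,), generator=g)]
    acts = torch.relu(batch @ W_enc.T + b_enc)
    recon = acts @ W_dec.T + b_dec
    loss = ((recon - batch) ** 2).mean() + 3e-3 * acts.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():                      # keep decoder columns unit norm
        W_dec /= W_dec.norm(dim=0, keepdim=True) + 1e-8

# Cosine of each learned decoder direction against the true directions.
with torch.no_grad():
    dirs = torch.nn.functional.normalize(W_dec.T, dim=1)    # [n_latents, d]
    true = torch.stack([parent, child_a, child_b])           # already unit norm
    print(torch.round(dirs @ true.T, decimals=2))            # rows: latents; cols: parent, A, B
```

The last print is just the cosine of each learned decoder direction against the true parent/child directions—that’s where the “settles in between, skewed towards the parent” behavior would show up, if it happens.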
I do think the goal should be to get the SAE to learn the true underlying features, at least in these toy settings where we know what the true features are. If the SAEs we’re training can’t handle simple toy examples without superposition, I don’t have a lot of faith that the results are trustworthy when we train SAEs on real LLM activations.
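One way to make “learns the true features” concrete in the toy setting: score the SAE by something like mean-max cosine similarity between the known true directions and the learned decoder directions (sketch below; `sae_decoder` / `decoder_rows` are just stand-in names for wherever the trained decoder matrix lives):

```python
# Sketch of a recovery score for the toy setting (my framing; `sae_decoder` is a
# stand-in for however the trained decoder matrix is actually stored).
import numpy as np

def feature_recovery(true_features: np.ndarray, sae_decoder: np.ndarray) -> np.ndarray:
    """For each true feature direction (rows of true_features, [k, d]), return
    the max |cosine similarity| over SAE decoder rows ([n_latents, d]).
    Values near 1.0 mean some latent points along that true feature."""
    tf = true_features / np.linalg.norm(true_features, axis=1, keepdims=True)
    dec = sae_decoder / np.linalg.norm(sae_decoder, axis=1, keepdims=True)
    return np.abs(tf @ dec.T).max(axis=1)

# e.g. feature_recovery(np.stack([parent, child_a, child_b]), decoder_rows).mean()
# gives a mean-max-cosine-similarity style answer to "did we recover the true features?"
```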