In this paper, we report an architectural change which appears to substantially increase the fraction of MLP neurons which appear to be “interpretable” (i.e. respond to an articulable property of the input), at little to no cost to ML performance. Specifically, we replace the activation function with a softmax linear unit (which we term SoLU) and show that this significantly increases the fraction of neurons in the MLP layers which seem to correspond to readily human-understandable concepts, phrases, or categories on quick investigation, as measured by randomized and blinded experiments. We then study our SoLU models and use them to gain several new insights about how information is processed in transformers. However, we also discover some evidence that the superposition hypothesis is true and there is no free lunch: SoLU may be making some features more interpretable by “hiding” others and thus making them even more deeply uninterpretable. Despite this, SoLU still seems like a net win, as in practical terms it substantially increases the fraction of neurons we are able to understand.
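For concreteness, the change itself is tiny: as I understand the paper, SoLU just multiplies the MLP pre-activations elementwise by their own softmax, and an extra LayerNorm is added right after it to claw back the lost performance. Here is a rough PyTorch sketch; the module names and layer sizes are mine, not the paper's:

```python
import torch
import torch.nn as nn

class SoLU(nn.Module):
    """Softmax Linear Unit: x * softmax(x) over the hidden dimension (my sketch of the paper's definition)."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * torch.softmax(x, dim=-1)

class SoLUMLP(nn.Module):
    """A transformer MLP block with SoLU in place of GELU (hypothetical names and sizes)."""
    def __init__(self, d_model: int = 768, d_mlp: int = 3072):
        super().__init__()
        self.w_in = nn.Linear(d_model, d_mlp)
        self.solu = SoLU()
        self.ln = nn.LayerNorm(d_mlp)  # the extra post-SoLU LayerNorm the paper adds to recover performance
        self.w_out = nn.Linear(d_mlp, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.w_out(self.ln(self.solu(self.w_in(x))))
```

Because the softmax sharpens the largest pre-activation and suppresses the rest, it pushes each neuron towards firing on one dominant feature at a time, which is plausibly where the interpretability gain (and the "hiding" of other features) comes from.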
I have started looking into this myself because I think it is heavily understudied post-GPT-3. The vibe I remember is that interpretable ML with non-black-box models took up more attention in the ML community prior to ~2019. At some point, people seem to have conceded to the power of black-box models, and the focus shifted to interpreting them instead.
It’s possible that GPT models, while powerful, are such a tangled mess that it becomes just way too difficult to interpret the kinds of things we would like to interpret. We don’t necessarily need to interpret everything about a model; we just need to interpret the parts that matter for preventing catastrophe.
The main issue for this kind of work is the assumption that you will suffer too much of an alignment tax (on performance) for interpretable models. People are going to gravitate towards the more powerful models, so you’d need an architectural setup that scales at least as well as GPT models.
People have also tried to engineer monosemanticity in models, but I don’t think this is viable because I expect it loses too much performance.
One example is Anthropic’s SoLU work, quoted above.