I strongly agree with this, and I hope more people in mech interp become aware of it. I would actually emphasize that, in my opinion, it’s not just that it’s hard to do computational science on brains, but that we don’t have the right framework. Some weak evidence for this is exactly that we now have an intelligent system, one that has existed for a few years, where experiments and analyses are easy to do, and we can see how far we’ve gotten with the CNC approach.
I think we’re on the same page that we might not have the right framework to do computational science on brains or other intelligent systems. I think we might disagree on how far away current mainstream ideas are from being the right framework: I’d predict that, if we talked it out further, I’d say we’re closer than you’d say we are. I don’t know how far afield from current ideas we need to look for the right framework, and I’d support work that looks even further afield than several inferential steps from current mainstream ideas. But I don’t think the historically sluggish pace of computational neuroscience justifies searching at any particular inferential distance; more proximal solutions feel just as likely to be the next paradigm/wave as more distant ones (maybe more likely, given the social nature of what constitutes a paradigm/wave).
My main point of confusion with this post has to do with Parameter Decomposition as a new paradigm.
I really want to re-emphasize that I didn’t call PD a new paradigm (or even a new ‘wave’) in the post. N.B.: “I’ll emphasize that these are early ideas and certainly do not yet constitute ‘Third-Wave Mech Interp’. ”
I haven’t thought about this technique much, but on a first reading it doesn’t sound all that different from what you call the second-wave paradigm, just replacing activations with parameters. For instance, I think I could take most of the last few sections of this post and rewrite them to make the same point. Just for fun, I’ll try this out here, arguing for a new paradigm called “Activation Decomposition”. (Just to be super clear: I don’t think this is a new paradigm!)
Yeah, I don’t think PD throws away the majority of the ideas in the 2nd wave. It’s designed primarily to resolve the anomalies of the 2nd wave, so it will naturally resemble 2nd-wave ideas and we can draw analogies. But I think it’s different in important ways. For one, I think it will probably help us be less confused about ideas like ‘feature’, ‘representation’, ‘circuit’, and so on.
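To make the analogy (and the difference) concrete, here’s a toy numpy sketch, entirely my own illustration and not from the post, of what each approach decomposes: second-wave methods like SAEs look for a sparse overcomplete basis of the *activations*, while PD-style methods decompose the *parameters* themselves into components. I use SVD purely as a stand-in for a parameter decomposition; real PD methods optimize their own faithfulness/simplicity objectives, which SVD does not.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Second-wave object of study: an *activation* matrix ---
# Rows are datapoints, columns are neurons. An SAE would learn a
# sparse overcomplete dictionary for these vectors.
acts = rng.standard_normal((100, 16))

# --- PD-style object of study: the *parameters* themselves ---
# As a crude illustrative placeholder (NOT the actual PD method),
# split a weight matrix into rank-one components via SVD.
W = rng.standard_normal((16, 16))
U, S, Vt = np.linalg.svd(W)
components = [S[k] * np.outer(U[:, k], Vt[k]) for k in range(len(S))]

# The components sum back to the original weights exactly:
# the decomposition is faithful by construction.
assert np.allclose(sum(components), W)
```

The point of the toy is just that the two decompositions live in different spaces: one in the space of activation vectors on a dataset, the other in the space of weight matrices, independent of any particular input.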
> Some work in this early period was quite neuroscience-adjacent; as a result, despite being extremely mechanistic in flavour, some of this work may have been somewhat overlooked e.g. Sussillo and Barak (2013).
This is a nitpick, and I don’t think any of your main points rests on this, but I think the main reason this work was not used in any type of artificial neural network interp work at that time was that it is fundamentally only applicable to recurrent systems, and probably impossible to apply to e.g. standard convolutional networks. It’s not even straightforward to apply to a lot of the types of recurrent systems used in AI today (to the extent they are even used), but probably one could push on that a bit with some effort.
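For readers unfamiliar with it, the Sussillo and Barak approach finds approximate fixed points of a recurrent network’s state-update map by minimizing a “kinetic energy” q(h) = ½‖F(h) − h‖². A minimal sketch (my own toy RNN, not their setup) makes the point above concrete: the method presupposes a state-update map F to root-find on, which recurrent systems have and standard feedforward convnets do not.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 8
# Random recurrent weight matrix, scaled to keep the dynamics contractive.
W = 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)

def step(h):
    """One step of the autonomous RNN dynamics (no external input)."""
    return np.tanh(W @ h)

def q(h):
    """Sussillo-and-Barak-style objective: q(h) is small iff h is an
    approximate fixed point of the dynamics, i.e. step(h) is close to h."""
    d = step(h) - h
    return 0.5 * d @ d

# Minimize q from a random initial state to locate a fixed point.
res = minimize(q, rng.standard_normal(n), method="BFGS")
h_star = res.x
# q(h_star) should now be near zero: step(h_star) is approximately h_star.
```

Nothing in this recipe carries over directly to a feedforward convnet, since there is no recurrence whose fixed points one could solve for, which I take to be the reason the technique stayed confined to recurrent systems.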
Yes, this is fair. These are still fairly deep neural networks, though (if we count time as depth), and they’re examples of work that interprets ANNs at the lowest level, using low-level analysis of weights and activations via e.g. dimensionality reduction and other methods mech interp folks might find familiar. But I agree it doesn’t usually get put in the bucket of ‘mech interp’, though ultimately the boundary is fairly arbitrary. As a separate point, it’s surprising how few in the neuroscience community have actually jumped onto mechanistically understanding more interesting models like Inception v2 or LLMs, despite the similarity of methods and object of study. That’s a testament to the early mech interp pioneers, who saw a field where few others did.
As a final question, what do you think the implications are for what people should be doing, depending on whether mech interp is or is not pre-paradigmatic? Is there a difference between mech interp being in a not-so-great paradigm vs. being pre-paradigmatic, in terms of what your median researcher should be thinking/doing/spending time on? Or is this just an intellectually interesting thing to think about? I’m guessing that when a lot of people say mech interp is pre-paradigmatic, they really mean something closer to “mech interp doesn’t have a useful/good/perfect paradigm right now”. But I’m also not sure whether there’s anything here beyond semantics.
I’m not actually sure this is very action-relevant. In the past I might have said “mech interp practitioners should be more familiar with computational neuroscience/connectionism”, since I think this might have saved the mech interp community some time. But I don’t think it would have saved a huge amount of time, and I think mech interp has largely surpassed comp neuro as a source of interesting and relevant ideas. I think it’s mostly useful as an exercise in situating mech interp ideas within the broader set of ideas of a closely related field (comp neuro/connectionism). But I’ll stress that many in the field see mech interp as better contextualized by other sets of broader ideas (e.g. as a subfield of interpretability/ML), and when viewing mech interp in light of those ideas, it might better be thought of as pre-paradigmatic. I think that’s a completely compatible but different perspective from the one I tend to take, and it just emphasizes the subjectivity of the whole question of whether the field is paradigmatic or not.
Hey Adam, thanks for your thoughts on this!
This all sounds very reasonable to me! Thanks for the response. I agree that we are likely quite aligned about a lot of these issues.