Before concretely addressing the oversights you found, it is perhaps worth sketching the intuitive picture motivating the pseudo-code. I wanted to make explicit the scientific process that happens between researchers: M1 plays the role of the methods researcher, and M2 plays the role of the applications/datasets researcher. The pseudo-code is an attempt to write out crisply in what sense ‘good’ outputs from M1 can pass the test of time, standing up to more realistic, new applications and baselines developed by M2.
On to the details:
Thanks for working with my questionable notation here! Indeed, the uses of I were overloaded, and I have now (hopefully) clarified this by writing v_{I,M} for what was previously I(M). The type signatures I have in mind are that I is code (an interpretability method) and v_{I,M} (previously I(M)) is some efficiently queryable representation of M (circuits, SAE weights, …) useful for downstream tasks.
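To make these type signatures concrete, here is a rough sketch in Python; the names Model, Representation, InterpMethod and run_method are my own illustrative placeholders rather than part of the pseudo-code:

```python
from typing import Any, Callable, Protocol


class Model(Protocol):
    """Placeholder for the model M under study (weights and all)."""
    def __call__(self, prompt: str) -> str: ...


class Representation(Protocol):
    """v_{I,M}: an efficiently queryable artifact extracted from M
    (circuits, SAE weights, ...) that is useful for downstream tasks."""
    def query(self, question: Any) -> Any: ...


# I is code: an interpretability method which, applied to M, yields v_{I,M}.
InterpMethod = Callable[[Model], Representation]


def run_method(I: InterpMethod, M: Model) -> Representation:
    """v_{I,M} = I(M); this is the object that M2 later consumes."""
    return I(M)
```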
The sense in which M2 is static does seem important. In fact, I think M2 should have some access to M; it was an oversight that M does not appear as an input to M2. This was why I mentioned sample complexity in a footnote: it seems reasonable to give M2 limited query access to M. Thanks for catching this. Perhaps the scheme could work as originally written, where M2 does not have direct access to M, but I’m unsure: that version does seem too ‘static’, as you say.
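To gesture at what ‘limited query access’ could mean operationally, here is a minimal sketch; the QueryBudgetedModel wrapper and the particular budget are my own illustration rather than part of the original scheme:

```python
class BudgetExceeded(Exception):
    """Raised when M2 spends more queries on M than the protocol allows."""


class QueryBudgetedModel:
    """Wraps M so that M2 only sees it through a metered, capped interface.

    The budget is where sample complexity enters: it is the knob that stops
    M2 from simply re-deriving everything about M from scratch at test time.
    """

    def __init__(self, M, budget: int):
        self._M = M
        self._budget = budget
        self.queries_used = 0

    def __call__(self, prompt: str) -> str:
        if self.queries_used >= self._budget:
            raise BudgetExceeded(f"query budget of {self._budget} exhausted")
        self.queries_used += 1
        return self._M(prompt)


# M2 would then receive (v_IM, QueryBudgetedModel(M, budget=100)) rather than
# M itself, so most of the useful information has to live in v_{I,M}.
```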
Regarding the appropriateness of the term interpretability to describe the target of this automation process: I agree, the output may not be an interp method in our current sense, but interpretability is the most appropriate term I could come up with. Two features seem important here: (1) white-box parsing of weights is central; (2) the resulting ‘interpretation’ v_{I,M} is usable by a fixed model M2, hence v_{I,M} must be efficiently interface-able without M2 having learned the structure of v_{I,M} in its weights.
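One way to picture requirement (2), purely as an illustration: serialising v_{I,M} into M2’s context is just one possible interface, and the to_text method below is hypothetical.

```python
def hand_off_to_M2(M2, v_IM, task_description: str) -> str:
    """M2 is fixed: it has not learned v_{I,M}'s structure in its weights,
    so the interpretation must be consumable through an ordinary interface,
    here (illustratively) rendered as text placed in M2's context.
    """
    context = (
        "Interpretation of the model under study:\n"
        f"{v_IM.to_text()}\n\n"  # assumes v_IM can render itself as text
        f"Task: {task_description}"
    )
    return M2(context)
```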
Makes sense to me, thanks for the clarifications.
I found working through the details of this very informative. For what it’s worth, I’ll share a comment I made internally at Timaeus about it: in some ways this factorisation into M1 and M2 reminds me of the factorisation in Ruan et al’s observational scaling laws paper into the map m ↦ S_m from a model to its capability vector (this being the analogue of M2) and the map S_m ↦ σ^{-1}(E_m) = β^T S_m + α from capability vectors to downstream metrics (this being the analogue of M1).
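For concreteness, here is a minimal numerical sketch of that second map, assuming σ is the logistic sigmoid and with β, α and S_m made up purely for illustration:

```python
import numpy as np


def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + np.exp(-x))


# Made-up fitted parameters and capability vector, purely for illustration.
beta = np.array([0.8, -0.3, 1.1])   # weights on the latent capability axes
alpha = -0.5                        # intercept
S_m = np.array([1.2, 0.4, 0.9])     # capability vector of model m (the image of m under m -> S_m)

# sigma^{-1}(E_m) = beta^T S_m + alpha, so the predicted downstream metric is
E_m = sigmoid(beta @ S_m + alpha)
print(f"predicted downstream metric E_m = {E_m:.3f}")
```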
In your case the output metrics have an interesting twist, in that you don’t just want to predict performance but also, in some sense, variations of performance within a certain class (e.g. by varying the prompt), so it’s some kind of “stable” latent space of capabilities that you’re constructing.
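A tiny sketch of what I mean by that, where M, prompt_variants and grader are all illustrative placeholders rather than anything from your pseudo-code:

```python
import statistics


def performance_profile(M, prompt_variants, grader):
    """Score M on a class of prompt variants and summarise both the level
    and the spread of performance; the spread is the 'stability' aspect.
    M, prompt_variants and grader are all illustrative placeholders."""
    scores = [grader(M(p)) for p in prompt_variants]
    return {
        "mean": statistics.mean(scores),
        "spread": statistics.pstdev(scores),  # low spread ~ stable capability
    }
```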
Anyway, factoring the prediction of downstream performance/capabilities through some kind of latent space object I(M) in your case, or latent spaces of capabilities in Ruan et al’s case, seems like a principled way of thinking about the kind of object we want to put at the center of interpretability.
As an entertaining aside from an algebraic geometer: the proliferation of I_1(M), I_2(M), …, i.e. “interpretability objects” sitting between models M and downstream performance metrics, reminds me of the proliferation of cohomology theories and the search for “motives” to unify them. That is basically interpretability for schemes!