[Redwood Research] Causal Scrubbing

In this sequence, we introduce causal scrubbing, a principled approach for evaluating the quality of mechanistic interpretations. The key insight behind this work is that mechanistic interpretability hypotheses can be thought of as defining what activations inside a neural network can be resampled without affecting behavior. Accordingly, causal scrubbing tests interpretability hypotheses via behavior-preserving resampling ablations—converting hypotheses into distributions over activations that should preserve behavior, and checking if behavior is actually preserved.
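To make the idea of a behavior-preserving resampling ablation concrete, here is a minimal toy sketch (our own illustration, not Redwood's implementation): a hypothesis claims the model's output depends on an intermediate activation only through some feature, so we replace that activation with one computed from a different input the hypothesis treats as equivalent, and check whether behavior is preserved.

```python
import random

# Toy "model": decides whether a list's sum is positive, via an
# intermediate activation (the sum itself).
def activation(x):
    return sum(x)

def model(x):
    return activation(x) > 0

# Hypothesis: behavior depends on the activation only through its sign.
# The resampling ablation replaces the activation with one computed from
# a different input that the hypothesis claims is equivalent (same sign).
def scrubbed_model(x, resample_pool):
    equivalent = [r for r in resample_pool
                  if (activation(r) > 0) == (activation(x) > 0)]
    patched = activation(random.choice(equivalent))
    return patched > 0

def scrub_accuracy(inputs, trials=100):
    # Fraction of trials where the resampling ablation preserves behavior.
    preserved = 0
    for _ in range(trials):
        x = random.choice(inputs)
        preserved += scrubbed_model(x, inputs) == model(x)
    return preserved / trials
```

Because the hypothesis here is exactly correct, the scrubbed model always agrees with the original; an incorrect hypothesis would show up as degraded agreement.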

We apply this method to develop a refined understanding of how a small language model implements induction and how an algorithmic model correctly classifies whether a sequence of parentheses is balanced.

Besides the main post, which covers the most important content, there are three additional posts of narrower interest. The first is a series of appendices to the main post. The other two cover the details of what we discovered while applying causal scrubbing to a paren-balance checker and to induction in a small language model.

Causal Scrubbing: a method for rigorously testing interpretability hypotheses [Redwood Research]

Causal scrubbing: Appendix

Causal scrubbing: results on induction heads

Causal scrubbing: results on a paren balance checker