I was actually expecting Penny to develop dystonia coincidentally, and the RL would tie in by needing to be learned in reverse, i.e. optimizing from dystonic back to normal. The ending is much more pleasant than the protagonist's tone the whole way through led me to expect.
If I were writing a fanfic of this, I'd keep the story as is (plus or minus the last paragraph), but then continue into the present moment, which leads to the realization.
Great work!
I listened to a talk from Philipp on it today and am confused about why we can't just make a better benchmark than LDS.
Why not just train e.g. 1k different models, each with one datapoint left out? LDS is noisy, so I'm assuming 1k datapoints that exactly capture what you want are better than 1M datapoints that are an approximation. [1]
As an estimate, the NanoGPT speedrun takes a little more than 2 minutes now, so you can train 1001 of these in:
2.33 min × 1000 / 60 ≈ 39 hrs on 8 H100s, which is maybe 4 B200s at $24/hr, so ~$1k.
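(Quick sanity check of that arithmetic; the per-run time and GPU price are just the rough numbers above.)

```python
# Rough cost estimate for ~1k leave-one-out training runs (numbers assumed from above).
minutes_per_run = 2.33   # one NanoGPT speedrun on an 8x H100 node
n_runs = 1001            # one full-data run + 1000 leave-one-out runs
dollars_per_hour = 24    # assumed rate for ~4x B200 (roughly equivalent compute)

total_hours = minutes_per_run * n_runs / 60
print(f"~{total_hours:.0f} node-hours, ~${total_hours * dollars_per_hour:,.0f}")
# ~39 node-hours, ~$933
```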
And that's getting a 124M-param LLM trained on 730M tokens up to GPT-2 level. Y'all's quantitative setting for Fig 4 was a 2M-parameter ResNet on CIFAR-10 with 5k images, which would be much cheaper to do (although the GPT-2 setup has been very optimized, so you could also just do the speedrun but on less data).
LDS was shown to be very noisy, but a colleague mentioned that this could be because 5k images is a very small amount of data. I guess another way to validate LDS is to run the expensive full-retraining comparison on a few datapoints.
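Concretely, the benchmark I have in mind looks something like this. It's a minimal sketch, where `train_model`, `eval_loss`, and the attribution `scores` are hypothetical stand-ins for the speedrun training loop, a test-loss measurement, and whatever method (BIF, EK-FAC, TRAK) is being evaluated:

```python
import numpy as np
from scipy.stats import spearmanr

def leave_one_out_benchmark(train_set, test_set, train_model, eval_loss, scores, n_loo=1000):
    """Compare attribution scores against exact leave-one-out effects.

    train_model(dataset) -> model          (hypothetical: e.g. one speedrun-style training)
    eval_loss(model, test_set) -> float    (average test loss)
    scores[i]                              (method's predicted effect of removing datapoint i)
    """
    base_model = train_model(train_set)
    base_loss = eval_loss(base_model, test_set)

    # Exact ground truth: retrain once per held-out datapoint.
    loo_ids = np.random.choice(len(train_set), size=n_loo, replace=False)
    true_effects = []
    for i in loo_ids:
        model_i = train_model([x for j, x in enumerate(train_set) if j != i])
        true_effects.append(eval_loss(model_i, test_set) - base_loss)

    # A good attribution method should rank datapoints the way the true effects do.
    rho, _ = spearmanr([scores[i] for i in loo_ids], true_effects)
    return rho
```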
Confusion About What the LDS Hyperparameter Sweep Means
Y'all show in Fig 4 that there are large error bars across seeds for the different methods. This turns out to be a property of LDS's noisiness, as y'all show in Figures 7-8 (where BIF and EK-FAC are highly correlated). This means that, even with noisy LDS, you don't need to re-run 5 times if a new method is much better than previous ones; the repeats only matter if it's narrowly better.
What I'm confused about is why you retrained on 100 different resamplings of the data at each percentage. Is this just because LDS is noisy, so you're doing the thing where randomly sampling 100 datapoints 500 times gives you a good approximation of the causal effect of each individual datapoint (or is that just what LDS actually is)? Was there high variance in the relative difference between methods across the 100 retrained models?
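For reference, here's my understanding of what the LDS protocol is doing (a rough sketch, not y'all's implementation; `train_model`, `model_output`, and the scores `tau` are hypothetical stand-ins, and `frac` / `n_subsets` are the knobs I'm asking about):

```python
import numpy as np
from scipy.stats import spearmanr

def lds(train_set, test_point, train_model, model_output, tau,
        frac=0.5, n_subsets=100, seed=0):
    """Linear datamodeling score for one test point, as I understand it.

    tau[i] is the attribution score of train datapoint i for this test point;
    the "prediction" for a subset is just the sum of its members' scores.
    train_model / model_output are hypothetical stand-ins for the real training
    loop and the measured output (e.g. margin or loss) on the test point.
    """
    rng = np.random.default_rng(seed)
    k = int(frac * len(train_set))

    actual, predicted = [], []
    for _ in range(n_subsets):                       # the "100 resamplings"
        idx = rng.choice(len(train_set), size=k, replace=False)
        model = train_model([train_set[i] for i in idx])
        actual.append(model_output(model, test_point))
        predicted.append(sum(tau[i] for i in idx))   # linear datamodel prediction

    rho, _ = spearmanr(predicted, actual)            # rank agreement over subsets
    return rho
```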
Other Experiments
Just wild speculation that there could be other data attribution targets besides prediction of the output. When a model "groks" something, some datapoints were more important for that happening than others, and those should show up in an ideal data attribution method.
Similarly with different structures forming in the dataset (which y'all's other paper shows, AFAIK).
[Note: there’s a decent chance I’ve terribly misunderstood y’all’s technique or misread the technical details, so corrections are appreciated]
It initially seemed confusing how to evaluate this, but I think we need to look at the variance over the distribution of datapoints. If BIF is consistently more accurate than EK-FAC over, e.g., 100 randomly sampled datapoints, then that's a good sign for BIF; however, if there's a high level of variance, then we'd need more data to differentiate between the two. I do think higher-quality data attribution methods would have a higher signal, so you'd need less data. For example, I predict that BIF does better than TRAK on ~all datapoints (but this is an empirical question).
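One way to make that concrete (a sketch; the per-datapoint errors are assumed to come from some ground-truth comparison like the leave-one-out one above) is a paired test over datapoints, so a consistently better method shows up even from a small sample:

```python
import numpy as np
from scipy.stats import wilcoxon

def compare_methods(errors_a, errors_b):
    """Paired comparison of two attribution methods over the same datapoints.

    errors_a[i], errors_b[i]: each method's error on datapoint i
    (e.g. |predicted leave-one-out effect - true effect|). Lower is better.
    """
    errors_a, errors_b = np.asarray(errors_a), np.asarray(errors_b)
    win_rate = np.mean(errors_a < errors_b)    # fraction of points where A beats B
    stat, p = wilcoxon(errors_a, errors_b)     # paired signed-rank test
    return win_rate, p

# If method A wins on ~all datapoints, a small sample is enough to see it;
# if wins are split with high variance, you need many more datapoints.
```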