Agreed, the chance of success when cold-emailing busy people is low, and spamming them is bad. And there are alternative approaches that may work better, depending on the person and their setup: some YouTubers don’t have a manager or employees, some do. I also think being able to begin an email with “Hi, I run the DeepMind mechanistic interpretability team” was quite helpful here.
Open Source Replication of Anthropic’s Crosscoder paper for model-diffing
SAE features for refusal and sycophancy steering vectors
The high-level claim seems pretty true to me. Come to the GDM alignment team, it’s great over here! It seems quite important to me that all AGI labs have good safety teams
Thanks for writing the post!
Huh, are there examples of right-leaning stuff they stopped funding? That’s new to me
Base LLMs refuse too
+1. Concretely this means converting every probability p into odds p/(1-p), taking the geometric mean of those (multiply them and take the nth root), and then converting the result back to a probability
Intuition pump: Person A says 0.1 and Person B says 0.9. This is symmetric: if we instead study the negation, they swap places, so any reasonable aggregation should give 0.5
The geometric mean of the probabilities does not; instead you get sqrt(0.1 * 0.9) = 0.3
The arithmetic mean gets 0.5, but is bad for the other reasons you noted
The geometric mean of odds is sqrt(1/9 * 9) = 1, which maps back to a probability of 0.5, while also eg treating low probabilities fairly
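For concreteness, here’s a minimal Python sketch of the aggregation described above (the function name and example numbers are mine, not from the post):

```python
import math

def geometric_mean_of_odds(probs):
    """Aggregate probabilities via the geometric mean of their odds.

    Each probability p is converted to odds p / (1 - p), the geometric
    mean of the odds is taken, and the result is mapped back to a
    probability via odds / (1 + odds).
    """
    odds = [p / (1 - p) for p in probs]
    mean_odds = math.prod(odds) ** (1 / len(odds))
    return mean_odds / (1 + mean_odds)

# The intuition pump: 0.1 and 0.9 aggregate to 0.5
print(geometric_mean_of_odds([0.1, 0.9]))  # ≈ 0.5
# The plain geometric mean of the probabilities gives 0.3 instead
print(math.sqrt(0.1 * 0.9))  # ≈ 0.3
```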
Interesting thought! I expect there are systematic differences, though it’s not quite obvious how. Your example seems pretty plausible to me. Meta SAEs are also more incentivised to learn features which tend to split a lot, I think, as then they’re useful for predicting more latents. Though ones that don’t split may be useful as they entirely explain a latent that’s otherwise hard to explain.
Anyway, we haven’t checked yet, but I expect many of the results in this post would look similar for eg sparse linear regression over a smaller SAE’s decoder. Re why meta SAEs are interesting at all, they’re much cheaper to train than a smaller SAE, and BatchTopK gives you more control over the L0 than you could easily get with sparse linear regression, which are some mild advantages, but you may have a small SAE lying around anyway. I see the interesting point of this post more as “SAE latents are not atomic, as shown by one method, but probably other methods would work well too”
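If someone wants to try that comparison, here’s a rough sketch of the kind of sparse linear regression I have in mind (the shapes, the Lasso penalty, and the variable names are all made up for illustration, not from our setup):

```python
import numpy as np
from sklearn.linear_model import Lasso

# Stand-in decoder matrices: each row is a latent's direction in model space.
rng = np.random.default_rng(0)
big_decoder = rng.normal(size=(512, 64))    # SAE under study
small_decoder = rng.normal(size=(128, 64))  # smaller SAE used as the dictionary

# For each latent of the big SAE, find a sparse combination of the smaller
# SAE's decoder directions that reconstructs its decoder direction.
coeffs = []
for direction in big_decoder:
    reg = Lasso(alpha=0.05, fit_intercept=False, max_iter=10_000)
    reg.fit(small_decoder.T, direction)  # features = small-SAE directions
    coeffs.append(reg.coef_)
coeffs = np.array(coeffs)  # (n_big_latents, n_small_latents), mostly zeros
```

Here the L1 penalty plays roughly the role BatchTopK plays for the meta SAE, but you only get indirect control over the effective L0 via the penalty strength.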
What’s wrong with Twitter as an archival source? You can’t edit tweets (technically you can edit top-level tweets for up to an hour, but this creates a new URL and old links still show the original version). Seems fine to just aesthetically dislike Twitter though
To me, this model predicts that sparse autoencoders should not find abstract features, because those are shards, and should not be localisable to a direction in activation space on a single token. Do you agree that this is implied?
If so, how do you square that with eg all the abstract features Anthropic found in Claude 3 Sonnet?
Thanks for making the correction!
I expect there are lots of new forms of capabilities elicitation for this kind of model, which their standard framework may not have captured, and which require more time to iterate on
Thanks for the post!
“sample five random users’ forecasts, score them, and then average”
Are you sure this is how their bot works? I read this more as “sample five things from the LLM, and average those predictions”. For Metaculus, the crowd is just given to you, right, so it seems crazy to sample users?
Yeah, fair point, disagreement retracted
I think this is important to define anyway! (And likely pretty obvious.) This would create a lot more friction for someone to take on such a role, though, or to move out of it
But only a small fraction work on evaluations, so the increased cost is much smaller than you make out
Cool work! This is the outcome I expected, but I’m glad someone actually went and did it
Yeah, if I made an introduction it would ruin the spirit of it!
I don’t see important differences between that and CE loss delta in the context Lucius is describing
This is somewhat similar to the approach of the ROME paper, which has been shown to not actually edit facts, but rather to insert louder facts that drown out the old ones (and maybe suppress them).
In general, the problem with optimising model behaviour as a localisation technique is that you can’t distinguish between something that truly edits the fact, and something which adds a new fact in another layer that cancels out the first fact and adds something new.