working on neuronpedia
Johnny Lin
Announcing Neuronpedia: Platform for accelerating research into Sparse Autoencoders
Exploring OpenAI’s Latent Directions: Tests, Observations, and Poking Around
Hey Joseph (and coauthors),
Your directions are really fantastic. I hope you don’t mind, but I generated the activation data for the first 3000+ directions for each of the 12 layers and uploaded your directions to Neuronpedia:
https://www.neuronpedia.org/gpt2-small/res-jb
Your directions are also linked on the home page and the model page.
They’re also accessible by layer (sorted by top activation), eg layer 6: https://neuronpedia.org/gpt2-small/6-res-jb
I added the “Anthropic dashboard” to Neuronpedia for your dataset.
Explanations, comments, and autointerp scoring are also working—anyone can do this:
Click a direction and submit an explanation on the top-left. Here's another Star Wars direction (5-RES-JB:1681) where GPT-4 gave me a score of 96:
Click the score for the scoring details:
I plan to do some autointerp explaining on a batch of these directions too.
Btw—your directions are so good that it’s easy to find super interesting stuff. 5-RES-JB:5 is about astronomy:
I’m aware that you’re going to do some library updates to get even better directions, and I’m excited for that—will re-generate/upload all layers after the new changes come in.
Things that I’m still working on and hope to get working in the next few days:
Making activation testing work for each neuron
“Search / test” the same way that we have search/test for OpenAI’s directions
Again, your directions look fantastic—congrats. I hope this is useful/interesting for you and anyone trying to browse/explain them. Also, I didn’t know how to provide a citation/reference to you (and your team?) so I just used RES-JB = Residuals by Joseph Bloom and included links to all relevant sources on your directions page.
If there’s anything you’d like me to modify about this, or any feature you’d like me to add to make it better, please do not hesitate to let me know.
Hi duck_master, thank you for playing and appreciate the tip. Maybe it's worth compiling these tips and putting them under a "tips" popup/page on the main site. Also, please consider joining the Discord if you're willing to offer more feedback and suggestions: https://discord.gg/kpEJWgvdAx
Apologies for the limit. It currently costs ~$0.24 to do each explanation score and it’s coming from my personal funds, so I’m capping it daily until I can hopefully get approved for a grant. A few hours ago I raised the limit from 3 new explanations per day to 10 new explanations per day.
Hi Jennifer,
Thanks for participating, and my apologies for only having GitHub login at the moment. Please feel free to create a throwaway GitHub account if you'd still like to play (I think GitHub allows you to sign up with disposable emails; I had no problem creating an account using an iCloud disposable email). Email/password login is definitely on the TODO.
That's a good idea. I think maybe I could make a "drafts" explanation list so you can queue it up for later. Unfortunately, since the website just launched, there is not yet a reasonable threshold for "voted highly", since most explanations have no votes or very few. But this is a good workaround for when the site is a bit older.
Re: multiple meanings, this is interesting. I need to experiment with this more, but I don't think you need to use any special syntax. By writing "the letter c in a word or names of towns near Berlin", it should give you a score based on both of those. There is a related question: should these neurons have two highly-voted/rated explanations, or one highly-voted/rated explanation that combines both? I'll put that on the TODO as an open question.
EDIT: after thinking about this a bit more-
PRO of multiple separate explanations: if a neuron has 4-5 different meanings, a combined explanation can get unwieldy quickly (and then users might submit one that is identical except for swapping the order of each OR clause)
CON of multiple separate explanations: we probably need ranked-choice voting or multi-selection at some point… will put this on the TODO.
Btw, would love to have you in the Discord to stay updated and provide additional feedback for Neuronpedia! This is super helpful.
Thanks so much for the feedback! Inline below:
Conceptual Feedback:
I think it would be better if I could see two explanations and vote on which one I like better (when available).
When there are multiple explanations, Neuronpedia does display them.
However, I've considered a different game mode where all you do is choose between This vs. That (no skipping, no new explanations). That may be a cool possibility!
Attention heads are where a lot of the interesting stuff is happening, and need lots of interpretation work. Hopefully this sort of approach can be extended to that case.
Will put it on the TODO
The three explanation limit kicked in just as I was starting to get into it. Hopefully you can get funding to allow for more, but in the meantime I would have budgeted my explanations more carefully if I had known this.
Sorry, the limit is daily, you can come back tomorrow. It currently costs $0.24 to do one explanation score.
Good idea re: showing limit on number of explanations somehow.
I don’t feel like I should get a point for skipping, it makes the points feel meaningless.
Yeah, I struggled with this a bit. But I didn't want to incentivize people to vote for a bad explanation. E.g., if you only get a point for voting, then you're more inclined (even subconsciously) to vote even when no explanation deserves it.
I’m open to being wrong on this. I’m not a game mechanics expert and happy to change it.
UX Feedback:
I didn’t realize that clicking on the previous explanation would cast a vote and take me to the next question. I wanted to go back but I didn’t see a way to do that.
Great suggestion. Will add it to TODO.
After submitting a new explanation and seeing that I didn’t beat the high score, I wanted to try submitting a better explanation, but it glitched out and skipped to the next question.
Hmm I’ll try to repro this. Thanks for reporting.
I would like to know whether the explanation shown was the GPT-4 created one, or submitted by a user.
If you click “Simple” at the top right to toggle to Advanced Mode, it will show you the author and score of the explanations being shown.
The blue area at the bottom takes up too much space at the expense of the main area (with the text samples).
Yes, I haven't had time to optimize this. Currently it has that space because it will fit three explanations, and I wanted the UI to stay static (and not "jump around") based on the number of explanations. But you are right that this is annoying wasted space most of the time.
It would be nice to be able to navigate to adjacent or related neurons from the neuron’s page.
Good idea. Added to TODO.
hey mako—sorry about the issues. i’m looking into it right now. will update asap
edit: looks like the EC2 instance hard crashed. i can’t even restart it from AWS console. i am starting up a new instance with more RAM.
edit2: confirmed via syslog (after the old server took a long time to restart) that it was OOM. new machine has 8x more RAM. added monitoring and will investigate potential memory leaks tomorrow
Hi Martin,
Thanks for playing! I agree there is some risk of confirmation bias, and the option to hide explanations by default is very interesting.
The reason it is designed the way it is now is that I'd prefer to avoid too many duplicate explanations. Currently, you can only submit explanations that are not exact duplicates, though you can submit explanations that are very similar, e.g. "banana" vs "bananas".
The first downside is that duplicate explanations may clutter up the voting options. The second is that when someone looks at the two explanations later, the vote may be split between the two similar explanations, meaning a third explanation that is worse might actually win (e.g. "cherry" vs "banana(s)").
However, those are not insurmountable downsides. The server just needs a better duplicate/similarity check (maybe even asking GPT-4), such as checking for plurals, and if your explanation is very similar to an existing one, it would automatically upvote that instead. I think it's definitely worth experimenting with. The similarity check would have to not be too loose, otherwise we may lose out on great explanations that appear only marginally different but actually score very differently.
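A minimal sketch of the duplicate/similarity check described above: normalize plurals, compare token overlap, and auto-upvote the existing explanation on a near-duplicate. All names (`normalize`, `is_near_duplicate`, `submit`) and the 0.8 threshold are illustrative assumptions, not Neuronpedia's actual implementation.

```python
def normalize(explanation: str) -> str:
    """Lowercase, strip punctuation, and crudely singularize each word."""
    words = []
    for w in explanation.lower().split():
        w = w.strip(".,;:!?\"'")
        if len(w) > 3 and w.endswith("s") and not w.endswith("ss"):
            w = w[:-1]  # "bananas" -> "banana"; crude, misses irregular plurals
        words.append(w)
    return " ".join(words)

def is_near_duplicate(new: str, existing: str) -> bool:
    """Treat two explanations as duplicates if their normalized token sets
    overlap heavily (Jaccard similarity at or above a threshold)."""
    a, b = set(normalize(new).split()), set(normalize(existing).split())
    if not a or not b:
        return False
    return len(a & b) / len(a | b) >= 0.8

def submit(new: str, existing: list[str]) -> str:
    """Auto-upvote an existing explanation instead of storing a near-copy."""
    for e in existing:
        if is_near_duplicate(new, e):
            return f"upvoted existing: {e}"
    existing.append(new)
    return "stored new explanation"
```

A production version would likely swap the Jaccard check for an embedding or GPT-4-based similarity judgment, which is where the "not too loose" tuning concern comes in.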
Please keep the feedback coming and join the discord if you’d like to keep updated.
Hi Adam and thanks for your feedback / suggestion. Residual Viewer looks awesome. I have DMed you to chat more about it!
Thanks Callum and yep we’ve been extensively using SAE-Vis at Neuronpedia—it’s been extremely helpful for generating dashboards and it’s very well maintained. We’ll have a method of directly importing to Neuronpedia using the exports from SAE-Vis coming out soon.
Apparently one or more anonymous users got really excited and ran a bunch of simultaneous searches while I was sleeping, triggering this open tokenizer bug and causing our TransformerLens server to hang and crash. This caused some downtime.
A workaround has been implemented and pushed.
Great work Adam, especially on the customizability. It’s fascinating clicking through various types and indexes to look for patterns, and I’m looking forward to using this to find interesting directions.
Yes, this is a great idea. I think something other than "skip" is needed, since skipping doesn't declare "this neuron has too many meanings or doesn't seem to do anything", which is actually useful information.
Hi Nathan, thanks for playing and pointing out the issue. My apologies for the inappropriate text.
Half the text samples are from OpenWebText, which is scraped web data that GPT-2 was trained on. I don't know the exact details, but I believe some of it came from Reddit and other places.
If you DM me the neuron's address next time you see one, I can start compiling a filter. I will also try to look for an open-source library to classify text as safe or unsafe.
My apologies again. This is a beta experiment, thanks for putting up with this while I fix the issues.
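The interim filter mentioned above could start as a simple blocklist check before a real classifier library is chosen. This is a hypothetical sketch: the `UNSAFE_WORDS` entries are placeholders standing in for terms compiled from user reports, not a real list.

```python
# Placeholder blocklist; in practice this would be compiled from user reports.
UNSAFE_WORDS = {"badword1", "badword2"}

def is_safe(text: str) -> bool:
    """Return False if any token in the text matches the blocklist."""
    tokens = {w.strip(".,;:!?\"'").lower() for w in text.split()}
    return tokens.isdisjoint(UNSAFE_WORDS)
```

A wordlist check like this misses context-dependent content, which is why an actual classification library would be the longer-term fix.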
EDIT: this update was pushed just now. it will warn you on your first vote to confirm that you want to vote.
Thanks for playing, Chris! I'll work on the voting thing. I'll probably just add a first-timer's warning on your first vote to ensure that you want to vote for that explanation.
FYI—if you want to unvote, just go to your profile (neuronpedia.org/user/[username]), click the neuron you voted for, and click to unvote on the left side.
re: polysemanticity- have a big tweak to the game coming up that may help with this! i hope to get it out by early next week.
lol thanks. i can’t believe the link has been broken for so long on the site. it should be fixed in a few seconds from now. in the meantime if you’re interested: https://discord.gg/kpEJWgvdAx
Hey Jacob + Philippe,
Hope you all don’t mind but we put up layer 8 of your transcoders onto Neuronpedia, with ~22k dashboards here:
https://neuronpedia.org/gpt2-small/8-tres-dc
Each dashboard can be accessed at their own url:
https://neuronpedia.org/gpt2-small/8-tres-dc/0 goes to feature index 0.
You can also test each feature with custom text:
Or search all features at: https://www.neuronpedia.org/gpt2-small/tres-dc
An example search: https://www.neuronpedia.org/gpt2-small/?sourceSet=tres-dc&selectedLayers=[]&sortIndexes=[]&q=the%20cat%20sat%20on%20the%20mat%20at%20MATS
Unfortunately I wasn’t able to generate histograms, autointerp, or other layers for this yet. Am working on getting more layers up first.
Verification
I did spot checks of the first few dashboards and they seem to be correct. Please let me know if anything seems wrong or off. I am also happy to delete this comment if you do not find it useful or for any other reason—no worries.
Please let me know if you have any feedback or issues with this. I will be also reaching out directly via Slack.