Nowadays I learn about papers through Twitter threads, Slack channels, and going to talks / reading groups. All of these act as filters for true signal amidst the sea of noise.
Are there particular sources, e.g. Twitter accounts, that you would recommend following? For other readers (I know Daniel already knows this one), the #papers-running-list channel on the AI Alignment Slack is a really good ongoing curation of AIS papers.
One source I’ve recently added and recommend is subscribing to individual authors on Semantic Scholar (e.g. here’s an author page).
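If you'd rather pull those updates programmatically than rely on the site's alerts, here is a minimal sketch using the public Semantic Scholar Graph API; the author ID below is a placeholder, not one from this thread.

```python
# Minimal sketch: list an author's recent papers via the Semantic Scholar Graph API.
# AUTHOR_ID is a hypothetical placeholder; real IDs appear in an author's page URL
# or can be found via the /graph/v1/author/search endpoint.
import requests

AUTHOR_ID = "1234567"  # replace with a real Semantic Scholar author ID

resp = requests.get(
    f"https://api.semanticscholar.org/graph/v1/author/{AUTHOR_ID}/papers",
    params={"fields": "title,year,url", "limit": 20},
    timeout=10,
)
resp.raise_for_status()
for paper in resp.json()["data"]:
    print(paper.get("year"), "-", paper["title"])
```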
Hmm, I don’t think there are people I can single out from my following list as having high individual impact. IMO it’s more that the algorithm has picked up on my pattern of engagement and now gives me great discovery.
For someone else to bootstrap this process and give maximum signal to the algorithm, the best thing to do might just be to follow a bunch of AI safety people who:
post frequently
post primarily about AI safety
have reasonably good takes
Some specific people that might be useful:
Neel Nanda (posts about way more than mech interp)
Dylan Hadfield-Menell
David Duvenaud
Stephen Casper
Harlan Stewart (nontechnical)
Rocket Drew (nontechnical)
I also follow several people who signal-boost general AI stuff.
Scaling lab leaders (Jan Leike, Sam A, Dario)
Scaling lab engineers (roon, Aidan McLaughlin, Jason Wei)
Hugging Face team leads (Philipp Schmid, Sebastian Raschka)
Twitter influencers (Teortaxes, janus, near)