I’ve read a lot of your posts in the past and find you to be reliably insightful. As such, I find it really interesting that, with an IQ in the 99th percentile (or higher), you still initially thought you weren’t good enough to do important AI safety work. While I haven’t had my IQ properly tested, I did take the LSAT and got a 160 (80th percentile), which probably corresponds to an IQ of merely 120ish. I remember reading a long time ago that the average self-reported IQ of LessWrongers was 137, which, combined with the extremely rigorous posting style of most people on here, was quite intellectually intimidating (and still is, to an extent).
This makes me wonder if there’s much point in my own efforts to move the needle on AI safety. I’ve interviewed with AI safety orgs in the past and not gotten in, and occasionally done some simple experiments with local LLMs and activation vectors (I used to work in industry with word vectors, so I know a bunch about that space), but actually landing any sort of position or publishable result seems to be very hard. I’ve often thought that any resources that went to me would be better spent on a counterfactual, more talented researcher/engineer, since AI safety seems more funding-constrained than talent-constrained.