I’m interested in working on AI alignment, but I doubt I’m clever enough to make any meaningful contribution, so how hard is it to work on AI alignment? I’m currently a high school student, so I could basically plan my whole life so that I end up as a researcher, a software engineer, or something else. Since alignment is very difficult and very intelligent people are already working on it, it seems like I would have to be some kind of math/computer/ML genius to help at all. I’m definitely above average (my IQ is around 121, though I know the limitations of IQ as a measurement and that it’s not that important), and in school I’m pretty good at maths and the other sciences, but not even the best in my class of 25. So my question is basically: how clever does one have to be to contribute to AGI alignment?
I don’t know, I’m replying here with my priors from software development.
TL;DR: Do something that is
- Mostly useful (software/ML/math/whatever are all great, and there are others too; feel free to ask)
- A good fit for you, so you’ll enjoy and be curious about your work, and not burn out from frustration or because someone told you “you must take this specific job”

Get mentorship so that you’ll learn quickly, and this will almost certainly be useful somehow.
Main things my prior is based on:
- EA in general, and AI alignment specifically, need lots of different “professions”. We probably don’t want everyone picking the number-one profession and nobody doing anything else; we probably want each person doing whatever they’re a good fit for.
- The amount we “need” is going up over time, not down. I can imagine it going up much more, but I can’t really imagine it going down. In other words, I mostly assume that whatever we need today (which is quite a lot) will also be needed in a few years, so there will be lots of good options to pick from.
Hi Tiuto,
consider skilling up in ML to become an ML engineer or ML researcher. If it’s still possible, try to join the best engineering school in your region, then join your local EA group and start community building to nudge your smart friends towards AI safety. ML engineering does not require a genius-level IQ.
I’m an ML engineer myself; you can DM me with further questions. I’m far from a genius, and I’ve never been the best in my class, but I’m currently able to contribute meaningfully.
Hi, thanks for the advice.
Do you, or other people, know why your comment is getting downvoted? Right now it’s at −5, so I have to assume the general LW audience disagrees with your advice. Presumably people think it’s really hard to become an ML researcher? Or do they think we already have enough people in ML, so we don’t need more?