I am (was) an X% researcher, where X < Y. I wish I had given up on AI safety earlier. I suspect it would've been better for me if AI safety resources had explicitly said things like "if you're below Y, don't even try," although I'm not sure I would've believed them. Now I'm glad that I'm no longer trying to do AI safety and instead work at a well-paying, relaxed job doing practical machine learning. Pushing too many EAs into AI safety will lead to those EAs suffering much more, which is what happened to me. I don't want that to happen to others, so I don't want the AI Alignment community to stop saying "you should stay if and only if you're better than Y."
Actually, I wish there were more selfish-oriented resources for AI Alignment. With normal universities and jobs, people analyze how to get in, have a fulfilling career, earn good money, not burn out, etc. As a result, people can read this material and properly assess whether it makes sense for them to pursue those jobs or universities for their own good. But with a career in AI safety, this is not the case. All the resources look out not only for the reader, but also for the whole EA project. I think this can easily burn people out.
In 2017, I remember reading 80K and thinking I was obviously unqualified for AI alignment work. I am glad that I did not heed that first impression. The best way to test goodness-of-fit is to try thinking about alignment and see if you’re any good at it.
That said, I apparently am the only person of whom [community-respected friend of mine] initially had an unfavorable impression that later became strongly positive.
Sorry to hear that you didn’t make it as an AI Safety researcher, but thank you for trying.
You shouldn't feel any pressure, but have you considered getting involved in another way, such as a) helping to train people trying to break into the field, b) providing feedback on people's alignment proposals, or c) assisting in outreach (this one depends more on personal fit, and it's easier to do net harm)?
I think it’s a shame how training up in AI Safety is often seen as an all-or-nothing bet, when many people have something valuable to contribute even if that’s not through direct research.
Philip, were the obstacles that made you stop technical (e.g., after your funding ran out, you tried to get new funding or a job in alignment but couldn't) or psychological (e.g., you worried that you weren't good enough)?
Oh man.
Yeah, this really sucks. I still think having more people try and fail is preferable to telling people not to try.