What is the “great personal cost” of shifting from AI capabilities to safety? Sure, quitting one’s frontier lab job to become an independent researcher means taking a pay cut, but that’s an opportunity cost, not really an enormous sacrifice. It’s not like any frontier labs would try to claw back your equity … again.
I’ve seen somewhere that (some) people at AI labs are thinking in terms of shares of the future lightcone, not just money.
If most of your friends are capabilities researchers who aren’t yet convinced that their work is negative EV, it might be pretty awkward when they ask why you’ve switched to safety.
There’s a big prestige drop (in many people’s minds, such as one’s parents’) from being at a place like OpenAI (perceived by many as a group made up of the best of the best) to being an independent researcher. (“What kind of a job is that?!”)
There’s also having to let go of sunk costs (knowledge and skills built up for capabilities research) and invest in a bunch of new human capital needed for safety research.