Have you thought about engineers at frontier labs, FAANG, or other AI-intensive companies (Perplexity, Hume.ai, etc.)?
At OpenAI or Anthropic, engineers are too high-agency to credibly claim they'll work on products or operations without furthering the company's central mission.
FAANG is harder to answer, as those companies play outsized roles in tech career paths and run sprawling businesses. At a FAANG frontier division, you can make that claim more plausibly, but I'm not sure you'd want to navigate career growth there while avoiding anything that helps SI research.
At anything else, there are sharply diminished returns to withholding your career from all AI engineering, and you can plausibly say you care about the mission of “helping developers code” or “helping businesses use empathetic-sounding voice bots” without the mission being “create SI”.
I do think there’s value in very simple, tractable, broadly applicable guidance. “STEM workers should avoid non-FAANG frontier labs and FAANG AI divisions” sounds pretty good.
Note I’m not informed enough to agree/disagree strongly with the article; the above is extending the article’s conclusion to all STEM jobs.
I chiefly advise against work that brings us closer to superintelligence. I aim this advice primarily at those who want to make sure AI goes well. For careers that do other things, and for those who aren’t aiming their careers for impact, this post mostly doesn’t apply. One can argue about secondary effects and such, but in general, mundane utility is a good thing and it’s fine for people to get paid for providing it.