This seems like it’s probably a misunderstanding. With the exception of basically just MIRI, AI alignment didn’t exist as a field when DeepMind was founded, and I doubt Sam Altman ever actively sought employment at an existing alignment organization before founding OpenAI.
Yeah, in hindsight he probably meant that they got interested in AI because of AI safety ideas, then decided to go into capabilities research after upskilling. Then again, how else are you going to get funding? Charity? It seems that a lot of alignment work, especially the conceptual kind we really need to make progress toward an alignment paradigm, is just a cost for an AI company with no immediate upside. So any AI alignment org would need to pivot to capabilities research if it wanted to scale its alignment efforts.
Keep in mind that “will go on to do capabilities work” isn’t the only -EV outcome; each time you add a person to the field you increase the size of the network, which always has costs and doesn’t always have benefits.
I strongly disagree. The field has a deficit of ideas and needs way more people. Of course inefficiencies will increase, but I can’t think of any other field that progressed faster explicitly because its members made an effort to limit recruitment. Note that even very inefficient fields like medicine make faster progress when more people are added to the network—it would be very hard to argue, for example, that a counterfactual world where no one in China did medical research would have made more progress. My personal hope is for 1 million people working on technical alignment, which implies $100 billion+ in annual funding. 10x that would be better, but I don’t think it’s realistic.