To me, this seems consistent with just maximizing shareholder value.
Salaries and compute are the largest expenses at big AI firms, and “being the good guys” lets you hire the best people at significant discounts. To my understanding, one of OpenAI’s greatest early successes was hiring top talent cheaply because they were “the non-profit good guys who cared about safety”. Later, people like John Schulman left OpenAI for Anthropic, citing a “desire to deepen my focus on AI alignment”.
As for people thinking you’re a potential x-risk, the downsides seem mostly solved by “if we didn’t do it, somebody less responsible would”. AI safety policy interventions could also create strong moats against competition, especially for the leading firm(s). Furthermore, much of the “AI alignment research” these firms invest in serves to prevent PR disasters (e.g., “terrorist used ChatGPT to design a dangerous bio-weapon”), and most of the “interpretability” work they fund is close enough to ordinary R&D that they would invest in it anyway to improve capabilities.
This might sound overly pessimistic. However, it can be viewed positively: there is significant overlap between the interests of big AI firms and the AI safety community.
To me, this seems consistent with just maximizing shareholder value. … “being the good guys” lets you get the best people at significant discounts.
This is pretty different from my model of what happened with OpenAI or Anthropic—especially the latter, where the founding team left huge equity value on the table by departing (OpenAI’s equity had already appreciated something like 10x between the first MSFT funding round and EOY 2020, when they departed).
And even for Sam and OpenAI, this would seem like a wild strategy for pursuing wealth, for someone who already had Sam’s pre-OpenAI network and opportunities?
With the conversion to a for-profit and Sam receiving equity, it seems like the strategy will pay off. However, this might be hindsight bias, or I might otherwise have an oversimplified view.