I don’t think that’s actually true at all; Anthropic was explicitly a scaling lab when it was founded, for example, and DeepMind does not seem like it was “an attempt to found an AI safety org”.
It is true that Anthropic/OpenAI/DeepMind featured AI safety people supporting the org, and that safety was indeed a motivation behind them, but the people involved knew they were also going to build SOTA AI models.