But other labs are even less safe, and not far behind.
Yes, alignment is largely an unsolved problem on which progress is an exogenous function of time. But to a large extent we’re safer with safety-interested labs developing powerful AI: this would boost model-independent alignment research, make particular critical models more likely to be aligned/controlled, help generate legible evidence that alignment is hard (insofar as such evidence exists), and maybe enable progress to pause at a critical moment.
I think that to the extent that other labs are “not far behind” (such as FAIR), this is substantially an artifact of them being caught up in a competitive arms race. Catching up to “nearly SOTA” is usually much easier than “advancing SOTA”, and I’m fairly persuaded by the argument that the top 3 labs are indeed ideologically motivated in ways that most other labs aren’t, and there would be much less progress in dangerous directions if they all shut down because their employees all quit.
And those employees would start developing web apps at Microsoft, or go to xAI, Inflection, or Chinese labs?
Am I in crazy town? Did we not see what happened when there was an attempt to merely slightly modify OpenAI, let alone shut it down?