I actually agree with Neel that, in principle, an AI lab could race for AGI while acting responsibly and IMO not violating deontology.
Releasing models exactly at the level of their top competitor, immediately after the competitor’s release and a bit cheaper; talking to governments and lobbying for regulation; having an actually robust governance structure; and not doing anything that increases the chance of everyone dying.
This doesn’t describe any of the existing labs, though.