My own take is that I do endorse a version of the "pausing now is too late" objection. More specifically, I think that for most purposes, we should assume pauses will come too late to be effective when thinking about technical alignment. A big part of the reason is that I don't think we will be able to convince many people that AI is powerful enough to need governance before they see massive job losses firsthand, and by that point we will be well past the point of no return for controlling AI as a species.
In particular, I think Eliezer is probably vindicated/made a correct prediction about how people would react to AI in "There's No Fire Alarm for AGI" (more accurately: the fire alarm will go off way too late to serve as a fire alarm).
More here:
https://www.lesswrong.com/posts/BEtzRE2M5m9YEAQpX/there-s-no-fire-alarm-for-artificial-general-intelligence