The article seems to assume that the primary motivation for wanting to slow down AI is to buy time for institutional progress, which seems to me an incorrect interpretation of the motivation. Most people I hear talking about buying time are talking about buying time for technical progress in alignment.
I think you need both? That is—I think you need both technical progress in alignment, and agreements and surveillance and enforcement such that people don’t accidentally (or deliberately) create rogue AIs that cause lots of problems.
I think historically many people imagined “we’ll make a generally intelligent system and ask it to figure out a way to defend the Earth,” which seems less plausible to me now. It seems more like we need to have systems already in place playing defense, which ramp up faster than the systems playing offense.