I think I would feel better about this if control advocates were clear that their strategy is two-pronged, and included somehow getting some kind of pause on ASI development. Then they would at least be actively working to bring about what I consider one of the key conditions under which control substantially reduces doom.
Idk, I think we’re pretty clear that we aren’t advocating “do control research and don’t have any other plans or take any other actions”. For example, in the Implications and proposed actions section of “The case for ensuring that powerful AIs are controlled”, we say a variety of things about how control fits into our overall recommendations for mitigating AI risk, most relevantly:
We expect it to be impossible to ensure safety without relying on alignment (or some other non-black-box strategy) when working with wildly superintelligent AIs because black-box control approaches will depend on a sufficiently small gap in capabilities between more trusted labor, both humans and trusted weaker AIs, and our most capable AIs. For obvious reasons, we believe that labs should be very wary of building wildly superintelligent AIs that could easily destroy humanity if they tried to, and we think they should hold off from doing so as long as a weaker (but still transformatively useful) AI could suffice to solve critical problems. For many important uses of AI, wildly superintelligent AI likely isn’t necessary, so we might be able to hold off on building wildly superintelligent AI for a long time. We think that building transformatively useful AI while maintaining control seems much safer and more defensible than building wildly superintelligent AI.
We think the public should consider asking labs to ensure control and avoid building wildly superintelligent AI. In the slightly longer term, we think that requirements for effective control could be a plausible target of regulation or international agreement.