My point is that the 10-30x AIs might enable more effective coordination around AI risk than humans alone could achieve, and in particular more effective coordination than currently seems feasible in the relevant timeframe (when the use of those 10-30x AIs is not taken into account). Saying “labs” doesn’t make this distinction explicit.
Yep, this is what I meant by “labs can increase US willingness to pay for nonproliferation.” Edited to clarify.