I’m still quite confused about why you believe that a long-term pause is viable given the potential for actors to take unilateral action and the difficulties in verifying compliance.
Another possibility that could be included in that diagram is the merging of various national/coalitional AIs.
The viability of a pause depends on a number of factors: how many actors could take some dangerous action, how hard it would be for them to do so, how detectable it would be, and so on. These factors can change. For example, if the world got rid of advanced AI chips entirely, dangerous AI activities would take much longer and be far more detectable. We discuss this in the research agenda; there are various ways to extend “breakout time”, and these methods could be important to long-term stability.