One difference is that keeping AI a tool might be a temporary strategy until you can use the tool AI to solve whatever safety problems apply to non-tool AI. In that case the coordination problem isn't as difficult, because you might only need the smallish pool of leading actors to coordinate for a while, rather than everyone to coordinate indefinitely.
Sane pausing must similarly be temporary, gated by theory and the experiments it endorses. But pausing is easier to pull off than persistently-tool AI, since a pause sits further from dangerous capabilities: it is far less ambiguous when someone takes steps outside the current regime (such as gradual disempowerment). RSPs, for example, are the strategy of being extremely precise so that you stop just before the risk of falling off the cliff becomes catastrophic, and not a second earlier.
Agree that pauses are a clearer line. But even if a pause and a tool-limit are both temporary, we should expect the full pause to have to last longer.