Welcome, Gary! Glad to have you posting here.

One thing I notice in your post is two possibly different senses of pausing AI. Toward the end, you write:

> Perhaps we should pause widespread rollout of Generative AI in safety-critical domains — unless and until it can be relied on to follow rules with significantly greater reliability.

My sense is that when folks suggest a pause of AI, they usually mean pausing the frontier of AI development (that is, not continuing to develop more capable systems). I don't usually understand that as a suggestion to stop the rollout of current systems, which I think is closer to what you're describing here?