I agree with this, and I would guess that the vast majority of people speaking in favor of a coordinated ban/pause/slowdown would agree as well[1].
An important aspect of the issue is that, in a state of great uncertainty about where the actual “bright lines”/”phase transitions”/”points of no return” lie in reality, we need to take a “pragmatically conservative” stance and start with a definition that covers even “not obviously X-risk-generating AI”: one that is “overly broad from God’s point of view” but makes sense given our limited knowledge. Then, as our uncertainty gradually resolves and our understanding grows, we will be able to gradually trim and refine it, so that it becomes a better pointer at “X-risk-generating AI”.
And then there is the issue of time. Generally and most likely, the longer we wait for a “more precisely accurate” definition, the closer we get to the line (or some of the lines), the less time we’ll have to implement it, and the more difficult implementation will be (e.g. because the pro-all-AI-progress lobby may grow in power over time, AI parasitism or the like might get society addicted before the ~absolute point of no return, etc.).
Obviously, there are tradeoffs here: e.g., over time, society (or some specific groups) might get annoyed by AI labs doing various nasty things, which would make regulation easier to pass. More importantly, more precise regulations, using definitions grounded in more legible and more mature knowledge, are generally more likely to pass. But I still think that, generally, the earlier the better, certainly in such “minimal” areas or types of endeavor as awareness-raising and capacity-building.
[1] … at least when pressed to think about the issue clearly.
Thank you for the posts you’ve linked here! I agree that we can’t wait for the perfect definition, or for consensus on definitions within the AI Safety community, before taking action. That’s precisely why I wanted this post to help others think more clearly about the issue!