Artificial General Intelligence (AGI) is an AI that can do anything an individual human can do, especially on economically productive tasks.
Artificial Superintelligence (ASI) is an AI that can do far more than all of humanity working together.
These definitions wouldn’t be suitable for a legal purpose, I imagine, since they lack technical precision. Still, in my mind there is a very big difference between the two, and an observer would be unlikely to mislabel a system as ASI when it is actually AGI, or vice versa.
Yet, in my mind, one of the biggest risks of AGI is that it is used to build ASI, which is why I still agree with your post.