So, let’s start by defining AGI XD. Also, you can find many examples on LessWrong of people referring to AI that leads to human extinction as “AGI/ASI”, with the “/”. I do not know to what extent there is a firm distinction in everyone’s minds about the type of AI they want to ban.
Artificial General Intelligence (AGI) is an AI that can do anything an individual human can do (especially on economically productive metrics).
Artificial Superintelligence (ASI) is an AI that can do much more than all of humanity working together.
These definitions wouldn’t be suitable for legal purposes, I imagine, since they lack “technical” precision. However, in my mind, there is a very big difference between the two, and an observer would be unlikely to mislabel a system as ASI when it is actually AGI, or vice versa.
Yet one of the biggest risks of AGI, in my view, is that it could be used to build ASI, which is why I still agree with your post.