I like most of this post, but:
AGI != ASI. Defining AGI, and then only fining into oblivion, ex post, any company that builds it, may be enough to prevent the death of humanity. I would put pretty good odds on that being enough, as long as violations were reliably detected and the penalty strongly enforced.
I would still support regulation preventing AGI; I just want the terminology kept straight. ASI is the thing that IABIED is about.
So, let’s start with defining AGI XD. Also, you can find many examples on LessWrong of people referring to the AI that leads to human extinction as “AGI/ASI”, slash and all. I do not know to what extent there is a firm distinction in everyone’s minds when they think about the type of AI they want to ban.
Artificial General Intelligence (AGI) is an AI that can do anything an individual human can do (especially on economically productive tasks).
Artificial Superintelligence (ASI) is an AI that can do much more than all of humanity working together.
These definitions wouldn’t be suitable for legal purposes, I imagine, in that they lack “technical” precision. However, in my mind, there is a very big difference between the two, and an observer would be unlikely to mislabel a system as ASI when it is actually AGI, or vice versa.
That said, one of the biggest risks of AGI is that it gets used to build ASI, which is why I still agree with your post.