That’s a cool way to frame damage risks, but I think your distribution for AI damage applies to ASI, not AGI. It’s very reasonable that an AGI-based system could cause the kind of damage I’m talking about.
Even if you believe that as soon as we achieve AGI we’ll accelerate to ASI (because AGI is by definition self-improving), it still takes time to train a model, and research is slow. I hope the window between AGI and ASI is large enough for such a “Hiroshima event” to occur, so that humanity wakes up to the risks of misaligned AI systems.
PS: Sorry for the late response; I was offline for a couple of days.
No need to say sorry for that! On a forum, there is no expectation of receiving a reply. If every reply obligated the recipient to make another reply, comment chains would drag on forever.
You can freely wait a year before replying.
I’m worried that once a “Hiroshima event” occurs, humanity won’t have another chance. If the damage is caused by the AGI/ASI taking over infrastructure, then the more power it obtains, the easier it becomes for it to obtain even more, so it won’t stop at any scale.
If the damage is caused by bad actors using an AGI to invent a very deadly technology, there is a decent chance humanity can survive, but it’s very uncertain. A technology can never be uninvented, and more and more people will know about it.
Or more! (I was delighted to receive this reply.)