These considerations and more have led some of the most cited AI researchers of all time—such as Yoshua Bengio and Geoffrey Hinton—to say that it is, at the very least, “not inconceivable” that AI ends up “wiping out humanity”.
In 2024, Hinton put his own estimate of the existential threat at more than 50% (at 38:07 in the video), h/t TsviBT:

“So I actually think that the risk is more than 50% (of the existential threat), but I don’t say that because there are other people who think it’s less. And I think the sort of plausible thing that takes into account the opinions of everybody I know is sort of 10 to 20%. We certainly have a good chance of surviving it, but we better think very hard about how to do that.”