It’s possible that we won’t get anything that deserves the name ASI (artificial superintelligence) or TAI (transformative AI) until, say, 2030. And a lot can change in more than five years!
The current panic seems excessive. We do not live in a world where all reasonable people expect the emergence of artificial superintelligence in the next few years and the extinction of humanity soon after. Yes, the situation is very worrying, and yes, this is the most likely cause of death for all of us in the coming years. But I don’t see how anyone can be so certain of a bad outcome that they would consider humanity’s survival a miracle.