Thus, a bioweapon is actually quite unlikely to lead to a clean annihilation of the human population in the way the AI 2027 scenario describes. Now, the results of everything I describe would certainly be far from a clean victory for the humans as well. No matter what we do (except perhaps the “upload our minds into robots” option), a full-on AI bio-war would still be extremely dangerous. However, there is value in meeting a bar much lower than a clean victory for humans.
I do sorta feel in my gut that whatever really happens will be a lot less… clean… than what AI 2027 describes. History is usually messy and chaotic. And we were under pressure to keep things simple and the word count low.
So yeah, I could imagine something more like a messy war than a clean bioweapon decapitation strike. The situation still seems pretty grim for humanity if it gets to the point where the world’s first ASI is trusted by the corporation that built it and by the US government, yet is in fact deceptive/misaligned. At that point the ASI is the superior player AND it holds the better hand of cards.
Whenever I see discussions of the actual mechanisms by which an ASI might act against humanity, they seem like proxy arguments for/against the underlying position “ASI will/won’t be that much smarter than humans.”
Can it be complex without being messy?