I basically agree with the 1st and 2nd points. I somewhat disagree with the 3rd point: I do consider it plausible that ASIs develop goals incompatible with human survival, but I don't think it's very likely. The 4th point is right, but the argument given for it is locally invalid, because processor clock speeds are not how fast AIs think. I also basically agree that sufficiently aggressive policy responses can avert catastrophe, but I don't agree with the premise that "wait and see" is utterly unviable for AI tech, and I disagree with the premise that ASI is a global suicide bomb.