In the next few decades, the odds of ASI killing/disempowering us are tiny
I found this point surprising. Is this because of long timelines to ASI?
Regardless, while it seems very hard to implement well, I’m happy to publicly say that I am in favour of a well-implemented preemptive ban on dangerous ASI.
Yes, mostly.
I expect existentially dangerous ASI to take longer to arrive than ASI in general, which will take longer than AGI, which will take longer than merely powerful AI. Killing everyone on Earth is very hard to do, few are motivated to do it, and many will be motivated to prevent it as ASI’s properties become apparent. So I think the odds are low. And I’ll emphasize that these are my odds including humanity’s responses, not the odds in a counterfactual world where we sleepwalk into oblivion without any response.