The risks that AGI poses for humanity are serious, but they should not be assessed without considering that it is the most promising path out of the age of acute existential risk. Those who support a ban on this technology should at least propose their own alternative exit strategy.
Building ASI to reduce existential risk is only net-positive if the step risk associated with ASI is lower than the state risk of the status quo accumulated over several decades, which is doubtful.
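To make that comparison concrete (a minimal sketch with illustrative numbers, not figures from the thread): let $r$ be the annual state risk of the status quo, $T$ the number of years that risk would otherwise persist, and $s$ the one-time step risk of building ASI, assuming post-ASI state risk is negligible. The step is net-positive exactly when

$$1 - s > (1 - r)^T \iff s < 1 - (1 - r)^T.$$

For instance, with $r = 0.2\%$ per year and $T = 50$ years, $1 - 0.998^{50} \approx 9.5\%$, so building ASI would need less than roughly a 9.5% chance of catastrophe to come out ahead under these assumptions.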
I am very happy this is now mainstream:
https://forum.effectivealtruism.org/posts/6j6qgNa3uGmzJEMoN/artificial-intelligence-as-exit-strategy-from-the-age-of
https://forum.effectivealtruism.org/posts/uHeeE5d96TKowTzjA/world-and-mind-in-artificial-intelligence-arguments-against
Bostrom is the absolute best.