By “x-risk” from AI that you currently disbelieve, do you mean extinction of humanity, disempowerment-or-extinction, or long-term loss of utility (normative value)? And is the claim time-scoped, such as “in the next 20 years”?
Even though Bostrom’s “x-risk” is putatively better defined than “doom”, in practice it suffers from similar ambiguities, so strong positions such as 98+% or ≤2% doom/x-risk (in this case from AI) become more meaningful if they specify what is being claimed in more detail than just “doom” or “x-risk”.
I mean basically all the conventionally conceived dangers.
(Sorry.) Does this mean (1) more specifically a eutopia that is not disempowerment (in the mainline scenario, or “by default”, with how things are currently going), (2) that something else likely kills humanity first, so the counterfactual impact of AI x-risk vanishes, or (3) high long-term utility (normative value), possibly in some other form?