I mean basically all the conventionally conceived dangers.
(Sorry.) Does this mean (1) more specifically, eutopia that is not disempowerment (in the mainline scenario, or “by default”, given how things are currently going); (2) that something else likely kills humanity first, so the counterfactual impact of AI x-risk vanishes; or (3) high long-term utility (normative value), possibly in some other form?