I think part of the difficulty is that it’s not easy to imagine or predict what “the future going really well without AI takeover” would look like. Assuming AI continues to exist and make progress, humans would probably have to change drastically (in lifestyle if not in body/mind) to stay relevant, and it’s hard to predict what that would look like or whether any specific change is a good idea — unless you don’t think things going really well requires human relevance in the first place.
Edit: in contrast, as others have said, avoiding AI takeover is a clearer goal with clearer paths and endpoints. “The future going well” spans a potentially indefinite length of time, which makes it hard to quantify, hard to coordinate over, and hard even to reach consensus on what is desirable.