I agree with the distinction you make and think it’s nice to disentangle them. I’m most interested in the “Is AI x-risk the top priority for humanity?” question. I’m fine with all of the approaches to reducing AI x-risk being bundled together here, because I’m just asking “is working on it (in *some* way) the highest priority?”
You’re right. I initially put this in the answer category, but I really meant it as clarification. I assumed that the personal question was more important since the humanity question is not very useful (except maybe to governments and large corporations).
Well… it’s also pretty useful to individuals, IMO, since it affects what you tell other people when discussing cause prioritization.