There are two questions which I think are important to distinguish:
Is AI x-risk the top priority for humanity?
Is AI x-risk the top priority of some individual?
The first question may be extremely important in a general sense. However, the second question is, I think, more useful, since it provides actionable information to specific people. Of course, the difficulty of answering the second question is that it depends heavily on individual factors, such as:
The ethical system the individual is using to evaluate the question.
The specific talents and time constraints of the individual.
I also partially object to treating AI x-risk as a single bundle. There are many ways people can influence the development of artificial intelligence:
Technical research
Social research to predict and intervene on governance for AI
AI forecasting to help predict which types of AI will end up existing and what their impact will be
Even within technical research, there are generally considered to be several distinct approaches:
Machine learning research with an emphasis on creating systems that could scale to superhuman capabilities while remaining aligned. This would include, but is not limited to:
Paul Christiano-style research, such as extending iterated distillation and amplification
ML transparency
ML robustness to distributional shifts
Fundamental mathematical research that could help dissolve confusion about AI capabilities and alignment. This includes:
Uncovering insights into decision theory
Discovering the necessary conditions for a system to be value aligned
Examining how systems could be stable upon reflection, such as after self-modification
I agree with the distinction you make and think it’s nice to disentangle them. I’m most interested in the “Is AI x-risk the top priority for humanity?” question. I’m fine with all of the approaches to reducing AI x-risk being bundled here, because I’m just asking “is working on it (in *some* way) the highest priority?”
You’re right. I initially put this in the answer category, but I really meant it as a clarification. I assumed the personal question was more important, since the humanity question is not very useful (except maybe to governments and large corporations).
Well… it’s also pretty useful to individuals, IMO, since it affects what you tell other people when discussing cause prioritization.