Agreed, the terms aren’t clear enough. I could be called an “AI optimist”, insofar as I think a treaty preventing ASI is quite achievable. Some who think AI will wipe out humanity are also “AI optimists”, because they think that would be a positive outcome. We might both be “optimists”, and might even agree on what the outcome of superintelligence would be, yet hold very different positions. Optimism vs. pessimism is not a very useful axis for understanding someone’s views.
This paper uses the term “AI risk skeptics”, which seems nicely clear. I tried to invent a few terms for specific subcategories here, but they’re somewhat unwieldy. Nevin Freeman tried to figure out an alternative term for “doomer”, but the conclusion of “AI prepper” doesn’t seem great to me.