Agreed, the terms aren’t clear enough. I could be called an “AI optimist”, insofar as I think that a treaty preventing ASI is quite achievable. Some who think AI will wipe out humanity are also “AI optimists”, because they think that would be a positive outcome. We might both be optimists, and also agree on what the outcome of superintelligence could be, but these are very different positions. Optimism vs pessimism is not a very useful axis for understanding someone’s views.
This paper uses the term “AI risk skeptics”, which seems nicely clear. I tried to invent a few terms for specific subcategories here, but they’re somewhat unwieldy. Nevin Freeman tried to figure out an alternative term for “doomer”, but the conclusion of “AI prepper” doesn’t seem great to me.
Regarding “AI optimists,” I had not yet seen the paper currently on arXiv, but “AI risk skeptics” is indeed far more precise than “AI optimists.” 100 percent agreed.
Regarding alternatives to “AI pessimists” or “doomers,” Nevin Freeman’s term “AI prepper” is definitely an improvement. I guess I have a slight preference for “strategist,” like I used above, over “prepper,” but I’m probably biased out of habit. “Risk mitigation advocate” or “risk mitigator” would also work but they are more unwieldy than a single term.
The “Taxonomy on AI-Risk Counterarguments” post is incredible in its analysis, precision, and usefulness. I think that simply having some terminology is extremely useful, not just for dialogue, but for thought as well.
As we know, historically repressive regimes like the Soviet Union and North Korea have eliminated terms from the lexicon, to notable effect. (It’s hard for people to think about concepts for which they have no words.)
I think that discussing language, sharpening the precision of our language, and developing new terminology has the opposite effect, in that people can build new ideas when they work with more precise and more efficient building materials. Words definitely matter.