Agreed, the terms aren’t clear enough. I could be called an “AI optimist”, insofar as I think that a treaty preventing ASI is quite achievable. Some who think AI will wipe out humanity are also “AI optimists”, because they think that would be a positive outcome. We might both be optimists, and also agree on what the outcome of superintelligence could be, but these are very different positions. Optimism vs pessimism is not a very useful axis for understanding someone’s views.
This paper uses the term “AI risk skeptics”, which seems nicely clear. I tried to invent a few terms for specific subcategories here, but they’re somewhat unwieldy. Nevin Freeman tried to figure out an alternative term for “doomer”, but the conclusion of “AI prepper” doesn’t seem great to me.
I’m never a big fan of this sort of… cognitive rewiring? Juggling definitions? This post reinforces my bias, since it’s written from a place of very strong bias itself.
AI optimists think AI will go well and be helpful.
AI pessimists think AI will go poorly and be harmful.
It’s not that deep.
The post itself borders on insulting anyone who holds a different opinion from the author (who, no doubt, would prefer the label “AI strategist” over “AI extremist”). I considered going into the details of why, but honestly… productive discourse seems unlikely when it starts from a place where the “other side” is immediately compared to nationalists (?!) or extremists (?!!!).
I’m an AI optimist. I think AI will go well and will help humanity flourish, through both capabilities and alignment research. I think things will work out. That’s all.