Yeah, I think so. But since those people generally find AI less important (there’s both less of an upside and less of a downside), they tend to participate less in the debate. Hence there’s a bit of a selection effect hiding those people.
There are some people who arguably are in that corner who do participate in the debate, though—e.g. Robin Hanson. (He thinks some sort of AI will eventually be enormously important, but that the near-term effects, while significant, will not be at the level people on the right side think.)
Looking at the 2x2 I posted, I wonder if you could call the lower left corner something relating to “non-existential risks”. That seems to capture their views. It might be hard to come up with a catchy term, though.
The upper left corner could maybe be called “sceptics”.
Thanks for this thoughtful article.
It seems to me that the first and the second examples have something in common, namely an underestimate of the degree to which people will react to perceived dangers. I think this is fairly common in speculations about potential future disasters, and I have called it sleepwalk bias. It seems like something that one should be able to correct for.
I think there is an element of sleepwalk bias in the AI risk debate. See this post where I criticise a particular vignette.