I have noticed two important centers of AI capability denial, both of which involve highly educated people. One group consists of progressives for whom AI doom is a distraction from politics. The other group consists of accelerationists who only think of AI as empowering humans.
This does not refer to all progressives or all accelerationists. Most AI safety researchers and activists are progressives. Many accelerationists do acknowledge that AI could break away from humanity. But in both cases, there are clear currents of thought that deny, for example, that superintelligence is possible or imminent.
On the progressive side, I attribute the current of denial to a kind of humanism. First, their activism is directed against corporate power (etc) in the name of a more human society, and concern about AI doom just doesn’t fit the paradigm. Second, they dislike the utopian futurism which is the flipside of AI doom, because it reminds them of religion. The talking points which circulate seem to come from intellectuals and academics.
On the accelerationist side, it’s more about believing that pressing ahead with AI will just help human beings achieve their dreams. It’s an optimistic view and for many it’s their business model, so there can be elements of marketing and hype. The deepest talking points here seem to come from figures within the AI industry like Yann LeCun.
Maybe a third current of denial is the one which says superintelligence won't happen thanks to a combination of technical and economic contingencies: scaling has hit its limits, or the bubble is going to burst.
One might have supposed that religion would also be a source of capability denial, but I don’t see it playing an important role so far. The way things are going, the religious response is more likely to be a declaration that AGI is evil, rather than impossible.