It is an argument by induction based on a naive extrapolation of a historical trend.
This characterization could be a good first step toward constructing a convincing counterargument. Are there other examples of arguments by induction that simply extrapolate historical trends, where it is much more apparent that this form of reasoning is unreliable? To be intuitive, the example must not be too technical; e.g., “people claiming to have proved Fermat’s Last Theorem have always been wrong in the past (until Andrew Wiles came along)” would probably not work well.
There seems to be a clear pattern of various people downplaying AGI risk by framing it as mere speculation, science fiction, hysteria, unscientific, religious, and other variations on the idea that it does not rest on sound foundations, especially when it comes to claims of considerable existential risk. One way to respond is to point to existing examples of cutting-edge AI systems exhibiting unintended, or at least unexpected or unintuitive, behavior. Has anyone compiled a reference collection of such examples, suitable for grounding these speculations in empirical observations?
By “unintended” I roughly mean examples like the frequently cited video of a boat driving in circles to keep collecting points instead of finishing the race. By “unexpected/unintuitive” I have in mind examples like AlphaGo surpassing 3,000 years of accumulated human Go expertise in a very short time by playing against itself, clearly demonstrating the non-optimality of our cognition, at least in a narrow domain.