[Question] What are the best arguments against the outside view that AGI won’t be a huge deal, and thus that we survive?

Specifically, against the following view described by a comment:

There seems to be a lack of emphasis in this market on outcomes where alignment is not solved, yet humanity turns out fine anyway. Based on an Outside View perspective (where we ignore any specific arguments about AI and just treat it like any other technology with a lot of hype), wouldn’t one expect this to be the default outcome?

Take the following general heuristics:

If a problem is hard, it probably won’t be solved on the first try.

If a technology gets a lot of hype, people will think it’s the most important thing in the world even if it isn’t. At most, it will be important on the same level as previous major technological advancements.

People may be biased towards thinking that the narrow slice of time they live in is the most important period in history, but statistically this is unlikely.

If people think that something will cause the apocalypse or bring about a utopian society, historically speaking they are likely to be wrong.

This, if applied to AGI, leads to the following conclusions:

Nobody manages to completely solve alignment.

This isn’t a big deal, as AGI turns out to be disappointingly not that powerful anyway (or at most “creation of the internet” level influential, but not “disassemble the planet’s atoms” level influential).

I would expect the average person outside of AI circles to default to this kind of assumption.

Ideally, answers would explain why the outside view presented here is less favored by the evidence than the idea that AGI or PASTA will be a big deal, as popularized by Holden Karnofsky. Ideally, you could also estimate how much impact AI will have, say, this century.

Motivation: I’m asking this question because I notice an unstated assumption that AGI/AI will be a huge deal, and how big a deal it turns out to be would change virtually everything about how LW works, depending on the answer. I’d really like to know why LWers hold that AGI/ASI will be a big deal.