[Question] How seriously should we take the hypothesis that LW is just wrong on how AI will impact the 21st century?

This question is inspired by 1a3orn’s comment arguing that there are troubling signs of epistemic problems in LW’s alignment field.

I’ll quote the comment here to show what I mean:

I think that the above is also a good explanation for why many ML engineers working on AI or AGI don’t see any particular reason to engage with or address arguments about high p(doom).

When from a distance one views a field that:

  - Has longstanding disagreements about basic matters

  - Has theories—but many of the theories have not resulted in really any concrete predictions that differentiate from standard expectations, despite efforts to do so.

  - Will continue to exist regardless of how well you criticize any one part of it.

There’s basically little reason to engage with it. These are all also evidence that there’s something epistemically off with what is going on in the field.

Maybe this evidence is wrong! But I do think that it is evidence, and not-weak evidence, and it’s very reasonable for an ML engineer to not deeply engage with arguments because of it.

So I want to ask a question: How seriously should we take the hypothesis that LW is totally wrong on AI?

Specifically, this splits into several subquestions:

  1. What’s the chance that AI doesn’t have a major impact on the world by 2100?

  2. What’s the chance that AI does have a massive impact, but alignment turns out to be easy enough that standard ML techniques work?

  3. How well do LW’s epistemic processes work? Are there any changes you would make to them?

I welcome all answers, and I especially encourage critics of LW (and anyone giving negative answers) to answer at least one of the questions above.

Edit: For people who don’t have a specific scenario in mind, I’ll ask a more specific question. It doesn’t have to be answered, but any answers to it are appreciated, especially from critics of the “AI is significant” idea.

1a. What is the probability that the Explosion or Collapse scenario from Cold Takes happens by 2100?

Link to the scenarios below:

https://www.cold-takes.com/this-cant-go-on/