[Question] How did LW update p(doom) after LLMs blew up?

Here’s something which makes me feel very much as if I’m in a cult:

After LLMs became a massive thing, I've heard a lot of people raise their p(doom) on the basis that we're now in shorter timelines.

How have we updated p(doom) on the observation that LLMs are very different from the kind of AI that was hypothesized?

Firstly, it seems to me that it would be much more difficult to FOOM with an LLM, and much more difficult to create a superintelligence out of one in the first place. It also seems like getting LLMs to act creatively and reliably is going to be a much harder problem than making sure they aren't too creative.

LLMs often default to human wisdom on most topics, and the way we're developing them with AutoGPT, they can't even really think privately. If you had to imagine a better model of AI for a disorganized species to stumble into, could you get much safer than LLMs?

Maybe I've just not been looking in the right places to see how the discourse has changed, but it seems like we're spending all our weirdness points on preventing the training of a language model that, at the end of the day, will be only slightly better than GPT-4.

I will bet any amount of money that GPT-5 will not kill us all.