I don’t think the belief that godlike intelligence is necessary for human extinction via AI is actually a popular position among intellectually sophisticated AI doomers. It’s more that those people hold complex positions, and it’s easy for skeptics to frame a simplified version as “a popular position”.
Hang on, I don’t think I said that godlike intelligence was necessary for human extinction, and actually, I didn’t make any claim about human extinction at all. This post was just about the possibility of an intelligence explosion, and I think “AI will reach godlike levels of intelligence” is an accurate description of the AI 2027 position.
You can’t conclude from the fact that inference scaling happened that most AI improvements are due to scaling.
Did you read the cited link that you quoted? Toby Ord’s argument was pretty convincing to me. What do you disagree with?
When it comes to inference, it’s also worth noting that they found a lot of tricks to make inference cheaper. It’s not just more/better hardware.
Right, ending in about late 2024, which is why I specified (~late 2024) in “most recent gains”. It doesn’t seem like that trend has continued.