“Maybe a slight tweak to the LLM architecture, maybe a completely novel neurosymbolic approach.”
I give it ~70%, with caveats:

It won’t be neurosymbolic.

Also, I don’t see where the 2030 number is coming from. At this point my uncertainty is almost in the exponent again. Decades also seems plausible, though maybe at <50%.

It’s not clear that only one breakthrough is necessary.
Without an intelligence explosion, it’s around 2030 that scaling through increased funding runs out of steam and slows to the speed of chip improvement. The slowdown happens around the same time (maybe 2028-2034) even given a lot more commercial success (if that success precedes the slowdown), because scaling faster takes exponentially more money. So, to the extent that scaling contributes to the probability of transformative advances, there’s more probability density before ~2030 than after.
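To make that arithmetic concrete, here’s a toy calculation. Every constant in it (starting spend, growth rates, the spending ceiling) is an illustrative assumption of mine, not a figure from the comment or from any dataset; the point is only the shape of the curve, not the specific numbers:

```python
# Toy model: compute scaling is funding-driven until spending saturates,
# then falls back to the rate of chip (price-performance) improvement.
# Every constant here is an illustrative assumption, not sourced data.

SPEND_START = 1e9         # assumed frontier training-run spend in 2024, in dollars
SPEND_GROWTH = 3.0        # assumed yearly multiplier on spend while funding scales
SPEND_CEILING = 3e11      # assumed ceiling on yearly spend anyone will fund
CHIP_IMPROVEMENT = 1.35   # assumed yearly gain in compute per dollar

spend, compute = SPEND_START, 1.0
for year in range(2024, 2036):
    funding_growth = SPEND_GROWTH if spend < SPEND_CEILING else 1.0
    print(f"{year}: spend ${spend:.1e}, compute {compute:.1e} (rel. 2024), "
          f"growing {funding_growth * CHIP_IMPROVEMENT:.2f}x/yr")
    spend = min(spend * funding_growth, SPEND_CEILING)
    compute *= funding_growth * CHIP_IMPROVEMENT
```

Under these made-up numbers, compute grows ~4x/year through 2029 and only ~1.35x/year from 2030 on. Raising the ceiling 10x buys only about two extra years of fast scaling, which is the “exponentially more money” point.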
That’s my reason to see 2030 as a meaningful threshold; Thane Ruthenis might be pointing to it for different reasons. In any case, it should be salient for the AGI companies, so a long-timelines argument might want to address their narrative up to 2030 as a distinct case.
I also found that take very unusual, especially when combined with this:
Maybe it will happen in a major AGI lab, maybe in some new startup. By default, everyone will die in <1 year after that.
The last sentence seems extremely overconfident, especially combined with the otherwise bearish conclusions in this post. I’m surprised no one else has mentioned it.
Yeah, I agree. Overall my views on LLMs are pretty close to Thane’s, but his final conclusions don’t seem to follow from the model presented here.