I’ll grant that Ajeya was misrepresented in this post, and I’ll probably either edit or remove the section.
What do you mean by my model after 2029? I currently think we'll probably have superintelligence by 2029. I definitely agree that if I'm wrong about that and AGI is much harder to build than I expect, AI progress around 2030 will be slowing down significantly relative to today's pace.
This isn't a crux for why I believe AI will be safe, but my potential disagreement is that once we reach the human compute and memory regime, I expect further scaling to become more difficult.
I definitely assign some credence to you being right, so I’ll probably edit or remove that section.