Yeah, sorry I didn’t mean to argue that Amdahl’s Law and Hofstadter’s Law are irrelevant, or that things are unlikely to go slowly.
I see a big chance that it takes a long time, and that I end up saying you were right and I was wrong.
However, if you’re talking about “contemplating the capabilities of something that is not a full ASI. Today’s models have extremely jagged capabilities, with lots of holes, and (I would argue) they aren’t anywhere near exhibiting sophisticated high-level planning skills able to route around their own limitations.”
That seems to apply to the 2027 “Superhuman coder” with 5x speedup, not the “Superhuman AI researcher” with 25x speedup or “Superintelligent AI researcher” with 250x.
I think “routing around one’s own limitations” isn’t necessarily that sophisticated. Even blind evolution does it, by trying something else when one thing fails.
As long as the AIs are “smart enough,” even if they aren’t that superhuman, they have the potential to think many times faster than a human, with a “population” many times larger than the pool of human AI researchers. They can generate far more testable ideas and test them all.
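To make that concrete with purely illustrative numbers and symbols (none of these are from the scenario): if each AI researcher thinks $s$ times faster than a human and there are $m$ times as many of them, raw idea-generation throughput scales roughly as $s \cdot m$, while Amdahl’s Law says any serial bottleneck, e.g. waiting on large training runs, still caps the end-to-end speedup:

$$
\text{throughput} \approx s \cdot m, \qquad \text{overall speedup} \le \frac{1}{(1-p) + \frac{p}{s \cdot m}} \;\xrightarrow{\; s \cdot m \to \infty \;}\; \frac{1}{1-p}
$$

So with, say, a serial fraction of $1-p = 0.2$, the ceiling is 5x no matter how large $s \cdot m$ gets; how big that serial fraction actually is seems like the crux.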
Maybe I’m missing the point, but it’s possible that we simply disagree on whether the point exists. You believe that merely discovering technologies and improving algorithms isn’t sufficient to build ASI, while I believe there is a big chance that doing that alone will be sufficient. After discovering new technologies from training smaller models, they may still need one or two large training runs to implement it all.
I’m not arguing that you don’t have good insights :)