https://www.lesswrong.com/posts/sTDfraZab47KiRMmT/views-on-when-agi-comes-and-on-strategy-to-reduce
Thanks for linking. I skimmed the early part of this post because you labelled it explicitly as viewpoints. Then I saw that you engaged with a number of arguments for short timelines, but they are all pretty weak/old ones that I never found very convincing (the one exception is that bio anchors gave me, early on, a ceiling of around 1e40 FLOP for the compute needed to make AGI). Then you got to LLMs and acknowledged:

The existence of today’s LLMs is scary and should somewhat shorten people’s expectations about when AGI comes.
But then you gave a bunch of points about the things LLMs are missing and suck at, which I already agree with.
Aside: Have I mischaracterized your post so far? Please let me know if so.
So, do you think you have arguments against the ‘benchmarks+gaps argument’ for timelines to AI research automation, or arguments for why AI research automation won’t translate into much algorithmic progress? Or against any of the other things I listed as having moved my timelines down:
Fun with +12 OOMs of Compute, IMO a pretty compelling writeup that brought my ‘timelines to AGI’ uncertainty over training-compute FLOP down a bunch, to around 1e35 FLOP (rough arithmetic sketched after this list).
Research into how quickly training compute is scaling.
The benchmarks+gaps argument for timelines to partial AI research automation.
The takeoff forecast for how partial AI research automation will translate to algorithmic progress.
The recent trend in METR’s time horizon data.
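To make the quantitative parts of the first and last items concrete, here is a rough back-of-envelope sketch. The GPT-3 baseline of ~3.14e23 training FLOP, the ~7-month doubling time for METR's 50%-success time horizons, and the ~167-hour working month are my illustrative assumptions, not figures taken from the posts above.

```python
import math

# (a) What "+12 OOMs of compute" means relative to GPT-3's training run.
# Assumes GPT-3's training compute was ~3.14e23 FLOP (Brown et al. 2020).
gpt3_flop = 3.14e23
plus_12_ooms = gpt3_flop * 10**12
print(f"GPT-3 + 12 OOMs ~ {plus_12_ooms:.1e} FLOP")  # ~3.1e35, same ballpark as 1e35

# (b) How a doubling trend in time horizons extrapolates. Assumes the
# 50%-success horizon doubles every ~7 months and starts at ~1 hour.
doubling_months = 7
start_hours = 1
target_hours = 167  # ~one working month (40 h/week * ~4.2 weeks)
months_needed = doubling_months * math.log2(target_hours / start_hours)
print(f"~{months_needed:.0f} months from 1-hour tasks to 1-month tasks")
```

Under those assumptions the two headline numbers hang together: +12 OOMs over GPT-3 lands right around the 1e35 figure, and a 7-month doubling time implies roughly four to five years to go from hour-long to month-long tasks.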
These arguments are so nonsensical that I don’t know how to respond to them without further clarification, and so far the people I’ve talked to about them haven’t provided that clarification. “Programming” is not a type of cognitive activity any more than “moving your left hand in some manner” is. You could try writing out the reasoning, trying to avoid enthymemes, and then I could critique it / ask follow-up questions. Or we could have a conversation that we record and publish.