Sure, but for output quality better than what humans could (ever) do to matter for the relative speed-up, you have to argue about compute bottlenecks, not Amdahl’s law for just the automation itself!
I’m having trouble parsing this sentence… which may not be important – the rest of what you’ve said seems clear, so unless there’s a separate idea here that needs responding to, it’s fine.
It sounds like your actual objection is to the human-only, software-only time from superhuman coder to SAR (you think this would take more than 1.5-10 years).
Or perhaps your objection is that you think there will be a smaller AI R&D multiplier for superhuman coders. (But this isn’t relevant once you hit full automation!)
Agreed that these two statements do a fairly good job of characterizing my objection. I think the discussion is somewhat confused by the term “AI researcher”. Presumably, for an SAR to accelerate R&D by 25x, “AI researcher” needs to cover nearly all human activities that go into AI R&D? And even more so for SIAR/250x. While I’ve never worked at an AI lab, I presume that the full set of activities involved in producing better models is pretty broad, with tails extending into domains pretty far from the subject matter of an ML PhD and sometimes carried out by people whose job titles and career paths bear no resemblance to “AI researcher”. Is that a fair statement?
If “producing better models” (AI R&D) requires more than just narrow “AI research” skills, then either SAR and SIAR need to be defined to cover that broader skill set (in which case, yes, I’d argue that 1.5-10 years is unreasonably short for unaccelerated SC->SAR), or, if we stick with narrower definitions for SAR and SIAR, then, yes, I’d argue for smaller multipliers.
You said “This is valid for activities which benefit from speed and scale. But when output quality is paramount, speed and scale may not always provide much help?”. But for activities that aren’t bottlenecked on the environment, to achieve 10x acceleration you just need 10x more speed at the same level of capability. In order for quality to be a crux for a relative speed-up, there needs to be some environmental constraint (like you can only run 1 experiment).
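(To make the arithmetic concrete, here’s a minimal sketch of this point. The function name and all of the fractions below are made up for illustration, not anyone’s actual estimates: absent any environmental bottleneck, 10x serial speed at the same capability level is just a 10x speedup, while a fixed-duration step caps the gain Amdahl-style.)

```python
# Illustrative only: hypothetical numbers, not actual forecasts.

def overall_speedup(frac_bottlenecked: float, serial_speedup: float) -> float:
    """Amdahl-style speedup: time spent on environment-bottlenecked work
    (e.g. waiting on a training run you can only launch once) is fixed at
    human pace; everything else runs `serial_speedup` times faster."""
    return 1.0 / (frac_bottlenecked + (1.0 - frac_bottlenecked) / serial_speedup)

# No environmental constraint: 10x speed at the same capability level -> 10x.
print(overall_speedup(0.0, 10.0))   # 10.0

# If (hypothetically) 30% of wall-clock time is a fixed experiment,
# even unbounded speed on everything else gives at most 1/0.3 ~= 3.3x.
print(overall_speedup(0.3, 10.0))   # ~2.7
print(overall_speedup(0.3, 1e9))    # ~3.3
```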
Is that a fair statement?
Yep, my sense is that an SAR has to[1] be better than humans at basically everything except vision.
(Given this, I currently expect that SAR comes at basically the same time as “superhuman blind remote worker”, at least when putting aside niche expertise which you can’t learn without a bunch of interaction with humans or the environment. I don’t currently have a strong view on the difficulty of matching human visual abilities, particularly at video processing, but I wouldn’t be super surprised if video processing ultimately turns out to be harder than basically everything else.)
If “producing better models” (AI R&D) requires more than just narrow “AI research” skills, then either SAR and SIAR need to be defined to cover that broader skill set (in which case, yes, I’d argue that 1.5-10 years is unreasonably short for unaccelerated SC->SAR),
It is defined to cover the broader set? It says “An AI system that can do the job of the best human AI researcher.” (Presumably this is implicitly “any of the best AI researchers”, who presumably need to learn misc skills as part of their jobs, etc.) Notably, Superintelligent AI researcher (SIAR) happens after “superhuman remote worker”, which requires being able to automate any work a remote worker could do.
I’m guessing your crux is that the time is too short?
“Has to” is maybe a bit strong; I think I probably should have said “will probably end up needing to be competitive with the best human experts at basically everything (other than vision) and better at more central AI R&D, given the realistic capability profile”. I think I generally expect full automation to hit everywhere at around the same time, putting aside vision and physical tasks.
We now have several branches going; I’m going to consolidate most of my response in just one branch since they’re converging on similar questions anyway. Here, I’ll just address this:
But for activities that aren’t bottlenecked on the environment, to achieve 10x acceleration you just need 10x more speed at the same level of capability.
I’m imagining that, at some intermediate stages of development, there will be skills for which AI does not even match human capability (for the relevant humans), and its outputs are of unusably low quality.
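(In terms of the earlier sketch: a skill where AI output is of unusably low quality behaves just like environment-bottlenecked time, since that slice of the work stays at human pace no matter how fast the AI runs. With the same made-up function, `overall_speedup(0.2, 1e9)` ≈ 5x would be the hard cap if, hypothetically, 20% of the relevant activities were such skills.)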