You said “This is valid for activities which benefit from speed and scale. But when output quality is paramount, speed and scale may not always provide much help?”. But, when considering activities that aren’t bottlenecked on the environment, to achieve a 10x acceleration you just need 10x more speed at the same level of capability. For quality to be a crux for a relative speedup, there needs to be some environmental constraint (like only being able to run 1 experiment).
Is that a fair statement?
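(The claim above can be sketched with Amdahl's-law-style arithmetic. The function and fractions below are hypothetical illustrations, not anything from this thread: if no part of the work is bottlenecked on the environment, 10x serial speed gives 10x acceleration; if some fraction is, the achievable speedup is capped regardless of speed.)

```python
def overall_speedup(speed_multiplier: float, env_fraction: float) -> float:
    """Total acceleration when a fraction `env_fraction` of the work is
    bottlenecked on the environment (can't be sped up), and the rest
    runs `speed_multiplier` times faster at the same capability level."""
    accelerated_time = (1 - env_fraction) / speed_multiplier
    return 1 / (env_fraction + accelerated_time)

# No environmental bottleneck: 10x speed -> 10x acceleration.
print(overall_speedup(10, 0.0))  # -> 10.0

# If half the work waits on experiments, 10x speed yields under 2x,
# which is where output quality (better use of each experiment) becomes a crux.
print(overall_speedup(10, 0.5))
```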
Yep, my sense is that an SAR has to[1] be better than humans at basically everything except vision.
(Given this, I currently expect that SAR comes at basically the same time as “superhuman blind remote worker”, at least when putting aside niche expertise which you can’t learn without a bunch of interaction with humans or the environment. I don’t currently have a strong view on the difficulty of matching human visual abilities, particularly at video processing, but I wouldn’t be super surprised if video processing ultimately turns out to be harder than basically everything else.)
If “producing better models” (AI R&D) requires more than just narrow “AI research” skills, then either SAR and SIAR need to be defined to cover that broader skill set (in which case, yes, I’d argue that 1.5-10 years is unreasonably short for unaccelerated SC->SAR),
It is defined to cover the broader set? It says “An AI system that can do the job of the best human AI researcher.” (Presumably this implicitly means “any of the best AI researchers”, who presumably need to learn misc skills as part of their jobs, etc.) Notably, Superintelligent AI researcher (SIAR) happens after “superhuman remote worker”, which requires being able to automate any work a remote worker could do.
I’m guessing your crux is that the time is too short?
“Has to” is maybe a bit strong, I think I probably should have said “will probably end up needing to be competitive with the best human experts at basically everything (other than vision) and better at more central AI R&D, given the realistic capability profile”. I generally expect full automation to hit everywhere at around the same time, putting aside vision and physical tasks.
We now have several branches going, so I’m going to consolidate most of my response in just one branch since they’re converging onto similar questions anyway. Here, I’ll just address this:
But, when considering activities that aren’t bottlenecked on the environment, to achieve a 10x acceleration you just need 10x more speed at the same level of capability.
I’m imagining that, at some intermediate stages of development, there will be skills for which AI does not even match human capability (for the relevant humans), and its outputs are of unusably low quality.