Does he not believe in AGI and Superintelligence at all? Why not just say that?
AI could cure all diseases and “solve energy”. He mentions “radical abundance” as a possibility as well, but beyond the R&D channel.
This is clearly about Superintelligence, and the mechanism through which it would happen in that scenario is straightforward and often discussed. If he disagrees, then either he doesn’t believe in AGI (or at least advanced AGI), or he believes that solving energy and curing disease are not that valuable. Or is he purposefully talking about a pre-AGI scenario while arguing against post-AGI views? The quote certainly suggests the first of these; it’s just hard to tell whether that is due to bad reasoning or a deliberate attempt to promote his start-up.
Does he not believe in AGI and Superintelligence at all? Why not just say that?
As one of the authors, I’ll answer for myself. Unfortunately, I’m not sure precisely what these terms mean, so I’ll answer a different question instead. If your question is whether I believe that AIs will eventually match or surpass human performance, either collectively or individually, across the full range of tasks that humans are capable of performing, then my answer is yes. I do believe that, in the long run, AI systems will reach or exceed human-level performance across virtually all domains of ability.
However, I fail to see how this belief directly supports the argument you are making in your comment. Even if we accept that AIs will eventually be highly competent across essentially all important tasks, that fact alone does not straightforwardly imply that our core thesis—that the main value from AI will come from broad automation rather than the automation of R&D—is incorrect.
Do you truly not believe that, for your own life (to use the examples there), solving aging, curing all disease, and solving energy would be even more valuable? To you? Perhaps you don’t believe those are possible, but then that’s where the whole disagreement lies.
And if you are talking about Superintelligent AGI and automation, why even talk about output per person? I thought you at least believed people would be automated out and thus decoupled?
It’s important to be precise about the specific claim we’re discussing here.
The claim that R&D is less valuable than broad automation is not equivalent to the claim that technological progress itself is less important than other forms of value. This is because technological progress is sustained not just by explicit R&D but by large-scale economic forces that complement the R&D process, such as general infrastructure, tools, and complementary labor used to support the invention, implementation, and deployment of various technologies. These complementary factors make it possible to both run experiments that enable the development of technologies and diffuse these technologies widely after they are developed in a laboratory environment—providing straightforwardly large value.
To provide a specific operationalization of our thesis, we can examine the elasticity of economic output with respect to different inputs, that is, how much output increases in proportional terms when a particular input is scaled. The thesis here is that automating R&D alone would, by itself, raise output by significantly less than automating labor broadly (separately from R&D). This is effectively what we mean when we say R&D has “less value” than broad automation.
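To make that comparison concrete, here is a minimal toy calculation in the spirit of the elasticity framing above. It assumes a Cobb-Douglas production function and an illustrative elasticity of productivity with respect to R&D input; none of the functional forms or parameter values come from our essay, they are placeholders chosen only to show what an output-elasticity comparison looks like.

```python
# Toy Cobb-Douglas illustration of the elasticity comparison above.
# All functional forms and parameter values are illustrative assumptions,
# not estimates from the essay under discussion.

def output(tfp, labor, capital, labor_share=0.6):
    """Cobb-Douglas production: Y = A * L^a * K^(1-a)."""
    return tfp * labor**labor_share * capital**(1 - labor_share)

# Baseline economy (arbitrary units).
A, L, K = 1.0, 100.0, 100.0
baseline = output(A, L, K)

# Scenario 1: automating R&D alone. Suppose R&D's only effect is on TFP,
# with an assumed elasticity of TFP w.r.t. R&D input of 0.1, so doubling
# effective R&D multiplies A by 2**0.1.
rd_elasticity = 0.1  # assumed, for illustration only
scenario_rd = output(A * 2**rd_elasticity, L, K)

# Scenario 2: automating labor broadly, doubling effective labor input.
scenario_labor = output(A, 2 * L, K)

print(f"R&D automation:   +{scenario_rd / baseline - 1:.1%} output")
print(f"Broad automation: +{scenario_labor / baseline - 1:.1%} output")
# With these assumed numbers, doubling R&D raises output by about 7%,
# while doubling all labor raises it by about 52% -- the sense in which
# broad automation has the larger output elasticity.
```

Under these assumed numbers broad automation dominates, but the real disagreement is over the empirical magnitudes, which a toy model like this does not settle.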