So to be clear, I am not suggesting that a foom is impossible. The title of the post contains the phrase “might never happen”.
I guess you might reasonably argue that, from the perspective of (say) a person living 20,000 years ago, modern life does in fact sit on the far side of a singularity. When I see the word ‘singularity’, I think of the classic Peace War usage of technology spiraling to effectively infinity, or at least far beyond present-day technology. I suppose that led me to be a bit sloppy in my use of the term.
The point I was trying to make by referencing those various historical events is that all of the feedback loops in question petered out short of a Vingian singularity. And it’s a fair correction that some of those loops are actually still in play. But many are not – forest fires burn out, the Cambrian explosion stopped exploding – so we do have existence proofs that feedback loops can come to a halt. I know that’s not any big revelation, I was merely attempting to bring the concept to mind in the context of RSI.
In any case, all I’m really trying to do is to argue that the following syllogism is invalid:
1. As AI approaches human level, it will be able to contribute to AI R&D, thus increasing the pace of AI improvement.
2. This process can be repeated indefinitely.
3. Therefore, as soon as AI is able to meaningfully contribute to its own development, we will quickly spiral to a Vingian singularity.
This scenario is certainly plausible, but I frequently see it treated as a mathematical certainty. And that is simply not the case. The improvement cycle will only exhibit a rapid upward spiral under certain assumptions regarding the relationship of R&D inputs to gains in AI capability – the r term in Davidson’s model.
(Then I spend some time explaining why I think r might be lower than expected during the period where AI is passing through human level. Again, “might be”.)
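To make the dependence on r concrete, here is a toy simulation. To be clear, this is my own illustrative sketch, not Davidson's actual model — the update rule, constants, and cap are assumptions chosen only to show the qualitative regimes:

```python
# Toy feedback loop: capability C feeds back into R&D, and the parameter r
# controls the returns to that R&D. Each step, capability gains k * C**r.
def simulate(r, k=0.1, steps=50, cap=1e6):
    C = 1.0  # starting capability (arbitrary units)
    for _ in range(steps):
        C = min(C + k * C**r, cap)  # cap just keeps the numbers finite
    return C

# r > 1: each gain buys more than proportionate future gains -> rapid spiral
# r < 1: each gain buys less than before -> the loop peters out
print(simulate(1.5))  # hits the cap within the simulated window
print(simulate(0.5))  # crawls along; no spiral
```

The point of the sketch is just that the same "AI helps improve AI" loop produces wildly different trajectories depending on r: nothing about the loop's existence forces the explosive regime.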