I’ll try to summarize your point, as I understand it:
Intelligence is just one of many components. If you get huge amounts of intelligence, at that point you will be bottlenecked by something else, and even more intelligence will not help you significantly. (Company R&D doesn’t bring a “research explosion”.)
The core idea I’m trying to propose (but seem to have communicated poorly) is that the AI self-improvement feedback loop might (at some point) converge, rather than diverging. In very crude terms, suppose that GPT-8 has IQ 180, and we use ten million instances of it to design GPT-9, then perhaps we get a system with IQ 190. Then we use ten million instances of GPT-9 to design GPT-10, perhaps that has IQ 195, and eventually GPT-∞ converges at IQ 200.
I do not claim this is inevitable, merely that it seems possible, or at any rate is not ruled out by any mathematical principle. It comes down to an empirical question of how much incremental R&D effort is needed to achieve each incremental increase in AI capability.
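The converging scenario above can be made concrete with a toy model. This is purely an illustrative sketch, not a prediction: assume each design generation adds a capability gain that shrinks by a constant ratio r < 1, so the series converges to a finite limit, matching the hypothetical IQ 180 → 190 → 195 → ... → 200 trajectory.

```python
# Toy model of a *converging* self-improvement loop (geometric series).
# All numbers are hypothetical illustrations, matching the IQ example above.

def capability_trajectory(start=180.0, first_gain=10.0, r=0.5, generations=30):
    """Return capability after each design generation, assuming each
    generation's gain is r times the previous generation's gain."""
    levels = [start]
    gain = first_gain
    for _ in range(generations):
        levels.append(levels[-1] + gain)
        gain *= r
    return levels

levels = capability_trajectory()
# Closed form of the limit: start + first_gain / (1 - r) = 180 + 10 / 0.5 = 200
print(levels[1], levels[2], round(levels[-1], 3))  # 190.0 195.0 200.0
```

With r < 1 the loop converges no matter how many generations run; with r ≥ 1 it would diverge. Which regime real AI R&D sits in is exactly the empirical question about incremental effort vs. incremental capability.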
The point about the possibility of bottlenecks other than intelligence feeds into that question about R&D effort vs. increase in capability: if we double R&D effort but are bottlenecked on, say, training data, then we might get a disappointing increase in capability.
IIUC, much of the argument you’re making here is that the existing dynamic of IP laws, employee churn, etc. puts a limit on the amount of R&D investment that any given company is willing to make, and that these incentives might soon shift in a way that could unleash a drastic increase in AI R&D spending? That seems plausible, but I don’t see how it ultimately changes the slope of the feedback loop – it merely allows for a boost up the early part of the curve?
Also, please note that LLMs are just one possible paradigm of AI. Yes, currently the best one, but who knows what tomorrow may bring. I think most AI doomers would agree that LLMs are not the kind of AI they fear. LLMs succeed by piggybacking on humanity’s written output, but they are also bottlenecked by it.
Agreed that there’s a very good chance that AGI may not look all that much like an LLM. And so when we contemplate the outcome of recursive self-improvement, a key question will be what the R&D vs. increase-in-capability curve looks like for whatever architecture emerges.
I agree that the AI cannot improve literally forever. At some point it will hit a limit, even if that limit is simply that it is already near perfect, so there is nothing left to improve, or the tiny remaining improvements would not be worth their cost in resources. So, an S-curve it is, in the long term.
But for practical purposes, the bottom part of the S-curve looks similar to the exponential function. So if we happen to be near that bottom, it doesn’t matter that the AI will hit some fundamental limit on self-improvement around 2200 AD, if it already successfully wiped out humanity in 2045.
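The claim that the bottom of an S-curve looks like an exponential can be checked numerically. A brief sketch with arbitrary parameters: a logistic curve with ceiling K stays within a few percent of a pure exponential while the value is still far below K, then departs sharply near saturation.

```python
# Illustrative only: compare a logistic (S-) curve to a pure exponential.
# Parameters (K, k) are arbitrary, chosen just to show the early agreement.
import math

K = 1000.0  # eventual ceiling (top of the S-curve)
k = 1.0     # growth rate

def logistic(t):
    # Logistic curve normalized so that logistic(0) = 1
    return K / (1.0 + (K - 1.0) * math.exp(-k * t))

def exponential(t):
    return math.exp(k * t)  # pure exponential, also 1 at t = 0

for t in range(0, 11, 2):
    rel_gap = abs(logistic(t) - exponential(t)) / exponential(t)
    print(t, round(rel_gap, 3))  # gap is tiny early, huge near the ceiling
```

Early observations therefore cannot distinguish "exponential forever" from "exponential that later saturates", which is why the practically important question is where on the curve we currently sit.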
So the question is which part of the S-curve we are on now, and whether the AI explosion hits diminishing returns soon enough, i.e. before the things AI doomers are afraid of could happen. If diminishing returns arrive only later, that is small consolation.